.. _Path Planning: https://cmr.red/planning-docs
.. _Controls: https://cmr.red/controls-docs
.. _Software Architecture: https://cmr.red/software-arch-docs

========================================
Introduction
========================================

Welcome to the Perceptions documentation for Carnegie Mellon Racing (CMR) Driverless! CMR is a student organization at Carnegie Mellon University that designs and builds electric and autonomous formula-style racecars to compete in Formula Student Racing competitions each year.

Perceptions is the subsystem of the car that ingests and interprets sensor data from the environment. This interpretation enables the `Path Planning`_ and `Controls`_ modules to make the high-level decisions that control the car.

This documentation provides an introduction and high-level conceptual overview of the perceptions pipeline for our 2024-25 racecar, 24a. For more detail, including source code, please visit our `GitHub repository `_.

========================================
The FSAE Driverless Perceptions Problem
========================================

Given a track delimited by yellow and blue cones, we must reliably and efficiently detect the cones that define the track and pass them down the driverless pipeline. Below is a diagram of one of the main FSAE Driverless events, Trackdrive, in which teams autonomously drive 10 laps of a previously unknown course.

.. figure:: source/index/trackdrive_diagram.png
   :width: 600
   :align: center

   *Figure taken from the 2025 Formula Student Germany Competition Handbook.*

=======
Sensors
=======

.. figure:: source/index/sensors.png
   :width: 600
   :align: center

   *24a at track testing.*

A variety of sensors are employed to accomplish this task:

- `HESAI AT128 Hybrid Solid State LiDAR `_
- `ZED2 `_ / `ZED1 `_ Stereo Cameras (only used for RGB frames)
- `Movella MTi-680G RTK GNSS/INS GPS `_

Together, these three sensors efficiently generate a local view of the track and cones.

.. figure:: source/index/perc_algo_diagram.svg
   :width: 600
   :align: center

   *Overall pipeline diagram.*

What data do we work with and where does it go?
===============================================

LiDAR Module
------------

Our single LiDAR provides our primary source of depth information via `point clouds `_, which are discrete sets of points in space. Several processing algorithms (see :doc:`lidar explainers `) are applied in sequence, eventually yielding a set of points that represent the centroids of the cones on the track ahead of us.

Coloring Module
---------------

Our dual RGB cameras provide our primary source of color information. Cameras are used for RGB image processing (see :doc:`coloring explainers `) to classify cones by color. Classified cones are then passed down the pipeline to `Path Planning`_ and `Controls`_.
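
To make the two modules concrete, here is a minimal sketch of how cone centroids might be extracted from a point cloud and then colored from a camera frame. This is an illustration, not our implementation: the function names, the flat-ground height threshold, the DBSCAN clustering step, the RGB channel heuristic, and the assumption that centroids are already expressed in the camera frame are all simplifications made for this example. The explainers linked above describe the algorithms we actually use.

.. code-block:: python

   import numpy as np
   from sklearn.cluster import DBSCAN


   def cone_centroids(points, z_thresh=0.05, eps=0.3, min_samples=5):
       """Cluster an (N, 3) LiDAR point cloud into cone centroids.

       Assumes roughly flat ground: points below ``z_thresh`` meters
       are discarded, and the rest are clustered in the ground
       (x, y) plane.
       """
       above_ground = points[points[:, 2] > z_thresh]
       labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
           above_ground[:, :2])
       # Label -1 marks noise; every other label is one candidate cone.
       return np.array([above_ground[labels == k].mean(axis=0)
                        for k in set(labels) if k != -1])


   def classify_cone(centroid, image, K):
       """Label a cone centroid "yellow" or "blue" from an RGB image.

       Assumes the centroid is already in the camera frame (x right,
       y down, z forward) and ``K`` is the 3x3 intrinsic matrix; a real
       pipeline also needs the LiDAR-to-camera extrinsic transform and
       bounds checking, both omitted here.
       """
       x, y, z = centroid
       u = int(K[0, 0] * x / z + K[0, 2])  # pixel column
       v = int(K[1, 1] * y / z + K[1, 2])  # pixel row
       patch = image[max(v - 5, 0):v + 5, max(u - 5, 0):u + 5]
       r, g, b = patch.reshape(-1, 3).mean(axis=0)
       # Yellow is bright in red and green; blue dominates otherwise.
       return "yellow" if (r + g) / 2 > b else "blue"

Even at this level of simplification, the pipeline diagram above is visible in the code: the LiDAR alone produces the geometry, and the cameras only add color afterward.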

=========
Directory
=========

Check out the links below to learn more about our sensor stack and our main algorithms!

.. toctree::
   :maxdepth: 2

   source/explainers/explainers
   source/implementation/api_reference

.. toctree::
   :hidden:

   source/more_cmr_driverless

.. _more_cmr_driverless:

=====================
More CMR Driverless
=====================

.. _CMR Driverless Landing page: https://cmr.red/driverless-docs

Back to the `CMR Driverless Landing page`_.

.. _Software Architecture Pipeline: https://cmr.red/swarch-docs

Learn more about the `Software Architecture Pipeline`_.

.. _Path Planning Pipeline: https://cmr.red/planning-docs

Learn more about the `Path Planning Pipeline`_.

.. _Controls Pipeline: https://cmr.red/controls-docs

Learn more about the `Controls Pipeline`_.

.. _DV Mechanical Pipeline: https://cmr.red/mech-docs

Learn more about the `DV Mechanical Pipeline`_.

.. _DV Electrical Pipeline: https://cmr.red/elec-docs

Learn more about the `DV Electrical Pipeline`_.