Perceptions Coloring Module
-
namespace point_to_pixel
Typedefs
Enums
-
enum ConeClass
Cone color classification. Values: ORANGE == 0; YELLOW == 1; BLUE == 2; UNKNOWN == -1.
Functions
-
Cone find_closest_cone(const Cones &cones)
Finds the cone closest to the first cone in the vector.
- Parameters:
cones – Vector of cones
- Returns:
Cone Closest cone
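A minimal sketch of the search, assuming Cone exposes x and y members (the real layout lives in cones.hpp):

    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Cone { double x, y; };      // hypothetical minimal fields
    using Cones = std::vector<Cone>;   // assumed alias

    // Compare every other cone against the first by squared distance.
    Cone find_closest_cone(const Cones &cones) {
        const Cone &ref = cones.front();
        Cone closest = ref;
        double best = std::numeric_limits<double>::max();
        for (std::size_t i = 1; i < cones.size(); ++i) {
            const double dx = cones[i].x - ref.x;
            const double dy = cones[i].y - ref.y;
            const double d2 = dx * dx + dy * dy;
            if (d2 < best) { best = d2; closest = cones[i]; }
        }
        return closest;
    }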
-
double calculate_angle(const Cone &from, const Cone &to)
Calculates the angle between two cones.
- Parameters:
from – First cone
to – Second cone
- Returns:
double Angle in radians
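The computation is presumably a single atan2 over the displacement; a sketch under the same hypothetical Cone fields:

    #include <cmath>

    struct Cone { double x, y; };  // hypothetical minimal fields

    // Heading from one cone to the other, in radians.
    double calculate_angle(const Cone &from, const Cone &to) {
        return std::atan2(to.y - from.y, to.x - from.x);
    }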
-
Cones order_cones(const Cones &unordered_cones, double max_distance_threshold)
Orders cones by their path direction.
- Parameters:
unordered_cones – Vector of unordered cones
max_distance_threshold – Maximum allowed distance between consecutive cones; ordering clips the detected cones once the nearest remaining cone exceeds this distance
- Returns:
Cones Vector of ordered cones
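One plausible reading of this is a greedy nearest-neighbor walk that stops once the nearest remaining cone exceeds the threshold; a sketch (same hypothetical Cone fields):

    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Cone { double x, y; };
    using Cones = std::vector<Cone>;

    Cones order_cones(const Cones &unordered_cones, double max_distance_threshold) {
        Cones remaining = unordered_cones;
        Cones ordered;
        if (remaining.empty()) return ordered;
        ordered.push_back(remaining.front());
        remaining.erase(remaining.begin());
        while (!remaining.empty()) {
            std::size_t best_i = 0;
            double best_d = std::numeric_limits<double>::max();
            for (std::size_t i = 0; i < remaining.size(); ++i) {
                const double d = std::hypot(remaining[i].x - ordered.back().x,
                                            remaining[i].y - ordered.back().y);
                if (d < best_d) { best_d = d; best_i = i; }
            }
            if (best_d > max_distance_threshold) break;  // clip distant cones
            ordered.push_back(remaining[best_i]);
            remaining.erase(remaining.begin() + best_i);
        }
        return ordered;
    }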
-
std::pair<std::vector<std::vector<double>>, std::vector<double>> cones_to_xy(const TrackBounds &track_bounds)
Converts TrackBounds, a struct containing a yellow cone vector and a blue cone vector, to XY training data.
- Parameters:
track_bounds – TrackBounds struct
- Returns:
std::pair<std::vector<std::vector<double>>, std::vector<double>> Feature matrix and label vector
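The conversion is a straightforward flattening; a sketch assuming TrackBounds holds yellow and blue vectors (the field names and the +1/-1 labeling are guesses):

    #include <utility>
    #include <vector>

    struct Cone { double x, y; };
    using Cones = std::vector<Cone>;
    struct TrackBounds { Cones yellow, blue; };  // hypothetical field names

    std::pair<std::vector<std::vector<double>>, std::vector<double>>
    cones_to_xy(const TrackBounds &track_bounds) {
        std::vector<std::vector<double>> X;  // feature matrix: one (x, y) row per cone
        std::vector<double> y;               // labels: +1 yellow, -1 blue
        for (const Cone &c : track_bounds.yellow) { X.push_back({c.x, c.y}); y.push_back(1.0); }
        for (const Cone &c : track_bounds.blue)   { X.push_back({c.x, c.y}); y.push_back(-1.0); }
        return {X, y};
    }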
-
void supplement_cones(TrackBounds &track_bounds)
Adds dummy cones to the sides of the car: blue on the left and yellow on the right.
- Parameters:
track_bounds – TrackBounds struct
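The dummy cones anchor the classifier at the car itself; a sketch assuming the car sits at the local-frame origin with +y to its left (both the axis convention and the 1 m offset are assumptions):

    #include <vector>

    struct Cone { double x, y; };
    struct TrackBounds { std::vector<Cone> yellow, blue; };  // hypothetical

    void supplement_cones(TrackBounds &track_bounds) {
        track_bounds.blue.push_back({0.0, 1.0});     // blue just left of the car
        track_bounds.yellow.push_back({0.0, -1.0});  // yellow just right of the car
    }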
-
struct Cone
- #include <cones.hpp>
-
struct TrackBounds
- #include <cones.hpp>
-
class GeneralPredictor
- #include <general_predictor.hpp>
Subclassed by point_to_pixel::HSVPredictor, point_to_pixel::YoloPredictor
-
class HSVPredictor : public point_to_pixel::GeneralPredictor
- #include <hsv_predictor.hpp>
Public Functions
-
inline HSVPredictor(const cv::Scalar &yellow_filter_low, const cv::Scalar &yellow_filter_high, const cv::Scalar &blue_filter_low, const cv::Scalar &blue_filter_high, const cv::Scalar &orange_filter_low, const cv::Scalar &orange_filter_high)
Constructor.
- Parameters:
yellow_filter_low – min clipping val for yellow filter
yellow_filter_high – max clipping val for yellow filter
blue_filter_low – min clipping val for blue filter
blue_filter_high – max clipping val for blue filter
orange_filter_low – min clipping val for orange filter
orange_filter_high – max clipping val for orange filter
- Returns:
HSVPredictor object
-
virtual ConeClass predict_color(std::pair<Eigen::Vector3d, Eigen::Vector3d> pixel_pair, std::pair<cv::Mat, cv::Mat> frame_pair, double confidence_threshold) override
Applies the HSV filters to the images and classifies the pixel pair based on the HSV colors of the surrounding pixels.
- Parameters:
pixel_pair – pair of two pixels (left and right image) representing a cone centroid transformed to image space.
frame_pair – pair of image frames
confidence_threshold – minimum confidence threshold for classification
- Returns:
ConeClass of the pixel_pair representing a cone centroid. ORANGE == 0; YELLOW == 1; BLUE == 2; UNKNOWN == -1
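The core of the per-image patch vote can be sketched with standard OpenCV calls (the window size, the BGR input assumption, and the helper name are illustrative only):

    #include <opencv2/opencv.hpp>

    // Count mask hits for each filter in a small window around the pixel
    // and return the winning class (YELLOW == 1, BLUE == 2, UNKNOWN == -1).
    int classify_patch(const cv::Mat &frame_bgr, cv::Point px,
                       const cv::Scalar &yellow_lo, const cv::Scalar &yellow_hi,
                       const cv::Scalar &blue_lo, const cv::Scalar &blue_hi) {
        cv::Rect roi(px.x - 5, px.y - 5, 11, 11);
        roi &= cv::Rect(0, 0, frame_bgr.cols, frame_bgr.rows);  // clamp to image
        if (roi.empty()) return -1;
        cv::Mat hsv;
        cv::cvtColor(frame_bgr(roi), hsv, cv::COLOR_BGR2HSV);
        cv::Mat yellow_mask, blue_mask;
        cv::inRange(hsv, yellow_lo, yellow_hi, yellow_mask);
        cv::inRange(hsv, blue_lo, blue_hi, blue_mask);
        const int y_hits = cv::countNonZero(yellow_mask);
        const int b_hits = cv::countNonZero(blue_mask);
        if (y_hits == 0 && b_hits == 0) return -1;
        return y_hits >= b_hits ? 1 : 2;
    }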
-
class YoloPredictor : public point_to_pixel::GeneralPredictor
- #include <yolo_predictor.hpp>
Public Functions
-
inline YoloPredictor(const std::string &model_path)
Constructor.
- Parameters:
model_path – string path to .onnx file
- Returns:
YoloPredictor object
-
virtual ConeClass predict_color(std::pair<Eigen::Vector3d, Eigen::Vector3d> pixel_pair, std::pair<cv::Mat, cv::Mat> frame_pair, double confidence_threshold) override
Applies the YOLO model to the images and classifies pixels based on the bounding-box classes they fall in. If a pixel falls in multiple bounding boxes, a heuristic compares pixel depth with bounding-box area: the smaller the bounding box, the greater the depth of the cone it corresponds to.
- Parameters:
pixel_pair – pair of two pixels (left and right image) representing a cone centroid transformed to image space.
frame_pair – pair of image frames
confidence_threshold – YOLO min confidence threshold for classification
- Returns:
ConeClass of the pixel_pair representing a cone centroid. ORANGE == 0; YELLOW == 1; BLUE == 2; UNKNOWN == -1
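The overlap heuristic can be sketched as follows; the Detection record, the depth proxy, and its scale constant are all assumptions, not the module's actual implementation:

    #include <cmath>
    #include <limits>
    #include <vector>
    #include <opencv2/core.hpp>

    struct Detection { int cls; float conf; cv::Rect box; };  // hypothetical

    // Among boxes containing the pixel, pick the one whose area-implied
    // depth best matches the pixel's LiDAR depth (smaller box => deeper cone).
    int resolve_overlaps(const std::vector<Detection> &dets, cv::Point px,
                         double pixel_depth, double confidence_threshold) {
        int best_cls = -1;  // UNKNOWN
        double best_err = std::numeric_limits<double>::max();
        for (const Detection &d : dets) {
            if (d.conf < confidence_threshold || !d.box.contains(px)) continue;
            const double implied_depth =
                100.0 / std::sqrt(static_cast<double>(d.box.area()));  // placeholder scale
            const double err = std::abs(implied_depth - pixel_depth);
            if (err < best_err) { best_err = err; best_cls = d.cls; }
        }
        return best_cls;
    }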
-
void process_frame(const cv::Mat &frame, bool is_left_frame)
Applies the YOLO model to an image.
- Parameters:
frame – camera frame
is_left_frame – boolean flag that decides which instance variable to save YOLO output to
-
void draw_bounding_boxes(cv::Mat &frame, cv::Mat &canvas, std::vector<cv::Mat> detections, int rows, int cols, double confidence_threshold)
Uses OpenCV to draw filled bounding boxes on the frames for visualization.
- Parameters:
frame – camera frame
canvas – canvas frame
detections – output from YOLO model
rows – rows of detection matrix to parse detections
cols – cols of detection matrix to parse detections
confidence_threshold – YOLO min confidence threshold for classification
-
class CameraManager
- #include <camera_manager.hpp>
Public Functions
-
StampedFrame find_closest_frame(const rclcpp::Time &callbackTime, const rclcpp::Logger &logger)
Finds the closest frame to a callback time from the image deque.
- Parameters:
callbackTime – The time to find a matching frame for
logger – ROS logger for error reporting
- Returns:
camera_manager::StampedFrame The closest frame with timestamp
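A linear scan over the deque is enough for the lookup; a sketch with a hypothetical StampedFrame layout (the real one is in camera_manager.hpp):

    #include <cstdint>
    #include <cstdlib>
    #include <deque>
    #include <opencv2/core.hpp>

    struct StampedFrame { int64_t stamp_ns; cv::Mat frame; };  // hypothetical

    // Return the frame whose timestamp is nearest the callback time (ns).
    StampedFrame find_closest_frame(const std::deque<StampedFrame> &frames,
                                    int64_t callback_ns) {
        StampedFrame best = frames.front();
        int64_t best_diff = std::abs(best.stamp_ns - callback_ns);
        for (const StampedFrame &f : frames) {
            const int64_t diff = std::abs(f.stamp_ns - callback_ns);
            if (diff < best_diff) { best_diff = diff; best = f; }
        }
        return best;
    }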
-
bool initialize_camera(const rclcpp::Logger &logger)
Dynamically assigns camera IDs and corrects the initial ID assignment. Also initializes the ZED camera with intrinsic rectification matrices.
- Parameters:
logger – ROS logger for status messages
- Returns:
bool Success status
-
StampedFrame capture_and_rectify_frame(const rclcpp::Logger &logger, bool is_left_camera, bool use_inner_lens)
Captures and rectifies a frame from a ZED camera.
- Parameters:
logger – ROS logger for status messages
is_left_camera – Set to true when capturing from the left-side ZED
use_inner_lens – Set to true when capturing through the inner lenses
- Returns:
camera_manager::StampedFrame The timestamp and rectified frame
-
void update_deque(StampedFrame new_frame)
Updates the deque with a new frame.
- Parameters:
new_frame – The new frame to add to the deque
-
class StateManager
- #include <state_manager.hpp>
Public Functions
-
std::pair<Eigen::Vector3d, Eigen::Vector3d> transform_point(const rclcpp::Logger &logger, geometry_msgs::msg::Vector3 &point, int64_t left_frame_time, int64_t right_frame_time, int64_t lidar_frame_time, const std::pair<Eigen::Matrix<double, 3, 4>, Eigen::Matrix<double, 3, 4>> &projection_matrix_pair)
Transforms a 3D point in LiDAR space to 2D pixel coordinates in camera space.
- Parameters:
logger – ROS logger for error reporting
point – The 3D point from LiDAR
left_frame_time – Timestamp of the left camera frame in nanoseconds
right_frame_time – Timestamp of the right camera frame in nanoseconds
lidar_frame_time – Timestamp of the LiDAR frame in nanoseconds
projection_matrix_pair – Pair of projection matrices for the left and right cameras
- Returns:
std::pair<Eigen::Vector3d, Eigen::Vector3d> Pixel coordinates in both cameras
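The geometric core of the projection (ignoring the motion compensation implied by the frame-time parameters) is a 3x4 matrix applied to a homogeneous point; a sketch for one camera:

    #include <Eigen/Dense>

    // Project a LiDAR point through one camera's 3x4 projection matrix and
    // dehomogenize, yielding (u, v, 1) in pixel coordinates.
    Eigen::Vector3d project_point(const Eigen::Matrix<double, 3, 4> &P,
                                  const Eigen::Vector3d &lidar_point) {
        Eigen::Vector4d homogeneous(lidar_point.x(), lidar_point.y(),
                                    lidar_point.z(), 1.0);
        Eigen::Vector3d pixel = P * homogeneous;
        pixel /= pixel.z();
        return pixel;
    }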
-
std::pair<double, double> global_frame_to_local_frame(std::pair<double, double> global_frame_change, double yaw)
Takes a change in x and y in global frame and returns a change in x and y in local CMR frame.
- Parameters:
global_frame_change – The change in x and y in global frame
yaw – yaw in radians
- Returns:
std::pair<double, double> The change in x and y in the local CMR frame
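This is a rotation of the displacement by -yaw; a sketch (the sign convention for the local axes is an assumption):

    #include <cmath>
    #include <utility>

    std::pair<double, double>
    global_frame_to_local_frame(std::pair<double, double> global_frame_change,
                                double yaw) {
        const double c = std::cos(yaw), s = std::sin(yaw);
        return { c * global_frame_change.first + s * global_frame_change.second,
                -s * global_frame_change.first + c * global_frame_change.second };
    }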
-
State get_prev_state(const rclcpp::Logger &logger, uint64_t frame_time)
Get the velocity and yaw at the time of the frame.
- Parameters:
logger – Logger for logging messages
frame_time – Time of the frame in nanoseconds
- Returns:
State The velocity and yaw at the time of the frame
-
Update velocity deque.
- Parameters:
vel_msg – Velocity message
- Returns:
void
-
Update yaw deque.
- Parameters:
yaw_msg – Yaw message
- Returns:
void
-
struct ObsConeInfo
- #include <cone_history_node.hpp>
We run the motion model on the position of the car relative to the observer, rather than in a global frame with respect to the start. This keeps the current position the most accurate, at the cost of accuracy for older positions.
An ObsConeInfo must support updating an old cone with a new cone that data-associates to it, which is much easier when the global position is tracked with respect to the start position. It also tracks a counter for how many times the cone was seen with each color, and then votes on the most commonly seen color.
ObsConeInfo tracks the calculated global cone position: every time a cone is data-associated to it, the stored global position is updated with the more recent one. It therefore stores the global cone position normalized to the start pose, along with a lifespan for the current history.
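A hypothetical sketch of a layout matching this description (field and method names are illustrative, not the actual header):

    #include <array>
    #include <Eigen/Dense>

    struct ObsConeInfo {
        Eigen::Vector2d global_position;    // normalized to the start pose
        std::array<int, 3> color_counts{};  // ORANGE, YELLOW, BLUE observation counts
        int lifespan = 0;                   // lifespan of the current history

        // Data association: keep the most recent global position and
        // record the observed color.
        void update(const Eigen::Vector2d &pos, int color) {
            global_position = pos;
            if (color >= 0 && color < 3) ++color_counts[color];
            ++lifespan;
        }

        // Vote for the most commonly seen color; -1 (UNKNOWN) if never seen.
        int voted_color() const {
            int best = -1, best_count = 0;
            for (int c = 0; c < 3; ++c)
                if (color_counts[c] > best_count) { best_count = color_counts[c]; best = c; }
            return best;
        }
    };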
-
class ConeHistoryNode : public rclcpp::Node
- #include <cone_history_node.hpp>
-
class PointToPixelNode : public rclcpp::Node
- #include <point_to_pixel_node.hpp>
-
namespace recoloring
Functions
-
TrackBounds recolor_cones(TrackBounds track_bounds, double C)
Applies a support vector machine to correct misclassified cones. The algorithm takes in two sets of cones, calculates the best separation boundary between them, and then reclassifies the cones.
- Parameters:
track_bounds – set of yellow and set of blue Cone objects
C – SVM C parameter; controls the trade-off between correct classification and a maximum-margin separation boundary
- Returns:
TrackBounds object that represents the newly reclassified left and right sets of cones
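A sketch of the idea using OpenCV's SVM (the TrackBounds field names, the +1/-1 labeling, and the linear kernel are assumptions):

    #include <vector>
    #include <opencv2/ml.hpp>

    struct Cone { double x, y; };
    using Cones = std::vector<Cone>;
    struct TrackBounds { Cones yellow, blue; };  // hypothetical field names

    TrackBounds recolor_cones(TrackBounds tb, double C) {
        // Build training data: yellow = +1, blue = -1.
        std::vector<float> feats;
        std::vector<int> labels;
        for (const Cone &c : tb.yellow) { feats.push_back((float)c.x); feats.push_back((float)c.y); labels.push_back(1); }
        for (const Cone &c : tb.blue)   { feats.push_back((float)c.x); feats.push_back((float)c.y); labels.push_back(-1); }
        cv::Mat X((int)labels.size(), 2, CV_32F, feats.data());
        cv::Mat y(labels, true);

        // Fit a maximum-margin boundary; C trades margin width for errors.
        cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
        svm->setType(cv::ml::SVM::C_SVC);
        svm->setKernel(cv::ml::SVM::LINEAR);
        svm->setC(C);
        svm->train(X, cv::ml::ROW_SAMPLE, y);

        // Reclassify every cone by which side of the boundary it falls on.
        TrackBounds out;
        Cones all = tb.yellow;
        all.insert(all.end(), tb.blue.begin(), tb.blue.end());
        for (const Cone &c : all) {
            cv::Mat sample = (cv::Mat_<float>(1, 2) << (float)c.x, (float)c.y);
            const float pred = svm->predict(sample);
            (pred > 0 ? out.yellow : out.blue).push_back(c);
        }
        return out;
    }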