For C++, the EdgeDetector.hpp and EdgeDetector.cpp files should be implemented. Any required libraries should be added to CMakeLists.txt. For Python, edge_detector.py should be completed with installation instructions for dependencies.
It should be possible to provide different images as input and display the output with edges detected in green. Additionally, a README.md file should include installation instructions, implementation steps, concepts used, and potential improvements.
Edge detection identifies rapid changes in pixel intensity. The implemented solutions are:
- Laplacian Edge Detector: Identifies edges as zero-crossing points in the second derivative (2D Laplacian) of the image function.
- Canny Edge Detector: Computes gradients at each pixel with a Sobel operator, then suppresses responses that are not maximal along the gradient direction (equivalent to finding zero crossings of the second directional derivative). Strong edges are kept via hysteresis thresholding.
Implementation Details:
- The image is converted from RGB to grayscale to simplify computation.
- A Gaussian blur is applied to reduce noise, preventing false zero crossings.
- The function `edge_detect` in `edge_detector.py` implements these methods:
  - `0`: Canny Edge Detector (default)
  - `1`: Laplacian Edge Detector
- A commented-out section overlays results from both detectors for comparison.
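To make the zero-crossing idea concrete, here is a minimal NumPy sketch (an illustration only, not the repository's code): a step edge produces a sign change in the Laplacian response, while flat regions give zero.

```python
import numpy as np

# 3x3 discrete Laplacian kernel
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=float)

def laplacian_response(img):
    """Convolve a grayscale image with the Laplacian kernel (valid region only)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN_KERNEL)
    return out

# A vertical step edge: intensity 0 on the left, 10 on the right
step = np.zeros((5, 6))
step[:, 3:] = 10.0
resp = laplacian_response(step)
# The response flips sign across the edge (a zero crossing) and is
# zero in the flat regions -- exactly what the detector looks for.
```

This is also why the Gaussian blur step above matters: without smoothing, pixel noise creates spurious sign changes that get classified as edges.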
Provide ROS .srv and .msg files to create a ROS service for edge detection. Implement a client that detects edges in images from a directory.
- A ROS service named `Image_edge_detect_service` accepts `input_img_path` (string) and returns `output_img_path` (string), storing the edge-detected image.
- To use:
  - Run `../src/edge_detect_img_node.py` to start the service.
  - Run the client node to request edge detection for each image in a directory.
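The service interface described above could be laid out in a `.srv` file like this (a sketch with a hypothetical file name; the fields match the service description):

```
# EdgeDetect.srv -- hypothetical file name
string input_img_path
---
string output_img_path
```

The `---` separator divides the request fields from the response fields, as required by the ROS `.srv` format.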
Additional Task: Detect edges in images from a ROS topic when playing a .bag file, then visualize input and output images in RViz.
Result:
- A ROS node `image_raw_listener` subscribes to `/camera/color/image_raw` to receive `sensor_msgs/Image` messages, runs edge detection, and republishes the result to `edge_detected_image`.
- To perform this:
  - Play a provided `.bag` file: `rosbag play --clock -l <path_to_bagfile>`
  - Run `../src/edge_detect_bag_data_node.py`
  - Add RViz displays for `/camera/color/image_raw` and `edge_detected_image`
Additional Task: Convert detected edge pixels from (u, v) to 3D (x, y, z) using depth images and camera parameters. Publish 3D data to a ROS topic (edge_points) of type sensor_msgs/PointCloud.
Result:
- Using intrinsic camera parameters (`/camera/color/camera_info`) and depth data (`/camera/depth/image_rect_raw`), the 3D coordinates are computed as:
  - `x = depth * (u - cx) / fx`
  - `y = depth * (v - cy) / fy`
  - `z = depth`
- The `edge_points` topic publishes `sensor_msgs/PointCloud` messages.
- To perform this:
  - Play a `.bag` file: `rosbag play --clock -l <path_to_bagfile>`
  - Run `../src/edge_detect_bag_data_node.py`
  - Start publishing `edge_points`: `../src/edge_pointcloud_pub_node.py`
  - Verify output: `rostopic echo /edge_points`
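The pinhole back-projection above can be sketched in plain Python (an illustration with made-up intrinsics; in the actual node these values come from `/camera/color/camera_info`):

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth into the camera frame."""
    x = depth * (u - cx) / fx
    y = depth * (v - cy) / fy
    z = depth
    return x, y, z

# Example: a pixel at the principal point (cx, cy) lies on the optical
# axis, so x and y are zero and only the depth remains.
point = pixel_to_3d(u=320, v=240, depth=2.0,
                    fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```

Note that `depth` here is the z-distance along the optical axis (as in `image_rect_raw`), not the Euclidean range to the point.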
Additional Task: Visualize 3D edge points in RViz as markers alongside a robot model.
Result:
- Markers are published in `camera_color_optical_frame`, since the `/camera/...` topics use this frame.
- The `edge_3D_marker` topic publishes `visualization_msgs/Marker` messages.
- To view markers:
  - Play a `.bag` file: `rosbag play --clock -l <path_to_bagfile>`
  - Run `../src/edge_detect_bag_data_node.py`
  - Start point cloud publishing: `../src/edge_pointcloud_pub_node.py`
  - Open RViz and add a `Marker` display for `/edge_3D_marker`
Transforming 3D Data to Robot Frames:
- `TransformListener.transformPoint()` converts 3D coordinates from `camera_color_optical_frame` to robot frames using transformations from `/tf` and `/tf_static`.
- The transformation logic is implemented but commented out in `../src/edge_pointcloud_pub_node.py` due to testing limitations.
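Conceptually, `transformPoint()` applies the rigid-body transform (rotation `R`, translation `t`) that tf looks up between the two frames. A minimal NumPy sketch with hypothetical values (not a replacement for tf, which also handles time-stamped lookups):

```python
import numpy as np

def transform_point(R, t, p):
    """Apply a rigid-body transform: p_target = R @ p_source + t."""
    return R @ p + t

# Hypothetical mounting: camera 0.5 m above the robot base, with the
# standard optical-frame convention -- camera z (forward) maps to robot x,
# camera x (right) to robot -y, camera y (down) to robot -z.
R = np.array([[0.0,  0.0, 1.0],
              [-1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0]])
t = np.array([0.0, 0.0, 0.5])

p_cam = np.array([0.0, 0.0, 2.0])   # a point 2 m ahead of the camera
p_base = transform_point(R, t, p_cam)
```

The rotation above is the usual optical-frame-to-body-frame convention in ROS, which is why points published in `camera_color_optical_frame` appear rotated until a transform like this is applied.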
Potential Improvements:
- Refactoring the Python scripts into classes for better modularity.
- Translating the `rospy` implementations into `roscpp` for efficiency.
- Enhancing noise filtering to improve edge detection accuracy.
This repository provides robust edge detection solutions with ROS integration, visualization in RViz, and 3D coordinate transformation for robotic applications.