Workshop 6 TFs and Vision
- Update: `sudo apt-get update && sudo apt-get upgrade`
- Install today's packages:

  ```bash
  sudo apt-get install \
      ros-noetic-opencv-apps \
      ros-noetic-rqt-image-view \
      ros-noetic-uol-cmp9767m-base \
      ros-noetic-find-object-2d \
      ros-noetic-video-stream-opencv \
      ros-noetic-topic-tools \
      ros-noetic-rqt-tf-tree
  ```
- make sure to close all terminals and open them fresh after the update
- Display the tf tree of the Thorvald robot (e.g. with `rosrun rqt_tf_tree rqt_tf_tree`, installed above) and explain what a frame is (http://wiki.ros.org/tf might help, as might this scientific paper)
- find a way to display the position of the robot's Kinect camera (frame `thorvald_001/kinect2_rgb_optical_frame`) in global (`thorvald_001/odom`) coordinates. You may either
  - implement Python code, following the TransformListener example or the newer tf2 tutorials (a minimal sketch follows below), or
  - figure out how to use a command-line tool: `rosrun tf tf_echo`
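  A minimal Python sketch of the listener approach, for illustration only (it uses tf2, as in the newer tutorials, and the frame names given above):

  ```python
  #!/usr/bin/env python
  # Minimal sketch: print the pose of the Kinect optical frame in odom coordinates.
  import rospy
  import tf2_ros

  rospy.init_node('kinect_pose_listener')

  tf_buffer = tf2_ros.Buffer()
  tf_listener = tf2_ros.TransformListener(tf_buffer)

  rate = rospy.Rate(1.0)  # query once per second
  while not rospy.is_shutdown():
      try:
          # latest available transform from thorvald_001/odom to the camera's optical frame
          trans = tf_buffer.lookup_transform(
              'thorvald_001/odom',
              'thorvald_001/kinect2_rgb_optical_frame',
              rospy.Time(0))
          t = trans.transform.translation
          rospy.loginfo('camera at x=%.2f y=%.2f z=%.2f (in odom)', t.x, t.y, t.z)
      except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
              tf2_ros.ExtrapolationException):
          rospy.logwarn('transform not yet available')
      rate.sleep()
  ```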
- provide an image stream from a live camera on topic `/cam/image_raw`: `roslaunch video_stream_opencv camera.launch video_stream_provider:=/dev/video0 camera_name:=cam visualize:=true`
- provide an image stream from a video file on topic `/cam/image_raw`: `roslaunch video_stream_opencv camera.launch video_stream_provider:=video.mp4 camera_name:=cam visualize:=true`
- relay a topic (useful to link up two pipelines), e.g. `rosrun topic_tools relay /thorvald_001/kinect2_camera/hd/image_color_rect /cam/image_raw`
- view image streams: `rqt_image_view`
- measure the frequency of an image topic with `rostopic hz`, e.g. `rostopic hz /cam/image_raw`
- create a catkin package `my_opencv_test`, which should depend on `cv_bridge` and `rospy` (remember how to do that?)
- be inspired by the implementation of `opencv_bridge.py` and write a small piece of Python code that subscribes to the simulated Kinect of your Thorvald robot and, e.g., masks out any green stuff in the image (see the sketch below)
- (optional) also publish the result of the above operation
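  A minimal sketch of such a node, assuming the package depends on `rospy`, `cv_bridge` and `sensor_msgs`, and subscribing to the Kinect topic used in the relay example above; the node and output topic names are just placeholders, and the HSV bounds for "green" are rough values to tune:

  ```python
  #!/usr/bin/env python
  # Minimal sketch: subscribe to the simulated Kinect, keep only the green
  # pixels, and (optionally) republish the masked image.
  import rospy
  import cv2
  import numpy as np
  from cv_bridge import CvBridge
  from sensor_msgs.msg import Image


  class GreenMasker:
      def __init__(self):
          self.bridge = CvBridge()
          self.pub = rospy.Publisher('/green_mask', Image, queue_size=1)
          rospy.Subscriber('/thorvald_001/kinect2_camera/hd/image_color_rect',
                           Image, self.callback)

      def callback(self, msg):
          # convert the ROS image into an OpenCV BGR array
          img = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
          # threshold "green" in HSV space (bounds are rough, tune them)
          hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array([40, 50, 50]), np.array([80, 255, 255]))
          masked = cv2.bitwise_and(img, img, mask=mask)
          self.pub.publish(self.bridge.cv2_to_imgmsg(masked, encoding='bgr8'))


  if __name__ == '__main__':
      rospy.init_node('green_masker')
      GreenMasker()
      rospy.spin()
  ```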
- Look at http://wiki.ros.org/opencv_apps
- install with `sudo apt-get install ros-noetic-opencv-apps`
- e.g. run any of the following:
  - `rosrun opencv_apps simple_flow image:=/cam/image_raw`
  - `rosrun opencv_apps find_contours image:=/cam/image_raw`
- always view the output with `rqt_image_view`
- read http://introlab.github.io/find-object/ and http://wiki.ros.org/find_object_2d
- read about the feature detectors and descriptors it uses in the OpenCV documentation
- install with `sudo apt-get install ros-noetic-find-object-2d`
- run as `rosrun find_object_2d find_object_2d image:=/cam/image_raw`
- train objects and try different features
- First, read about YOLO, and even the original YOLO paper
- clone into your workspace: `git clone --recursive https://github.com/leggedrobotics/darknet_ros.git`
- install its dependencies: `rosdep update; rosdep install --from-paths . -i -y`
- build it: `catkin_make`
- edit `./darknet_ros/darknet_ros/config/ros.yaml` to use the correct image topic, e.g.:

  ```yaml
  camera_reading:
    topic: /cam/image_raw
    queue_size: 1
  ```
- source your workspace and run as `roslaunch darknet_ros darknet_ros.launch` (a sketch for reading the resulting detections follows below)
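To use the detections in your own node, here is a minimal sketch, assuming darknet_ros publishes `darknet_ros_msgs/BoundingBoxes` on its default `/darknet_ros/bounding_boxes` topic (check `ros.yaml` if it does not):

```python
#!/usr/bin/env python
# Minimal sketch: print every detection reported by darknet_ros.
import rospy
from darknet_ros_msgs.msg import BoundingBoxes


def callback(msg):
    for box in msg.bounding_boxes:
        rospy.loginfo('%s (%.2f): x=[%d, %d] y=[%d, %d]',
                      box.Class, box.probability,
                      box.xmin, box.xmax, box.ymin, box.ymax)


rospy.init_node('detection_listener')
rospy.Subscriber('/darknet_ros/bounding_boxes', BoundingBoxes, callback)
rospy.spin()
```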