Utilities for **HuNav**Sim, **Nav**2, and **vis**ualization
Nodes:

- `people_visualizer`: Visualizes detected humans as markers in `rviz`
- `robot_pose_publisher`: Publishes the robot pose
- `tf_keyboard_publisher`: Publishes a transform that is adjustable using the keyboard
Launch files:

- `hudet.launch.py`: Starts human detection using a Zed2 camera
- `map_server.launch.py`: Starts the map server without other Nav2 functionalities
- `mars.launch.py`: Starts a HuNavSim simulation in Gazebo
- `tb3_custom_sim.launch.py`: Starts Nav2 with basic robot navigation functionality
(TODO: Setups involving multiple cameras, which would mostly require repeating the steps below, each time specifying a different camera name)
- Set up Zed2 camera(s) in the environment, with camera launch arguments defined in `<zed_launch_args_file>` (example) and ZED node and tf publisher parameters defined in `<zed_and_tf_params_file>` (example).
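As a rough illustration, a `<zed_launch_args_file>` might map ZED wrapper launch arguments to values. The keys below are assumptions based on common ZED ROS 2 wrapper launch arguments, not this package's actual schema; see the linked example file for the real format.

  ```yaml
  # Hypothetical contents of <zed_launch_args_file>; key names are assumptions.
  camera_model: zed2      # camera model passed through to the ZED launch file
  camera_name: zed_front  # namespace/prefix for the camera's topics and frames
  ```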
- Create a map of the environment (for example, with a SLAM package plus Nav2's `map_saver_cli`) and save it to `<map_path>`.
- Launch the map server, and optionally note the approximate positions of the Zed2 cameras:

  ```shell
  ros2 launch hunavis map_server.launch.py use_simulator:=False map_path:=<map_path>
  ```

  - `empty_room.yaml` is an example of `<map_path>`
  - Tip: Use the `2D Pose Estimate` feature in `rviz` to set the camera pose on the map
- Launch human detection:

  ```shell
  ros2 launch hunavis hudet.launch.py use_simulator:=False zed_launch_args_file:=<zed_launch_args_file>
  ```

  - `zed_launch_args.yaml` is an example of `<zed_launch_args_file>`
  - If this is the first time deep learning models are run on the camera, the ZED SDK will begin to optimize them. Optionally, follow the instructions here to optimize the models manually. For example, the following optimizes all the models that come with the camera:

    ```shell
    ZED_Diagnostic -aio
    ```
- Run the tf publisher node to adjust the camera pose with respect to the map:

  ```shell
  ros2 run hunavis tf_keyboard_publisher --ros-args --params-file <zed_and_tf_params_file>
  ```
  - Optionally, `<zed_and_tf_params_file>` can be updated with the fine-tuned tf
- Optionally,
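For reference, a `<zed_and_tf_params_file>` could hold the map-to-camera transform that `tf_keyboard_publisher` starts from, in the standard ROS 2 parameter-file layout. The parameter names below are illustrative assumptions, not the node's actual parameters; check the linked example file for the real schema.

  ```yaml
  # Hypothetical contents of <zed_and_tf_params_file>; parameter names are assumptions.
  tf_keyboard_publisher:
    ros__parameters:
      parent_frame: map             # frame the transform is published from
      child_frame: zed_front_link   # camera frame being positioned
      translation: [1.0, 0.5, 1.2]  # x, y, z in meters (fine-tune via keyboard)
      rotation: [0.0, 0.0, 0.0]     # roll, pitch, yaw in radians
  ```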