At the moment, to generate the CONCERT robot URDF, we use the modular package. As can be seen from modular.launch, it accepts arguments that control whether or not to add cameras, Velodyne lidars, etc.
In this launch file we use it to generate two URDF files, let's call them `urdf_gz` and `urdf_xbot`:
```xml
<!-- Load the URDF/SRDF into the ROS Parameter Server -->
<param name="robot_description_gz"
       command="python3 $(arg modular_description) -o urdf
                -a gazebo_urdf:=true
                   realsense:=$(arg realsense)
                   velodyne:=$(arg velodyne)
                -r modularbot_gz"/>
<param name="robot_description_xbot"
       command="python3 $(arg modular_description) -o urdf
                -a gazebo_urdf:=false
                   realsense:=$(arg realsense)
                   velodyne:=$(arg velodyne)
                -r modularbot"/>
```
Basically, the possible arguments are:
- `gazebo_urdf`: allows generating either `urdf_gz` or `urdf_xbot` by controlling the inclusion of:
  - the floating joint needed by XBot but not by Gazebo;
  - all the Gazebo tags, which are not included in the URDF used by XBot. This is mainly to have a "cleaner" URDF for XBot (and other libraries that need just kinematic/dynamic parameters), without all the Gazebo tags carrying simulation parameters and plugins, which are needed only by Gazebo.
- `velodyne`: controls the inclusion of the Velodyne lidars (`true` to simulate them).
- `realsense`: controls the inclusion of the RealSense cameras (`true` to simulate them).
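As a rough illustration of the gating above, the generation logic could be sketched in xacro as follows. This is only a sketch: the argument name `gazebo_urdf` comes from the launch file, but the link/joint/plugin names here are hypothetical and not what modular actually emits.

```xml
<!-- Hypothetical sketch of how gazebo_urdf could gate content -->
<xacro:arg name="gazebo_urdf" default="false"/>
<xacro:property name="gazebo_urdf" value="$(arg gazebo_urdf)"/>

<!-- Floating joint: needed by XBot, omitted from the Gazebo URDF -->
<xacro:unless value="${gazebo_urdf}">
  <joint name="floating_base_joint" type="floating">
    <parent link="world"/>
    <child link="base_link"/>
  </joint>
</xacro:unless>

<!-- Simulation-only tags and plugins: included only in urdf_gz -->
<xacro:if value="${gazebo_urdf}">
  <gazebo>
    <plugin name="example_plugin" filename="libexample_plugin.so"/>
  </gazebo>
</xacro:if>
```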
In particular, for the cameras I think it is a bit tricky to determine how to include them, depending on whether we are simulating them or not. For example, see the discussion about this for the Centauro platform: #26 and #29.
In general, I think there are two ways (feel free to propose a third):
- Let XBot publish all the tfs when simulating the cameras (`realsense:=true`). This means the camera tf tree will be part of both `urdf_gz` and `urdf_xbot`.
  When `realsense:=false`, only the root frames (`*_bottom_screw_frame` for the D camera and `*_pose_frame` for the T camera) are included in the model, so that the realsense node (or an equivalent one) will take care of publishing the rest of the tf tree.
  This is a similar approach to the one used for Centauro, and it is how it is currently implemented (last commit: a7bd2ce).
  Possible drawbacks of this approach are:
  - the URDF used in simulation is different from the one used on the real robot (according to @alaurenzi this has caused some issues in the past);
  - the gazebo tag and plugin included by the realsense xacro will end up in `urdf_xbot` anyway (unless we modify `realsense_gazebo_description`).
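If we did decide to modify `realsense_gazebo_description`, the change could amount to wrapping its simulation-only tags in a conditional, roughly like the sketch below. The parameter name `gazebo_urdf` and the sensor structure shown are assumptions for illustration, not the package's actual xacro interface.

```xml
<!-- Hypothetical sketch: gate the camera's simulation-only tags so they
     are dropped from urdf_xbot -->
<xacro:if value="${gazebo_urdf}">
  <gazebo reference="${camera_name}_link">
    <sensor name="${camera_name}_depth" type="depth">
      <!-- simulation parameters and plugin would go here -->
    </sensor>
  </gazebo>
</xacro:if>
```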
- Publish all camera tfs with external robot_state_publishers when simulating the cameras (`realsense:=true`). This means the camera tf tree will be part only of `urdf_gz`.
  The `urdf_xbot` will contain only the root frames (`*_bottom_screw_frame` and `*_pose_frame`) both for `realsense:=true` and `realsense:=false`, so it will be the same URDF in simulation and on the real robot.
  On the real robot the tfs will be published by the realsense node (or an equivalent one), while in simulation they will need to be published by external robot_state_publishers, since XBot will not be publishing them anymore.
  The only drawback here is that we will need four separate "robot_description"s (one for each camera) and four separate robot_state_publishers, making the launch file a bit more complex. But the other two drawbacks of option 1 should be solved.
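For the second option, the extra launch file machinery could look roughly like the fragment below, repeated once per camera. The namespace, camera name, and xacro path are made up for illustration; the actual per-camera descriptions would come from whatever xacro the realsense packages provide.

```xml
<!-- Hypothetical sketch: one robot_description and one robot_state_publisher
     per simulated camera (repeat for the other three cameras) -->
<group ns="front_d_camera">
  <param name="robot_description"
         command="$(find xacro)/xacro $(find realsense_gazebo_description)/urdf/d435.urdf.xacro
                  camera_name:=front_d_camera"/>
  <node name="robot_state_publisher" pkg="robot_state_publisher"
        type="robot_state_publisher"/>
</group>
```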
What do you think @alaurenzi @liesrock @aled96 @torydebra?
Considering your past experience with the cameras, which option do you think is best? Or do you have other suggestions?