@@ -19,6 +18,7 @@ Please take a look at the feature list below for full details on what the system
## News / Events
* **December 1, 2020** - Released improved memory management, active feature pointcloud publishing, limiting the number of features in the update to bound compute, and other small fixes. See v2.3 [PR#117](https://github.com/rpng/open_vins/pull/117) for details.
* **November 18, 2020** - Released the groundtruth generation utility package [vicon2gt](https://github.com/rpng/vicon2gt), which enables creation of groundtruth trajectories in a motion capture room for evaluating VIO methods.
* **July 7, 2020** - Released zero velocity update for vehicle applications and direct initialization when standing still. See [PR#79](https://github.com/rpng/open_vins/pull/79) for details.
* **May 18, 2020** - Released secondary pose graph example repository [ov_secondary](https://github.com/rpng/ov_secondary) based on [VINS-Fusion](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion). OpenVINS now publishes marginalized feature tracks, feature 3d positions, and the first camera intrinsics and extrinsics. See [PR#66](https://github.com/rpng/open_vins/pull/66) for details and discussion.
@section dev-profiling-compute Profiling Processing Time
One way to profile the processing time (besides inserting timing statements into the code) is to leverage a profiler such as [valgrind](https://www.valgrind.org/).
This tool allows for recording of the call stack of the system.
To use this with a ROS node, we can do the following (based on [this](http://wiki.ros.org/roslaunch/Tutorials/Roslaunch%20Nodes%20in%20Valgrind%20or%20GDB) guide):
- Append `launch-prefix="valgrind --tool=callgrind --callgrind-out-file=/tmp/callgrind.txt"` to your ROS node. This will cause the node to run with valgrind.
- Change the bag length to be only 10 or so seconds (since profiling is slow).
@code{.shell-session}
sudo apt install valgrind
roslaunch ov_msckf pgeneva_serial_eth.launch
@endcode
After running the program under the profiler, we will want to visualize the recorded profile.
There are some good tools for this; specifically, we use [gprof2dot](https://github.com/jrfonseca/gprof2dot) and [xdot.py](https://github.com/jrfonseca/xdot.py).
First we post-process the callgrind output into a graphviz dot graph, and then visualize it for inspection, as in the sketch below.
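For example, assuming the callgrind output was written to `/tmp/callgrind.txt` as configured above, a minimal post-processing sketch is the following (both tools can be installed through pip; xdot additionally needs GTK/PyGObject, and `dot` comes from the graphviz package):

@code{.shell-session}
pip3 install gprof2dot xdot
# convert the callgrind profile into a graphviz dot call graph
gprof2dot --format=callgrind --output=/tmp/callgrind.dot /tmp/callgrind.txt
# interactively inspect the call graph
xdot /tmp/callgrind.dot
# or render a static image instead
dot -Tpng /tmp/callgrind.dot -o /tmp/callgrind.png
@endcode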
docs/gs-installing.dox
@@ -6,39 +6,48 @@
@section gs-install-ros ROS Dependency
-Our codebase is built on top of the [Robot Operating System (ROS)](https://www.ros.org/) and has been tested building on Ubuntu 16.04 systems with ROS Kinetic Kame.
+Our codebase is built on top of the [Robot Operating System (ROS)](https://www.ros.org/) and has been tested building on Ubuntu 16.04, 18.04, and 20.04 systems with ROS Kinetic, Melodic, and Noetic.
We also recommend installing the [catkin_tools](https://github.com/catkin/catkin_tools) build system for easy ROS building.
-Please see the official instructions [here](http://wiki.ros.org/kinetic/Installation/Ubuntu), which have also been copied below.
+All ROS installs include [OpenCV](https://github.com/opencv/opencv), but if you need to build OpenCV from source, ensure you build the contributed modules as we use Aruco feature extraction.
+See the [opencv_contrib](https://github.com/opencv/opencv_contrib) readme on how to configure your cmake command when you build the core OpenCV library.
+We have tested building with OpenCV 3.2, 3.3, 3.4, 4.2, and 4.5.
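If you are unsure which OpenCV your system already provides, one quick way to check is sketched below (the pkg-config module name is an assumption that depends on your install, e.g. `opencv4` for 4.x and `opencv` for older versions):

@code{.shell-session}
# pkg-config module name depends on the installed OpenCV version
pkg-config --modversion opencv4
# or ask the Python bindings, if they are installed
python3 -c "import cv2; print(cv2.__version__)"
@endcode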
+Please see the official instructions to install ROS:
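As a rough sketch of the usual steps (assuming Ubuntu 20.04 with ROS Noetic; please follow the official guide for the distribution that matches your system):

@code{.shell-session}
# add the ROS apt repository and key, then install ROS Noetic (example distribution)
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
sudo apt update
sudo apt install ros-noetic-desktop-full
# catkin_tools for `catkin build` (package is python-catkin-tools on older distributions)
sudo apt install python3-catkin-tools
@endcode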
We do support ROS-free builds, but don't recommend using this interface as we have limited support for it.
You will need to ensure you have installed OpenCV and Eigen3, which are the only dependencies.
If ROS is not found on the system, one can use command line options to run the simulation without any visualization.
If you are using the ROS-free interface, you will need to properly construct the @ref ov_msckf::VioManagerOptions struct and feed inertial and image data into the correct functions.
-We leverage [OpenCV 3.4.6](https://opencv.org/) for this project and recommend you compile it from source.
-You can try linking to the one included with ROS but we have always had success with building from source to ensure we have all contributed OpenCV libraries.
-One should make sure you can see some of the "contrib" (e.g. xfeature2d) to ensure you have linked to the contrib modules.
+We leverage [OpenCV](https://opencv.org/) for this project, and you can typically use the install that comes with ROS.
+If that does not work (or you are using a non-ROS build), then you can try building OpenCV from source, ensuring you include the contrib modules.
+One should make sure you can see some of the "contrib" modules (e.g. aruco) in the cmake output to confirm you have linked against the contrib modules.
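A minimal sketch of such a source build (assuming OpenCV and opencv_contrib are cloned side by side and checked out to matching versions):

@code{.shell-session}
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
mkdir -p opencv/build && cd opencv/build
# point cmake at the contrib modules; the configure summary should list aruco, xfeatures2d, etc.
cmake -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules ..
make -j4
sudo make install
@endcode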
@@ -111,6 +112,7 @@ Please take a look at the other launch files or the ov_msckf::VioManager constru
| Group | Description |
|---|---|
| bag topics | ROS topics that we will parse the IMU and camera data from. If we are only using one camera, i.e. monocular, then only the first camera topic is used. |
| stereo pairs | An even-length list of camera ids on which stereo tracking will be attempted. For example, 0,1,2,3 means that stereo tracking is performed on the 0,1 and 2,3 camera pairs. |
| bag params. | Location of the bag we will read, along with the start time, in seconds, and the duration we want to run. |
| world/filter params. | This has most of the core parameters that can be tuned to improve filter performance, including the sliding window size, representation, gravity, and number of environmental SLAM features. One can also change the number of cameras from 1 to 2 to do stereo matching and update. |
| tracker/extractor params. | For our visual front-end tracker we have a few key parameters that we can tune, the most important being the number of features extracted. |
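As an illustration of how these groups are typically exercised, one might override a few of them when launching (the argument names below, e.g. `max_cameras`, `bag`, `bag_start`, and `bag_durr`, are illustrative assumptions; check the launch file you are using for the arguments it actually exposes):

@code{.shell-session}
# illustrative only: argument names depend on the specific launch file
roslaunch ov_msckf pgeneva_serial_eth.launch max_cameras:=2 bag:=/path/to/dataset.bag bag_start:=0 bag_durr:=60
@endcode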