
MineInsight: A Multi-sensor Dataset for Humanitarian Demining Robotics in Off-Road Environments

[For Reviewers] Demonstration Video

---

*(MineInsight demonstration video thumbnail)*

---

*(Logos: RMA, RAS, 4DPL, KU Leuven, ACRO)*

Mario Malizia¹, Charles Hamesse¹, Ken Hasselmann¹

Geert De Cubber¹, Nikolaos Tsiogkas², Eric Demeester², Rob Haelterman¹

¹ Royal Military Academy of Belgium, ² KU Leuven

📜 Paper | 📂 GitHub


---------

This work is under review.

---------

Repository Index

[1] Motivation · [2] Experimental Setup · [3] Environments and Sequences · [4] Targets · [5] Calibration · [6] Data · [7] Target Location Annotations · [8] Acknowledgments · [9] Citation · [10] License · [11] Related Work


[1] Motivation

Landmines remain a persistent threat in conflict-affected regions, posing risks to civilians and impeding post-war recovery. Traditional demining methods are often slow, hazardous, and costly, necessitating the development of robotic solutions for safer and more efficient landmine detection.

MineInsight is a publicly available multi-spectral dataset designed to support advances in robotic demining and off-road navigation. It features a diverse collection of sensor data: visible-range images (RGB and monochrome), visible and short-wave infrared (VIS-SWIR), long-wave infrared (LWIR), and LiDAR scans. The dataset includes dual-view scans from both a UGV and its robotic arm, providing multiple viewpoints that mitigate occlusions and improve detection accuracy.

With over 38,000 RGB frames, 53,000 VIS-SWIR frames, and 108,000 LWIR frames recorded in both daylight and nighttime conditions, covering 35 different targets distributed along three tracks, MineInsight serves as a benchmark for developing and evaluating detection algorithms. It also provides estimated target locations, supporting algorithm validation and performance benchmarking.

MineInsight follows best practices from established robotic datasets and provides a valuable resource for the community to advance research in landmine detection, off-road navigation, and sensor fusion.


*(Figure: dataset presentation overview)*

[2] Experimental Setup

This section follows the terminology and conventions outlined in the accompanying paper.
For a more detailed understanding of the methodology and experimental design, please refer to the paper.

Sensors Overview

*(Figure: experimental setup)*

| Platform and Robotic Arm | Platform Sensor Suite | Robotic Arm Sensor Suite |
| --- | --- | --- |
| Clearpath Husky A200 UGV | Livox Mid-360 LiDAR | Teledyne FLIR Boson 640 (LWIR) |
| Universal Robots UR5e Robotic Arm | Sevensense Core Research Module | Alvium 1800 U-130 VSWIR (VIS-SWIR) |
| | Microstrain 3DM-GV7-AR IMU | Alvium 1800 U-240 (RGB) |
| | | Livox AVIA LiDAR |

Sensor Coordinate Systems

The coordinate systems (and their TF names) of all sensors on our platform are illustrated in the figure below.

Note: The positions of the axis systems in the figure are approximate.
This visualization provides insight into the relative orientations between sensors,
whether in the robotic arm sensor suite or the platform sensor suite.

For the full transformation chain, refer to the following ROS 2 topics in the dataset:

  • /tf_static → contains static transformations between sensors.
  • /tf → contains dynamic transformations recorded during operation.

*(Figure: sensor TF frames)*
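To inspect the transform chain programmatically, here is a minimal sketch that reads the first /tf_static message from a downloaded bag. It assumes a sourced ROS 2 environment with rosbag2_py available; the bag path and storage_id are placeholders for whichever sequence you downloaded.

```python
# Minimal sketch: list the static sensor frames in a MineInsight bag.
# The bag path and storage_id below are placeholders, not fixed dataset values.
from rclpy.serialization import deserialize_message
from rosbag2_py import ConverterOptions, SequentialReader, StorageOptions
from tf2_msgs.msg import TFMessage

reader = SequentialReader()
reader.open(StorageOptions(uri="track_1_s1", storage_id="sqlite3"),
            ConverterOptions(input_serialization_format="cdr",
                             output_serialization_format="cdr"))

while reader.has_next():
    topic, data, _stamp = reader.read_next()
    if topic == "/tf_static":
        for tf in deserialize_message(data, TFMessage).transforms:
            print(f"{tf.header.frame_id} -> {tf.child_frame_id}")
        break  # static transforms are typically published once, latched
```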

[3] Environments and Sequences

The dataset was collected across three distinct tracks, each designed to represent a demining scenario with varying terrain and environmental conditions. The tracks contain a diverse set of targets, positioned to challenge detection algorithms. The figure below shows a top-view point cloud of each track with the distribution of targets along it.

*(Figure: top-view point clouds of the three tracks with target locations)*

[4] Targets

For each track, a detailed inventory PDF is available, providing the full list of targets along with their respective details.
You can find them in the tracks_inventory folder of this repository:

📄 Track 1 Inventory | 📄 Track 2 Inventory | 📄 Track 3 Inventory

Each PDF catalogs every target with:

  • ID: Unique identifier for each target;
  • Name: Official name of the target;
  • Image: A visual reference of the object for recognition;
  • CAT-UXO link: Detailed explanation of the target (available only for landmines).

[5] Calibration

The dataset includes intrinsic and extrinsic calibration files for all cameras and LiDARs.

Intrinsic Calibration

intrinsics_calibration/

  • lwir_camera_intrinsics.yaml → LWIR camera
  • rgb_camera_intrinsics.yaml → RGB camera
  • sevensense_cameras_intrinsics.yaml → Sevensense grayscale cameras
  • swir_camera_intrinsics.yaml → VIS-SWIR camera

Extrinsic Calibration

extrinsics_calibration/

  • lwir_avia_extrinsics.yaml → LWIR ↔ Livox AVIA
  • rgb_avia_extrinsics.yaml → RGB ↔ Livox AVIA
  • sevensense_mid360_extrinsics.yaml → Sevensense ↔ Livox Mid-360
  • swir_avia_extrinsics.yaml → VIS-SWIR ↔ Livox AVIA

Note:
The extrinsics calibration files also embed intrinsic parameters, since the extrinsic calibration was performed on raw (unrectified) camera images.
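As a quick illustration, the sketch below loads one of the intrinsics files with PyYAML and assembles the camera matrix. The key names used here (camera_matrix, distortion_coefficients) are assumptions for illustration only; check the actual field names in the dataset's YAML files.

```python
# Minimal sketch: load a camera intrinsics YAML and build K and D.
# The key names below are illustrative assumptions, not the dataset's schema.
import numpy as np
import yaml

with open("intrinsics_calibration/rgb_camera_intrinsics.yaml") as f:
    calib = yaml.safe_load(f)

K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)  # 3x3 camera matrix
D = np.array(calib["distortion_coefficients"]["data"])      # distortion coeffs
print(K)
print(D)
```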

[6] Data

We release 2 sequences per track, resulting in a total of 6 sequences.
The data is available in three different formats:

  • 🗄 ROS 2 Bags
  • 🗄 ROS 2 Bags with Livox Custom Msg
  • 🖼 Raw Images

ROS 2 Bags Structure

Each ROS 2 bag includes the following topics:
| Topic | Message Type | Description |
| --- | --- | --- |
| /allied_swir/image_raw/compressed | sensor_msgs/msg/CompressedImage | SWIR camera raw image |
| /allied_swir/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | SWIR camera rectified image |
| /alphasense/cam_0/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 0 raw image |
| /alphasense/cam_0/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 0 rectified image |
| /alphasense/cam_1/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 1 raw image |
| /alphasense/cam_1/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 1 rectified image |
| /alphasense/cam_2/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 2 raw image |
| /alphasense/cam_2/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 2 rectified image |
| /alphasense/cam_3/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 3 raw image |
| /alphasense/cam_3/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 3 rectified image |
| /alphasense/cam_4/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 4 raw image |
| /alphasense/cam_4/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core grayscale camera 4 rectified image |
| /alphasense/imu | sensor_msgs/msg/Imu | IMU data from Sevensense Core |
| /avia/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox AVIA LiDAR |
| /avia/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox AVIA LiDAR |
| /flir/thermal/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image |
| /flir/thermal/rectified/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image |
| /flir/thermal/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image with colorized overlay |
| /flir/thermal/rectified/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image with colorized overlay |
| /microstrain/imu | sensor_msgs/msg/Imu | IMU data from Microstrain (internal) |
| /mid360/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox Mid-360 LiDAR |
| /mid360/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox Mid-360 LiDAR |
| /odometry/filtered | nav_msgs/msg/Odometry | Filtered odometry data (ROS 2 localization fusion output) |
| /odometry/wheel | nav_msgs/msg/Odometry | Wheel odometry data from the UGV wheel encoders |
| /tf | tf2_msgs/msg/TFMessage | Real-time transformations between coordinate frames |
| /tf_static | tf2_msgs/msg/TFMessage | Static transformations |

If you are downloading a ROS 2 Bag with Livox Custom Msg, you will find the following additional topics:

| Topic | Message Type | Description |
| --- | --- | --- |
| /avia/livox/lidar | livox_interfaces/msg/CustomMsg | Raw point cloud data from Livox AVIA LiDAR in custom Livox format |
| /mid360/livox/lidar | livox_ros_driver2/msg/CustomMsg | Raw point cloud data from Livox Mid-360 LiDAR in custom Livox format |

Note: these messages include a timestamp for each point in the point cloud scan.
To correctly decode and use them, install the official Livox ROS drivers that define the custom message types (livox_ros_driver2 for the Mid-360, and the ROS 2 Livox driver providing livox_interfaces for the AVIA).

For installation instructions, refer to the documentation in the respective repositories.
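Once the driver messages are available, recovering absolute per-point timestamps is straightforward. The sketch below assumes the livox_ros_driver2 CustomMsg layout (a scan-level timebase in nanoseconds plus a per-point offset_time in nanoseconds); verify against the driver version you install.

```python
# Minimal sketch: absolute per-point timestamps from a Livox CustomMsg.
# Assumes livox_ros_driver2 is built and sourced; timebase and offset_time
# are both in nanoseconds per the driver's message definition.
from livox_ros_driver2.msg import CustomMsg

def per_point_stamps_sec(msg):
    """Return one absolute timestamp (seconds) per point in the scan."""
    return [(msg.timebase + p.offset_time) * 1e-9 for p in msg.points]
```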

ROS 2 Bags Downloads

You can download the datasets from the links below:

Track 1

🔹 Sequence 1:

🔹 Sequence 2:

Track 2

🔹 Sequence 1:

🔹 Sequence 2:

Track 3

🔹 Sequence 1:

🔹 Sequence 2:

Raw Images

You can download each folder containing the images from the links below:

| Track / Seq | RGB | VIS-SWIR | LWIR |
| --- | --- | --- | --- |
| Track 1 - Seq 1 | track_1_s1_rgb [3.8 GB] | track_1_s1_swir [1.2 GB] | track_1_s1_lwir [671.0 MB] |
| Track 1 - Seq 2 | track_1_s2_rgb [12.7 GB] | track_1_s2_swir [4.2 GB] | track_1_s2_lwir [3.1 GB] |
| Track 2 - Seq 1 | track_2_s1_rgb [2.8 GB] | track_2_s1_swir [872.5 MB] | track_2_s1_lwir [521.3 MB] |
| Track 2 - Seq 2 | track_2_s2_rgb [15.8 GB] | track_2_s2_swir [2.9 GB] | track_2_s2_lwir [2.3 GB] |
| Track 3 - Seq 1 | ❌ | track_3_s1_swir [630.3 MB] | track_3_s1_lwir [568.3 MB] |
| Track 3 - Seq 2 | ❌ | track_3_s2_swir [2.6 GB] | track_3_s2_lwir [2.0 GB] |

Each folder (.zip) follows the naming convention:

track_(nt)_s(ns)_camera.zip

Where:

  • (nt) → track number (1, 2, 3)
  • (ns) → sequence number (1, 2)
  • camera → image type (rgb, swir, or lwir)
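For scripted downloads or bookkeeping, the convention is easy to parse with a regular expression, as in this small sketch:

```python
# Minimal sketch: parse the archive naming convention above.
import re

name = "track_1_s2_swir.zip"  # example name following the convention
m = re.fullmatch(r"track_(\d+)_s(\d+)_(rgb|swir|lwir)\.zip", name)
track, seq, camera = int(m.group(1)), int(m.group(2)), m.group(3)
print(track, seq, camera)  # -> 1 2 swir
```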

[7] Target Location Annotations

Target locations are estimated for each sequence of each track (refer to the paper for the estimation procedure). Depending on the data format you are using, the target locations are provided as follows:

Target Location Using Raw Images:

The target locations are already included in the raw image folders downloaded in the previous section.

Each folder contains:

  • Images → stored in .jpg format
  • Annotations → corresponding .txt files

The generic naming convention for each jpg/txt is:

track_(nt)_s(ns)_camera_timestampsec_timestampnanosec (.jpg / .txt)

The .txt files use the YOLOv5 / YOLOv8 annotation format for the target positions:

<class_id> <x_center> <y_center> <width> <height>

Each ID corresponds to an object, and the full ID description can be found in the YAML file:
targets_list.yaml
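To visualize an annotation, the normalized YOLO boxes can be converted to pixel coordinates and drawn with OpenCV. The file stem below is a hypothetical example that follows the naming convention above:

```python
# Minimal sketch: draw YOLO-format boxes from a .txt onto its .jpg.
# The file stem is a hypothetical example, not an actual dataset file.
import cv2

stem = "track_1_s1_rgb_1687000000_123456789"
img = cv2.imread(stem + ".jpg")
h, w = img.shape[:2]

with open(stem + ".txt") as f:
    for line in f:
        cls, xc, yc, bw, bh = line.split()
        xc, yc, bw, bh = map(float, (xc, yc, bw, bh))
        # YOLO stores normalized center/size; convert to pixel corners.
        x0, y0 = int((xc - bw / 2) * w), int((yc - bh / 2) * h)
        x1, y1 = int((xc + bw / 2) * w), int((yc + bh / 2) * h)
        cv2.rectangle(img, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2.putText(img, cls, (x0, max(y0 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

cv2.imwrite(stem + "_annotated.jpg", img)
```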

Target Location Using ROS 2 Bags:

The code inside target_location_ros2_bags allows you to localize targets in images by reprojecting 3D point cloud data onto image frames. It supports RGB, VIS-SWIR, and LWIR cameras, automatically handling bounding boxes, timestamps, and target labeling.
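Conceptually, this reprojection follows the standard pinhole camera model. The sketch below illustrates the idea with OpenCV; all calibration values are placeholders rather than the dataset's, and it is not the repository script's exact implementation:

```python
# Minimal sketch of LiDAR-to-image reprojection with the pinhole model.
# K, D, R, t are placeholders; real values come from the calibration files.
import cv2
import numpy as np

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])    # intrinsics (placeholder)
D = np.zeros(5)                    # distortion coefficients (placeholder)
R, t = np.eye(3), np.zeros(3)      # LiDAR-to-camera extrinsics (placeholder)

pts = np.random.rand(100, 3) + [0.0, 0.0, 2.0]  # dummy points in front of camera
rvec, _ = cv2.Rodrigues(R)
uv, _ = cv2.projectPoints(pts, rvec, t, K, D)
print(uv.reshape(-1, 2)[:5])       # pixel coordinates of the first five points
```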

Folder Structure:

📂 target_location_ros2_bags/
├── 📂 param/                       # Configuration YAML files
├── 📂 target_locations_csv/        # CSV files with target locations
├── 📂 tracks_targets_list/         # YAML mapping target IDs to labels
└── ros2_bag_targets_display.py     # Main Python script

Note: For simplicity, the params.yaml file repeats the extrinsics and part of the intrinsics of each camera, avoiding dependencies on other configuration files higher in the repository hierarchy.

How to run:

  1. Set up the environment (ensure ROS 2 and the Python dependencies are installed):

     source /opt/ros/$DISTRO/setup.bash
     pip install numpy pandas opencv-python pyyaml scipy

  2. Modify the configuration: check param/params.yaml for the bag path, CSV file, and camera topic you want to process.

  3. Run the script to process the bag and display the results:

     python3 ros2_bag_targets_display.py

[8] Acknowledgments

The authors thank Alessandra Miuccio and Timothée Fréville for their support in the hardware and software design.
They also thank Sanne Van Hees and Jorick Van Kwikenborne for their assistance in organizing the measurement campaign.

[9] Citation

If you use MineInsight in your own work, please cite the accompanying paper (provisional arXiv reference):

@article{mineinsight,
  author       = {Mario Malizia and Charles Hamesse and Ken Hasselmann and
                  Geert De Cubber and Nikolaos Tsiogkas and Eric Demeester and
                  Rob Haelterman},
  title        = {{MineInsight}: A Multi-sensor Dataset for Humanitarian Demining Robotics in Off-Road Environments},
  journal      = {arXiv Preprint},
  year         = {2025},
  doi          = {10.48550/arXiv.2506.04842},
  url          = {https://arxiv.org/abs/2506.04842}
}

[10] License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
You are free to share and adapt this work for non-commercial purposes, as long as you credit the authors and apply the same license to any derivative works.

For full details, see:
CC BY-NC-SA 4.0 License

[11] Related Work
