Merged
9 changes: 3 additions & 6 deletions README.md
@@ -27,25 +27,22 @@ While gs-madrona currently depends on Genesis, we plan to decouple it in the nea
- CUDA kernel caching with dirty-check rebuild
- Fixed vertex normal computation in the ray tracer
- Benchmark scripts comparing Madrona with other batch renderers, including IsaacLab and ManiSkill
- Support for normal and semantic/instance segmentation output
- Per-camera dynamic FOV, near/far plane control
- Light color specification and attenuation based on distance and angle

## Removed Features
- Legacy depth-only rendering via color buffer
- Batch rendering pipeline based on JAX

## Known Limitations
- Only color and depth outputs are currently supported
- Shadows are only cast from the first light with `castshadow=true`
- When rendering multiple cameras with different resolutions, the first camera’s resolution is used for the entire batch

## Roadmap / Future Plans
**gs-madrona** will continue evolving to support higher-quality rendering and broader functionality. Upcoming features include:
- Batch rendering support for cameras with varying resolutions
- Normal buffer and semantic/instance segmentation output
- Per-camera dynamic FOV control
- Camera-specific near/far plane configuration
- Light color specification
- Dynamic light parameters (position, direction, intensity, color, enable/disable)
- Light attenuation based on distance and angle
- Ambient lighting control (color and intensity)
- PBR material and texture support
- Output rendering results to video files
33 changes: 19 additions & 14 deletions scripts/perf_benchmark/README.md
@@ -29,21 +29,32 @@ perf_benchmark/
│ ├── benchmark_config_smoke_test.yml # Quick test configuration
│ ├── benchmark_config_madrona.yml # Madrona-specific config
│ ├── benchmark_config_omni.yml # Omniverse-specific config
│ ├── benchmark_config_maniskill.yml # ManiSkill-specific config
│ └── benchmark_config_full.yml # Comprehensive test config
```

## Quick Start


### 1. Optional steps
IsaacLab and ManiSkill need to be installed if they are to be benchmarked.

Install IsaacLab
- Download and install IsaacLab from https://developer.nvidia.com/isaac-sim
- Add IsaacLab to your PATH:

Install Maniskill
- Install ManiSkill2 following the [official instructions](https://github.com/haosulab/ManiSkill2)
To enable benchmarking with IsaacLab and ManiSkill, follow these optional setup steps:

- Install IsaacLab
- Download and install IsaacLab from [NVIDIA IsaacLab Documentation](https://isaac-sim.github.io/IsaacLab/main/source/setup/installation/index.html).
- Add the IsaacLab installation directory to your system `PATH`.
- Install ManiSkill
- Install ManiSkill2 by following [ManiSkill Documentation](https://maniskill.readthedocs.io/en/latest/user_guide/).
- Set Environment Variables
- Both IsaacLab and ManiSkill use the `ASSET_DIR` environment variable to locate the Genesis assets directory.
- Use `genesis.utils.misc.asset_dir()` in Genesis to retrieve the exact directory path and then set the environment variable:
```bash
export ASSET_DIR=/path/to/genesis/asset_dir
```
- Preprocess Required Assets
- IsaacLab benchmarks require MJCF assets to be preprocessed for compatibility with Omniverse.
```bash
python process_xml.py --file ./configs/benchmark_config_omni.yml
```
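
As a sanity check, the `ASSET_DIR` step above can also be done from Python. This is a minimal sketch: the local `asset_dir()` helper is a hypothetical stand-in for `genesis.utils.misc.asset_dir()`, defined here only so the snippet runs without Genesis installed — in a real environment, import the function from Genesis instead:

```python
import os

# Hypothetical stand-in for genesis.utils.misc.asset_dir(); replace with
# the real import where Genesis is available.
def asset_dir() -> str:
    return os.path.expanduser("~/genesis/assets")

# Export ASSET_DIR so IsaacLab/ManiSkill benchmark processes can locate assets.
os.environ["ASSET_DIR"] = asset_dir()
print(os.environ["ASSET_DIR"])
```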

### 2. Run a Quick Smoke Test

@@ -63,12 +74,6 @@ python batch_benchmark.py -f benchmark_config_full.yml
python batch_benchmark.py -f benchmark_config_full.yml -c /name/of/previous/run/folder
```

### 5. Preprocess MuJoCo XML Assets to make them compatible with Omniverse (if needed)

```bash
python process_xml.py --file ./genesis/assets/xml/franka_emika_panda/panda.xml
```

## Configuration Files

Configuration files are YAML-based and define the test parameters. Here's an example structure:
15 changes: 15 additions & 0 deletions scripts/perf_benchmark/batch_benchmark.py
@@ -225,6 +225,21 @@ def create_benchmark_result_file(continue_from):
return benchmark_result_file


def write_benchmark_result_file(args: BenchmarkArgs, performance_results: dict):
os.makedirs(os.path.dirname(args.benchmark_result_file), exist_ok=True)
with open(args.benchmark_result_file, "a") as f:
f.write(
f"succeeded,{args.mjcf},{args.renderer},"
f"{args.rasterizer},{args.n_envs},{args.n_steps},"
f"{args.resX},{args.resY},"
f"{args.camera_posX},{args.camera_posY},{args.camera_posZ},"
f"{args.camera_lookatX},{args.camera_lookatY},{args.camera_lookatZ},"
f"{args.camera_fov},"
f"{performance_results['time_taken_gpu']},{performance_results['time_taken_per_env_gpu']},{performance_results['time_taken_cpu']},"
f"{performance_results['time_taken_per_env_cpu']},{performance_results['fps']},{performance_results['fps_per_env']}\n"
)
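
Rows appended by `write_benchmark_result_file` can later be read back with the standard `csv` module. A sketch with illustrative values; the column labels are informal names inferred from the f-string above, not part of the benchmark code:

```python
import csv
import io

# One CSV row in the format written by write_benchmark_result_file
# (all values illustrative).
row = ("succeeded,franka_panda.xml,madrona,True,512,100,1280,720,"
       "1.5,0.0,0.7,0.0,0.0,0.0,45.0,"
       "2.31,0.0045,2.50,0.0049,22000.0,43.0")

# Informal labels matching the write order above.
fields = [
    "status", "mjcf", "renderer", "rasterizer", "n_envs", "n_steps",
    "resX", "resY",
    "camera_posX", "camera_posY", "camera_posZ",
    "camera_lookatX", "camera_lookatY", "camera_lookatZ",
    "camera_fov",
    "time_taken_gpu", "time_taken_per_env_gpu",
    "time_taken_cpu", "time_taken_per_env_cpu",
    "fps", "fps_per_env",
]
record = dict(zip(fields, next(csv.reader(io.StringIO(row)))))
print(record["fps"])  # → 22000.0
```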


def get_previous_runs(continue_from_file):
if continue_from_file is None:
return []
14 changes: 14 additions & 0 deletions scripts/perf_benchmark/benchmark_assets/plane_urdf/plane.mtl
@@ -0,0 +1,14 @@
newmtl Material
Ns 10.0000
Ni 1.5000
d 1.0000
Tr 0.0000
Tf 1.0000 1.0000 1.0000
illum 2
Ka 0.0000 0.0000 0.0000
Kd 1.0000 1.0000 1.0000
Ks 0.0000 0.0000 0.0000
Ke 0.0000 0.0000 0.0000
map_Ka cube.tga
map_Kd checker.png

26 changes: 26 additions & 0 deletions scripts/perf_benchmark/benchmark_assets/plane_urdf/plane.urdf
@@ -0,0 +1,26 @@
<?xml version="1.0" ?>
<robot name="plane">
<link name="planeLink">
<contact>
<lateral_friction value="1"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value="1.0"/>
<inertia ixx="1.0" ixy="0" ixz="0" iyy="1.0" iyz="0" izz="1.0"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="plane100.obj" scale="1 1 1"/>
</geometry>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 -5"/>
<geometry>
<box size="200 200 10"/>
</geometry>
</collision>
</link>
</robot>

22 changes: 22 additions & 0 deletions scripts/perf_benchmark/benchmark_assets/plane_urdf/plane100.obj
@@ -0,0 +1,22 @@
# Blender v2.66 (sub 1) OBJ File: ''
# www.blender.org
mtllib plane.mtl
o Plane
v 100.000000 -100.000000 0.000000
v 100.000000 100.000000 0.000000
v -100.000000 100.000000 0.000000
v -100.000000 -100.000000 0.000000

vt 100.000000 0.000000
vt 100.000000 100.000000
vt 0.000000 100.000000
vt 0.000000 0.000000



usemtl Material
s off
f 1/1 2/2 3/3
f 1/1 3/3 4/4


@@ -0,0 +1 @@
b3f970c53c90c6563970b4d47b8e0bab
18 changes: 18 additions & 0 deletions scripts/perf_benchmark/benchmark_assets/plane_usd/config.yaml
@@ -0,0 +1,18 @@
asset_path: genesis/assets/urdf/plane/plane.urdf
usd_dir: null
usd_file_name: null
force_usd_conversion: true
make_instanceable: true
fix_base: true
root_link_name: null
link_density: 0.0
merge_fixed_joints: true
convert_mimic_joints_to_normal_joints: false
joint_drive: null
collider_type: convex_hull
self_collision: false
replace_cylinders_with_capsules: false
collision_from_visuals: false
##
# Generated by UrdfConverter on 2025-06-10 at 18:41:48.
##
46 changes: 26 additions & 20 deletions scripts/perf_benchmark/benchmark_madrona.py
@@ -1,8 +1,9 @@
import os

import genesis as gs
from genesis.utils.image_exporter import FrameImageExporter

from batch_benchmark import BenchmarkArgs
from batch_benchmark import BenchmarkArgs, write_benchmark_result_file
from benchmark_profiler import BenchmarkProfiler


@@ -64,17 +65,17 @@ def init_gs(benchmark_args):
pos=(0.0, 0.0, 1.5),
dir=(1.0, 1.0, -2.0),
directional=True,
castshadow=True,
castshadow=False,
cutoff=45.0,
intensity=0.5,
)
scene.add_light(
pos=(4, -4, 4),
dir=(-1, 1, -1),
directional=False,
castshadow=True,
castshadow=False,
cutoff=45.0,
intensity=1,
intensity=0.5,
)
########################## build ##########################
scene.build(n_envs=benchmark_args.n_envs)
@@ -88,33 +89,38 @@ def run_benchmark(scene, benchmark_args):

# warmup
scene.step()
rgb, depth, _, _ = scene.render_all_cameras()
rgb, depth, _, _ = scene.render_all_cameras(rgb=True, depth=True)

# Profiler
profiler = BenchmarkProfiler(n_steps, n_envs)
output_dir = os.path.dirname(benchmark_args.benchmark_result_file)
os.makedirs(output_dir, exist_ok=True)
image_dirname = f"{benchmark_args.renderer}-{benchmark_args.rasterizer}-{benchmark_args.n_envs}-{benchmark_args.resX}"
image_dir = os.path.join(output_dir, image_dirname)
if n_steps < 10:
exporter = FrameImageExporter(image_dir)

for i in range(n_steps):
profiler.on_simulation_start()
scene.step()
profiler.on_rendering_start()
rgb, depth, _, _ = scene.render_all_cameras()
rgb, depth, _, _ = scene.render_all_cameras(rgb=True, depth=True)
profiler.on_rendering_end()

if n_steps < 10:
exporter.export_frame_all_cameras(i, rgb=rgb)
profiler.end()
profiler.print_summary()

time_taken_gpu = profiler.get_total_rendering_gpu_time()
time_taken_cpu = profiler.get_total_rendering_cpu_time()
time_taken_per_env_gpu = profiler.get_total_rendering_gpu_time_per_env()
time_taken_per_env_cpu = profiler.get_total_rendering_cpu_time_per_env()
fps = profiler.get_rendering_fps()
fps_per_env = profiler.get_rendering_fps_per_env()

# Append a line with all args and results in csv format
os.makedirs(os.path.dirname(benchmark_args.benchmark_result_file), exist_ok=True)
with open(benchmark_args.benchmark_result_file, "a") as f:
f.write(
f"succeeded,{benchmark_args.mjcf},{benchmark_args.renderer},{benchmark_args.rasterizer},{benchmark_args.n_envs},{benchmark_args.n_steps},{benchmark_args.resX},{benchmark_args.resY},{benchmark_args.camera_posX},{benchmark_args.camera_posY},{benchmark_args.camera_posZ},{benchmark_args.camera_lookatX},{benchmark_args.camera_lookatY},{benchmark_args.camera_lookatZ},{benchmark_args.camera_fov},{time_taken_gpu},{time_taken_per_env_gpu},{time_taken_cpu},{time_taken_per_env_cpu},{fps},{fps_per_env}\n"
)
performance_results = {
"time_taken_gpu": profiler.get_total_rendering_gpu_time(),
"time_taken_cpu": profiler.get_total_rendering_cpu_time(),
"time_taken_per_env_gpu": profiler.get_total_rendering_gpu_time_per_env(),
"time_taken_per_env_cpu": profiler.get_total_rendering_cpu_time_per_env(),
"fps": profiler.get_rendering_fps(),
"fps_per_env": profiler.get_rendering_fps_per_env(),
}
write_benchmark_result_file(benchmark_args, performance_results)

except Exception as e:
print(f"Error during benchmark: {e}")
raise