Is your feature request related to a problem? Please describe.
We are using ML-Agents for robot training (https://github.com/loongOpen/Unity-RL-Playground) and would like to know how to achieve large-scale, GPU-accelerated parallel training, similar to NVIDIA's IsaacLab. On the same hardware, our training is currently significantly slower than IsaacLab's.
Describe the solution you'd like
I'd like a concise guide to optimizing an ML-Agents setup for large-scale, GPU-accelerated parallel training: recommended hardware configurations, software optimizations, parallel-training strategies, performance tuning, and relevant case studies or examples, with the goal of matching IsaacLab's training speed. A sketch of the kind of parallel launch I have in mind follows.
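To make the request concrete, here is a rough sketch of what I am doing today, using the documented `--num-envs`, `--torch-device`, and `--no-graphics` options of `mlagents-learn`. The environment binary path, trainer config file, and run ID below are placeholders for our own project:

```bash
# Rough sketch (not a verified recipe): launch multiple copies of a built
# Unity environment in parallel and train on the GPU.
#   --num-envs     spawns parallel environment worker processes
#   --torch-device selects the PyTorch device used by the trainer
#   --no-graphics  disables rendering to cut per-worker overhead
# ./robot_env.x86_64, config/robot_ppo.yaml, and robot-run-01 are
# placeholders for our build, trainer config, and run ID.
mlagents-learn config/robot_ppo.yaml \
  --env=./robot_env.x86_64 \
  --num-envs=16 \
  --torch-device=cuda \
  --no-graphics \
  --run-id=robot-run-01
```

Even with many workers, my understanding is that the Unity simulations themselves still step on the CPU as separate processes, whereas IsaacLab keeps the physics simulation resident on the GPU; that difference may be the core of the gap I am asking about.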
Describe alternatives you've considered
I've considered simply moving to higher-performance hardware as an alternative way to accelerate ML-Agents training.