
GPU acceleration for training, similar to what IsaacLab does. #6244

@linqi-ye

Description


Is your feature request related to a problem? Please describe.
We are using ML-Agents for robot training (https://github.com/loongOpen/Unity-RL-Playground) and would like to know how to achieve large-scale parallel training acceleration using GPUs, similar to NVIDIA's IsaacLab. Currently, on the same hardware configuration, our training speed is significantly slower than IsaacLab's.

Describe the solution you'd like
I'd like a concise guide on optimizing our ML-Agents setup for large-scale GPU-accelerated parallel training, including hardware configs, software optimizations, parallel training strategies, performance tuning, and relevant case studies/examples to match IsaacLab's training speed.
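For context on what is already available: ML-Agents exposes some parallelism knobs today, though they give process-level parallelism (separate Unity instances stepping CPU physics) rather than the GPU-resident simulation IsaacLab uses, which is a large part of the speed gap. A minimal sketch of how they might be combined is below; the behavior name `Robot` and the hyperparameter values are illustrative placeholders, and the flag names should be verified against your installed ML-Agents version:

```yaml
# trainer_config.yaml -- behavior name "Robot" is a placeholder; match your scene
behaviors:
  Robot:
    trainer_type: ppo
    hyperparameters:
      batch_size: 4096        # larger batches help keep the GPU busy
      buffer_size: 40960      # typically ~10x batch_size for PPO
    network_settings:
      hidden_units: 256
      num_layers: 2
    max_steps: 5.0e7

# Launch training with several concurrent Unity worker processes,
# PyTorch pinned to the GPU, and rendering disabled:
#   mlagents-learn trainer_config.yaml --num-envs=8 --torch-device=cuda:0 --no-graphics
```

Even with `--num-envs` raised, each environment still simulates on the CPU, so throughput scales with core count rather than with the GPU the way IsaacLab's batched GPU physics does.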

Describe alternatives you've considered
I've considered moving to a higher-performance hardware configuration as an alternative way to accelerate ML-Agents training.


Labels: request (Issue contains a feature request.)
