
Conversation

@ltngonnguyen

I have created a Docker image to streamline the setup process for LLaMA-X model training on GPU rental services like vast.ai. Currently, setting up the required dependencies such as CUDA and PyTorch is a time-consuming and repetitive chore, hindering the efficiency of researchers and developers. With this Docker image, we can eliminate the need to repeat these steps every single time, making the setup process quick and hassle-free.

This Docker image encapsulates the necessary software stack, including CUDA, PyTorch, and other dependencies, allowing users to spin up a ready-to-use environment for LLaMA-X model training in minutes.

The image itself is based on Nvidia's official CUDA 11.3 Docker image, with conda installing PyTorch and all other dependencies. I've tested it on a couple of different vast.ai GPU instances and it worked on all of them.
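
For reference, the Dockerfile follows roughly this shape. This is a minimal sketch rather than the exact file: the base image tag, the Llama-X repository URL, and the requirements.txt path are assumptions, and the published image may differ in detail.

```dockerfile
# Base: Nvidia's official CUDA 11.3 development image (exact tag assumed)
FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04

ENV DEBIAN_FRONTEND=noninteractive

# Basic tools needed to fetch conda and the training code
RUN apt-get update && \
    apt-get install -y --no-install-recommends git wget ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Miniconda to manage the Python environment
ENV CONDA_DIR=/opt/conda
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh && \
    bash /tmp/miniconda.sh -b -p $CONDA_DIR && \
    rm /tmp/miniconda.sh
ENV PATH=$CONDA_DIR/bin:$PATH

# PyTorch built against CUDA 11.3, installed through conda
RUN conda install -y pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch && \
    conda clean -afy

# LLaMA-X code and its Python dependencies (repo URL and requirements path assumed)
RUN git clone https://github.com/AetherCortex/Llama-X.git /workspace/Llama-X && \
    pip install -r /workspace/Llama-X/requirements.txt

WORKDIR /workspace/Llama-X
```

It can then be built with `docker build -t llama-x-train .` and run on a GPU instance with `docker run --gpus all -it llama-x-train` (the image name here is just a placeholder).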

Adding the Docker image for fast deployment on GPU rental services such as vast.ai
@sdake

sdake commented Jun 15, 2023

@ltngonnguyen looks pretty cool. Can you share the Dockerfile you used?

Thank you,
-steve
