A simple chatbot web application, implemented in Go, Python, and Node.js, that connects to a local LLM service (llama.cpp) to provide AI-powered responses.
The application uses the following environment variables, defined in the `.env` file:

- `LLM_BASE_URL`: The base URL of the LLM API
- `LLM_MODEL_NAME`: The model name to use
To change these settings, edit the `.env` file in the root directory of the project.
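For reference, a minimal `.env` might look like the following. The values shown are placeholders, not project defaults — substitute the URL and model name your llama.cpp server actually uses:

```env
LLM_BASE_URL=http://localhost:8000
LLM_MODEL_NAME=llama3.2
```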
- Clone the repository:

  ```bash
  git clone https://github.com/docker/hello-genai
  cd hello-genai
  ```
- Run the application using the script:

  ```bash
  ./run.sh
  ```
- Open your browser and visit the following links:
  - http://localhost:8080 for the GenAI Application in Go
  - http://localhost:8081 for the GenAI Application in Python
  - http://localhost:8082 for the GenAI Application in Node.js
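To see how one of the apps might talk to the LLM service, here is a minimal Go sketch that reads the two environment variables above and builds a request against llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint. The struct names and fallback values are illustrative assumptions, not the project's actual code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// chatRequest mirrors the OpenAI-compatible chat completions payload
// accepted by llama.cpp's built-in server.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// buildRequest constructs (but does not send) a chat completion request
// for the given prompt, using the same variables the apps load from .env.
func buildRequest(prompt string) (*http.Request, error) {
	// Fallback values below are illustrative placeholders.
	baseURL := os.Getenv("LLM_BASE_URL")
	if baseURL == "" {
		baseURL = "http://localhost:8000"
	}
	model := os.Getenv("LLM_MODEL_NAME")
	if model == "" {
		model = "llama3.2"
	}

	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildRequest("Hello!")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```

Sending the request with `http.DefaultClient.Do(req)` and decoding the JSON response would complete the round trip; the sketch stops at request construction so it runs without a live server.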
Requirements:

- macOS (recent version)
- Either:
  - Docker and Docker Compose (preferred)
  - Go 1.21 or later
- Local LLM server
If you're using a different LLM server configuration, you may need to modify the `.env` file.
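As an example, if your LLM server is another OpenAI-compatible service — say, Ollama on its default port — the `.env` might point there instead (the model name here is just an illustration):

```env
LLM_BASE_URL=http://localhost:11434
LLM_MODEL_NAME=llama3.2
```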