# UniLLM

UniLLM is a Python library and command-line tool that provides unified access to a range of large language models, including ChatGPT, Llama2, Mistral, Claude, MistralAI, RAG, Llama3, and CommandRPlus. It simplifies integrating these models into your projects and also supports direct interaction from the command line.

## Features
- Unified API for interacting with multiple language models.
- Support for both API and local models.
- Extensible framework allowing the addition of more models in the future.
- Command-line tool for easy interaction with models.
- Configuration via YAML file for API keys.
## Installation

Install UniLLM using pip:

```bash
pip install unillm
```
## Configuration

Configure your API keys for the models by creating a `.unillm.yaml` file in your home directory:

```yaml
chatgpt: YOUR_CHATGPT_API_KEY
claude: YOUR_CLAUDE_API_KEY
mistralai: YOUR_MISTRALAI_API_KEY
# Add other model API keys as needed
```
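To show the expected shape of that file, here is a minimal loader for flat `key: value` entries. It is a stand-in sketch: UniLLM itself presumably uses a full YAML parser, and the function name here is hypothetical.

```python
import tempfile
from pathlib import Path

def load_api_keys(path):
    """Parse a flat `key: value` file like .unillm.yaml (minimal stand-in
    for a real YAML parser; ignores comments and blank lines)."""
    keys = {}
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments
        if not line or ":" not in line:
            continue
        name, _, value = line.partition(":")
        keys[name.strip()] = value.strip()
    return keys

# Demo with a temporary file shaped like the config above.
sample = "chatgpt: KEY1\nclaude: KEY2\n# Add other model API keys as needed\n"
with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / ".unillm.yaml"
    cfg.write_text(sample)
    keys = load_api_keys(cfg)
print(keys["chatgpt"])  # KEY1
```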
## Supported Models

| Model | API Support | Local Support |
|---|---|---|
| ChatGPT | ✅ | |
| Llama2 | ✅ | |
| Mistral | ✅ | ✅ |
| Claude | ✅ | |
| MistralAI | ✅ | |
| RAG | ✅ | ✅ |
| Llama3 | ✅ | |
| CommandRPlus | ✅ | |
## Usage as a Library

Interact with language models seamlessly in your Python projects:

```python
from unillm import UniLLM

# Initialize Llama2 with specific settings
model = UniLLM('Llama2', peft_path="path_to_peft_model", max_new_tokens=1024)

# Generate a response
response = model.generate_response("How can AI help humans?")
print(response)
```
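Because real backends need API keys or local weights, code built around this interface is easiest to exercise with a stub that shares the `generate_response` method shown above. The stub and helper below are hypothetical examples, not part of UniLLM.

```python
class StubLLM:
    """Stand-in sharing the generate_response interface assumed from the
    example above; records prompts and returns a canned reply, so no API
    keys or network access are needed."""
    def __init__(self, canned="stub reply"):
        self.canned = canned
        self.prompts = []

    def generate_response(self, prompt: str) -> str:
        self.prompts.append(prompt)
        return self.canned

def summarize(model, text):
    # Works with any object exposing generate_response: a real
    # UniLLM('Llama2', ...) instance or the stub above.
    return model.generate_response(f"Summarize: {text}")

llm = StubLLM("A short summary.")
print(summarize(llm, "Large language models..."))  # A short summary.
```

Swapping the stub for a real model is then a one-line change, since both sides of the call agree only on the method name.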
## Usage as a CLI

Start the CLI by running:

```bash
unillm
```

Follow the prompts to select a model and enter your queries. For example:

```
Please choose a model by number (default is 1):
1: ChatGPT
2: Llama2
...
👨 Please Ask a Question: What are the latest AI trends?
🤖 (ChatGPT): AI trends include...
```

To exit, type `exit`.
## Contributing

We welcome contributions! If you have suggestions or enhancements, fork the repository, create a feature branch, and submit a pull request.
## License

This project is licensed under the MIT License - see the LICENSE file for details.