The Intelligent Terminal | Context-Aware Natural Language Commands
Demo video: demo.mp4
- ⚡ Natural Language to Shell Commands: Converts plain-English queries into accurate shell commands.
- 🖥️ Cross-Platform Compatibility: Works seamlessly across Windows, Linux, and macOS.
- 🤖 Hybrid AI-Model Support: Supports both local (Ollama) and cloud-based (Groq, OpenAI, Google, etc.) LLMs for flexibility.
- 🔒 Privacy-First Approach: Defaults to local models and runs completely offline, so no data leaves your device unless you enable cloud APIs.
- 🎯 Context-Aware Execution: Remembers command history, tracks files, and adapts suggestions accordingly.
- ⚠️ Secure Command Execution: Blocks dangerous commands and asks for confirmation before running risky operations (see the illustrative session after this list).
- 🛠️ Intelligent Debugging & Auto-Correction: Identifies errors, autonomously debugs issues, and suggests corrected commands.
- 📁 Smart Autocompletion: Provides tab completion for files and folders in the current working directory.
- 💬 Direct Execution & Queries: Directly execute shell commands with '!' (e.g., !ls -la), or ask shell-related questions using '?' (e.g., How do I create a new SSH key?).
- 🐳 Built-in Support for Git, Docker, and Dev Tools: Seamlessly understands and executes Git, Docker, Kubernetes, and package-manager commands.
- 🏷️ User-Defined Aliases: The alias system lets you create, remove, list, import, and export shortcuts for complex commands.
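For a flavor of the confirmation flow, here is a hypothetical session; the prompt text and the suggested command are illustrative sketches, not PromptShell's verbatim output:

$ delete all temporary files in this project
rm -rf ./tmp/*    # suggested command (illustrative)
This command is potentially destructive. Execute? [y/N] y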
Requirements:
- Python 3.9+
- pip (Windows) or pipx (Linux & macOS)

Installation:
$ pip install promptshell      # Windows
$ pipx ensurepath              # Ensure pipx is in PATH (Linux/macOS)
$ pipx install promptshell     # Linux & macOS
$ promptshell                  # Launch PromptShell

Notes:
- Local models need 4GB+ RAM (llama3:8b) to 16GB+ (llama3:70b).
- API performance varies by provider (deepseek-r1-distill-llama-70b on Groq is recommended).
- Response quality depends on the selected model's capabilities.
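To confirm the install is on your PATH and check whether your machine meets the RAM guidance above, you can use standard pip/pipx and OS tools (nothing here is PromptShell-specific):

$ pipx list | grep promptshell                      # Linux & macOS: confirm install
$ pip show promptshell                              # Windows: confirm install
$ free -h                                           # Linux: total/available RAM
$ sysctl hw.memsize                                 # macOS: physical RAM in bytes
$ systeminfo | findstr /C:"Total Physical Memory"   # Windows: installed RAM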
# Install Ollama for local LLMs
$ curl -fsSL https://ollama.com/install.sh | sh # Linux
$ brew install ollama # MacOS
$ winget install Ollama.Ollama # Windows (via PowerShell as Administrator)
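# If the installer did not register Ollama as a background service,
# start the server manually (a standard Ollama command):
$ ollama serve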
# Verify Installation
$ ollama --version
# Get base model
$ ollama pull <model_name>
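# For example, pull one of the models that appears in the wizard listing below
# (any model from the Ollama registry works; llama3.1:8b is just the first entry there):
$ ollama pull llama3.1:8b
$ ollama list    # confirm the model is installed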
# Interactive configuration wizard
$ --config
? Select operation mode: (Use arrow keys)
» local (Privacy-first, needs 4GB+ RAM)
api (Faster but requires internet)
# If local mode is selected
? Choose local model: (Use arrow keys) # lists ollama models installed on your system
» llama3.1:8b
llama3:8b-instruct-q4_1
deepseek-r1:latest
mistral:latest
# If API mode is selected
? API provider selection: (Use arrow keys)
» Groq
OpenAI
Google
Anthropic
Fireworks
OpenRouter
Deepseek
? Select model for Groq: (Use arrow keys)
» llama-3.1-8b-instant
deepseek-r1-distill-llama-70b
gemma2-9b-it
llama-3.3-70b-versatile
llama3-70b-8192
llama3-8b-8192
mixtral-8x7b-32768
Custom model...
? Enter API key for Groq: #[hidden input]
✔ Configuration updated!
Saved to /home/username/.config/PromptShell/promptshell_config.conf # (Linux & macOS)
Saved to C:\Users\username\AppData\Roaming\PromptShell\promptshell_config.conf #(Windows)
Active model: #[Selected-model]
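The wizard writes a plain config file at the platform paths shown above; you can inspect it with standard shell tools (nothing PromptShell-specific; the contents depend on your choices):

$ cat ~/.config/PromptShell/promptshell_config.conf      # Linux & macOS
$ type %APPDATA%\PromptShell\promptshell_config.conf     # Windows (cmd)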
# Start the PromptShell REPL
$ promptshell
# Execute natural language queries
$ backup all .txt files in a folder named backup
# Directly execute raw shell commands (bypassing AI processing) with '!'
$ !mkdir backup && copy *.txt backup\
# Ask questions by prefixing or suffixing your query with '?'
$ What's the command to list all hidden files?
# Configure or change the LLM provider
$ --config
# Start the interactive tutorial
$ --tutorial
# View help and usage instructions
$ --help
# Clear the terminal screen
$ clear
# Exit PromptShell
$ quit

# Display the current version of PromptShell
$ --version
Create shortcuts for frequently used commands:
# Create an alias
alias add gpm "git push origin main" # without description
alias add gpn "git push origin main" --desc "Push changes to origin/main" # with description
# Use the alias
!gpm
# List all aliases
alias list
# Show a specific alias
alias list <alias_name>
# Remove an alias
alias remove gpm
# Remove all aliases
alias clear
# Import/export aliases
alias import ~/backup/aliases.json
alias export ~/backup/aliases.json
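For reference, a plausible shape for the exported file is sketched below; the actual schema is defined by PromptShell and may differ, so treat the keys and layout as purely hypothetical:

$ cat ~/backup/aliases.json
{
  "gpm": { "command": "git push origin main", "description": "" },
  "gpn": { "command": "git push origin main", "description": "Push changes to origin/main" }
}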
PromptShell is currently in the alpha stage of development.

Known Limitations:
- Ollama models may hallucinate and produce inaccurate responses.
- Some API providers may have rate limits or require paid plans.
Roadmap:
- Local LLM support
- Interactive configuration setup
- Improve command execution safety measures
- Add support for user-defined command aliases
- Implement speech input support
- Expand model compatibility (e.g., fine-tuned small-scale models)
We welcome contributions! Here's how to help:
- Fork the repository.
- Create a branch: git checkout -b feature/your-idea
- Commit changes: git commit -m "Add your feature"
- Push to the branch: git push origin feature/your-idea
- Open a pull request.
This project is licensed under the Apache 2.0 License. See LICENSE for details.
- Built by Kirti Rathi
- Email: [email protected]
Note:
- Always verify commands before execution.
- Use at your own risk with critical operations.
- This project is not affiliated with any API or model providers.
- Local models require adequate system resources.
- Internet access is required for API mode.



