
q

A fast, minimalistic, and powerful command-line AI assistant that brings LLMs directly to your terminal. Just BYOK (Bring Your Own Key) and get started.

[Screenshots: a basic query, the setup wizard, a query with the -t flag, and the stats output]

Features

  • Fast & Lightweight: Minimal dependencies, quick responses
  • Beautiful Output: Rich markdown rendering with syntax highlighting
  • Multiple AI Providers: Support for Google Gemini and OpenAI models
  • Flexible Configuration: Easy setup with interactive wizard
  • Response Control: Choose response length (tiny, medium, large)
  • Cross-Platform: Works on Linux, macOS, and Windows

Install, Update & Uninstall

Quick Method (Recommended)

Linux/macOS

curl -fsSL https://raw.githubusercontent.com/atharva-again/q/main/install.sh | bash

Windows (PowerShell)

irm https://raw.githubusercontent.com/atharva-again/q/main/install.ps1 | iex

This will:

  • Download the appropriate binary for your system if q is not already installed
  • Install it to a directory on your PATH
  • If an existing installation is found, ask whether you want to update or uninstall it
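
After the script finishes, you can confirm the binary is reachable on your PATH (open a new shell session so PATH changes take effect):

q -h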

Manual Installation

  1. Download the latest release from GitHub Releases
  2. Extract the archive
  3. Move the binary to a directory in your PATH
  4. Run q -S to configure (see the sketch below)
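
On Linux or macOS, those steps might look like the following sketch; the release asset name is illustrative, so check the Releases page for the exact file for your platform:

# download and extract a release archive (asset name is an example)
curl -LO https://github.com/atharva-again/q/releases/latest/download/q_linux_amd64.tar.gz
tar -xzf q_linux_amd64.tar.gz

# move the binary onto your PATH, then configure
sudo mv q /usr/local/bin/
q -S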

Setup & Configuration

Run setup to configure your AI provider and preferences:

q -S

This will guide you through:

  • Choosing your AI provider (Gemini/OpenAI)
  • Selecting a model
  • Setting default response length
  • Entering your API key

You can find API keys in Google AI Studio (for Gemini) and in the OpenAI platform dashboard (for OpenAI).

Build from Source

git clone https://github.com/atharva-again/q.git
cd q
go build -o q .
sudo mv q /usr/local/bin/
q -S

Usage

Basic Query

q What is the capital of India?
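
Depending on your shell, characters such as ? or ' in a prompt may be expanded or interpreted before they reach q, so quoting the prompt is the safer form:

q "What is the capital of India?"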

Response Length Options

# Short responses
q Explain quantum computing -t

# Medium responses (default)
q How does photosynthesis work? -m

# Detailed responses
q Write a comprehensive guide to Docker -l

Help

q -h  # Show help
q -S  # Run setup

Configuration

Configuration is stored in:

  • Linux/macOS: ~/.config/q/config.json
  • Windows: %APPDATA%\q\config.json

Example config:

{
  "provider": "gemini",
  "model": "gemini-2.0-flash",
  "api_key": "your-api-key-here",
  "default_length": "medium"
}
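
If you prefer not to rerun q -S, the file can also be created or edited by hand. A minimal sketch for Linux/macOS, assuming the path and fields shown above (the setup wizard remains the intended way to configure):

# write the config file directly (fields taken from the example above)
mkdir -p ~/.config/q
cat > ~/.config/q/config.json << 'EOF'
{
  "provider": "gemini",
  "model": "gemini-2.0-flash",
  "api_key": "your-api-key-here",
  "default_length": "medium"
}
EOF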

Supported Models

Google Gemini

  • gemini-2.5-flash
  • gemini-2.5-pro
  • gemini-2.0-flash
  • gemini-2.5-flash-lite
  • gemini-2.0-flash-lite

OpenAI

  • gpt-5
  • gpt-5-mini
  • gpt-5-nano
  • gpt-4.1

Development

Prerequisites

  • Go 1.25.4 or later
  • Git

Building

# Build for current platform
go build -o q .

# Build for multiple platforms
./build.sh
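
If you want to cross-compile without build.sh, the standard Go toolchain environment variables work; the output names below are only illustrative:

GOOS=linux GOARCH=amd64 go build -o q-linux-amd64 .
GOOS=darwin GOARCH=arm64 go build -o q-darwin-arm64 .
GOOS=windows GOARCH=amd64 go build -o q-windows-amd64.exe .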

Testing

go test ./...

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Author

Atharva Verma - [email protected]

