Prompt Improver 🚀

Grammarly for AI Prompts


🧩 Problem

Large language models (LLMs) such as ChatGPT are extremely powerful, but most users struggle to get good results because their prompts are vague, poorly phrased, or incomplete.
People usually know what they want, but not how to ask for it in a way the AI clearly understands.

As a result:

  • Prompts are ambiguous or underspecified
  • The AI produces irrelevant or low-quality responses
  • Users blame the model instead of the prompt

There is a clear gap between human thinking and AI understanding.


💡 Idea

The core idea behind Prompt Improver is simple:

Improve the prompt, not the answer.

Instead of building another chatbot, this project acts as a prompt-rewriting layer that transforms messy or unclear user inputs into clean, structured, AI-ready prompts.

Think of it as:

Grammarly — but for AI prompts.

The tool:

  • Does not answer the user’s question
  • Does not add new intent
  • Only rewrites the prompt to improve clarity, structure, and usefulness

🛠️ Solution

How it works

  1. A user writes a rough or unclear prompt.
  2. The prompt is sent to a backend API.
  3. The backend uses an LLM only for prompt rewriting, guided by strict prompt-engineering rules.
  4. The rewritten prompt is returned to the user.
  5. The user can now submit a much clearer prompt to ChatGPT or other AI tools.

The system is designed to work before the prompt reaches an AI model.
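The rewrite step (3) can be sketched as a small backend module. This is an illustrative sketch, not the repository's actual code: the function names are invented, and it assumes Node 20's built-in `fetch` plus an `OPENAI_API_KEY` environment variable loaded via dotenv.

```typescript
// Sketch of the rewrite call (names illustrative, not from the repo).
// The system prompt enforces the rewriting-only behavior described above.
const SYSTEM_PROMPT =
  "Rewrite the user's prompt for clarity and structure. " +
  "Do not answer it and do not add new intent. " +
  "Return only the rewritten prompt.";

// Pure helper: builds the Chat Completions request body.
export function buildRewriteRequest(rawPrompt: string) {
  return {
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: rawPrompt },
    ],
  };
}

// Sends the raw prompt to the OpenAI API and returns the rewritten version.
export async function rewritePrompt(rawPrompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildRewriteRequest(rawPrompt)),
  });
  const data = await res.json();
  // Fall back to the original prompt if the API returns nothing usable.
  return data.choices?.[0]?.message?.content?.trim() ?? rawPrompt;
}
```

Keeping the request construction in a pure helper makes the prompt-engineering rules easy to iterate on and test without network calls.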


Architecture Overview

  • A serverless backend API exposes a /api/rewrite endpoint.
  • The rewrite logic is isolated into a dedicated module for easy iteration.
  • A Chrome extension (local install) intercepts user input on ChatGPT and replaces it with the improved prompt.
  • The backend and extension are fully decoupled.
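The /api/rewrite endpoint could look like the following sketch. A handler factory keeps the endpoint decoupled from the rewrite module, mirroring the separation described above; all names are illustrative, and the `(req, res)` shape assumes Vercel's Node.js serverless runtime.

```typescript
// api/rewrite.ts — minimal sketch of the serverless endpoint (names illustrative).

// Pure helper: validates the request body; returns an error message or null.
export function validateBody(body: unknown): string | null {
  if (typeof body !== "object" || body === null) return "Missing JSON body";
  const prompt = (body as Record<string, unknown>).prompt;
  if (typeof prompt !== "string" || prompt.trim() === "") {
    return "Field 'prompt' must be a non-empty string";
  }
  return null;
}

// Factory: injects the rewrite function so the endpoint stays decoupled
// from the rewrite module and is trivial to test with a stub.
export function makeRewriteHandler(rewrite: (prompt: string) => Promise<string>) {
  return async function handler(req: any, res: any) {
    if (req.method !== "POST") {
      return res.status(405).json({ error: "Method not allowed" });
    }
    const error = validateBody(req.body);
    if (error) return res.status(400).json({ error });
    const improved = await rewrite(req.body.prompt);
    return res.status(200).json({ improved });
  };
}
```

In a Vercel deployment the default export would be `makeRewriteHandler(rewritePrompt)`, where `rewritePrompt` is the rewrite module's entry point.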

⚙️ Tech Stack

Backend

  • Node.js (v20) – runtime environment
  • TypeScript – type safety and maintainability
  • OpenAI API (gpt-4o-mini) – used strictly for prompt rewriting
  • Custom Prompt Engineering – enforces rewriting-only behavior
  • Vercel Serverless Functions – scalable deployment target
  • dotenv – secure environment variable management
  • tsx – TypeScript execution during local development

Browser Extension

  • Chrome Extension (Manifest v3)
  • Content Scripts – interact with ChatGPT input fields
  • Background Service Worker – communicates with the backend API
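The background service worker's relay role can be sketched as below. The backend URL is a placeholder, the message shape is invented for illustration, and the `chrome` global is supplied by the extension runtime rather than this file.

```typescript
// background.ts — sketch of the service worker relaying rewrites (illustrative).
// The extension runtime provides `chrome`; declared here only for type-checking.
declare const chrome: any;

const BACKEND_URL = "https://your-deployment.vercel.app/api/rewrite"; // placeholder

// Pure helper: builds the fetch options for the /api/rewrite call.
export function buildRewriteFetchOptions(prompt: string) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  };
}

// Relay messages from the content script to the backend API.
if (typeof chrome !== "undefined" && chrome.runtime?.onMessage) {
  chrome.runtime.onMessage.addListener(
    (message: any, _sender: any, sendResponse: (r: any) => void) => {
      if (message?.type !== "REWRITE") return;
      fetch(BACKEND_URL, buildRewriteFetchOptions(message.prompt))
        .then((r) => r.json())
        .then((data) => sendResponse({ improved: data.improved }))
        .catch(() => sendResponse({ improved: message.prompt })); // keep original on failure
      return true; // keep the message channel open for the async response
    }
  );
}
```

Routing network access through the service worker lets the content script stay limited to reading and replacing the ChatGPT input field, keeping extension permissions minimal.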

Tooling

  • Thunder Client – API testing
  • Git & GitHub – version control and collaboration

✅ Key Design Principles

  • Prompt-first, not answer-first
  • Separation of concerns (rewrite logic ≠ AI response logic)
  • Model-agnostic design (can support other LLMs later)
  • Minimal permissions in the browser extension
  • Security-first handling of API keys

📌 Current Status

  • Backend API fully implemented and AI-integrated
  • Prompt rewriting logic tested and refined
  • Chrome extension functional via local installation
  • Ready for deployment and further UX improvements

🚧 Future Enhancements

  • Support for multiple rewrite modes (coding, exams, creative writing)
  • Confirmation UI (accept / undo rewrite)
  • Support for other AI platforms (Gemini, Claude)
  • Rate limiting and lightweight authentication
  • Public Chrome Web Store release

🧠 Why This Project Matters

This project demonstrates:

  • Practical prompt engineering
  • Real-world AI integration (beyond chatbots)
  • Clean backend architecture
  • Thoughtful product design around AI usability

Prompt Improver focuses on making AI tools more accessible by fixing the input, not overengineering the output.


📄 License

MIT License
