Prompt Improver: Grammarly for AI Prompts
Large Language Models like ChatGPT are extremely powerful, but most users struggle to get good results because their prompts are poorly phrased, vague, or incomplete.
People usually know what they want, but not how to ask in a way the AI clearly understands.
As a result:
- Prompts are ambiguous or underspecified
- The AI produces irrelevant or low-quality responses
- Users blame the model instead of the prompt
There is a clear gap between human thinking and AI understanding.
The core idea behind Prompt Improver is simple:
Improve the prompt, not the answer.
Instead of building another chatbot, this project acts as a prompt-rewriting layer that transforms messy or unclear user inputs into clean, structured, AI-ready prompts.
Think of it as:
Grammarly — but for AI prompts.
The tool:
- Does not answer the user’s question
- Does not add new intent
- Only rewrites the prompt to improve clarity, structure, and usefulness
How it works:
1. A user writes a rough or unclear prompt.
2. The prompt is sent to a backend API.
3. The backend uses an LLM only for prompt rewriting, guided by strict prompt-engineering rules.
4. The rewritten prompt is returned to the user.
5. The user can then submit a much clearer prompt to ChatGPT or other AI tools.
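The round trip above can be sketched as a small TypeScript client. The `/api/rewrite` path comes from the architecture notes; the function names, request shape, and validation rules here are hypothetical:

```typescript
// Hypothetical request/response shapes for the rewrite endpoint.
interface RewriteRequest {
  prompt: string;
}
interface RewriteResponse {
  rewritten: string;
}

// Normalize and validate user input before it is sent to the backend.
function buildRewriteRequest(prompt: string): RewriteRequest {
  const trimmed = prompt.trim();
  if (trimmed.length === 0) {
    throw new Error("Prompt must not be empty");
  }
  return { prompt: trimmed };
}

// POST the rough prompt to the backend and return the improved version.
async function rewritePrompt(apiBase: string, prompt: string): Promise<string> {
  const res = await fetch(`${apiBase}/api/rewrite`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRewriteRequest(prompt)),
  });
  if (!res.ok) {
    throw new Error(`Rewrite failed with status ${res.status}`);
  }
  const data = (await res.json()) as RewriteResponse;
  return data.rewritten;
}
```

Keeping validation in a separate pure function means the extension can reject empty input locally without a network round trip.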
The system is designed to work before the prompt reaches an AI model.
Architecture:
- A serverless backend API exposes a `/api/rewrite` endpoint.
- The rewrite logic is isolated in a dedicated module for easy iteration.
- A Chrome extension (local install) intercepts user input on ChatGPT and replaces it with the improved prompt.
- The backend and extension are fully decoupled.
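A minimal sketch of how the isolated rewrite module might enforce rewriting-only behavior. The names and system-prompt text are assumptions, and the real OpenAI call is replaced by an injected `callModel` function so the rule logic stays decoupled and testable:

```typescript
// Hypothetical shape of the dedicated rewrite module.
type ChatMessage = { role: "system" | "user"; content: string };
type CallModel = (messages: ChatMessage[]) => Promise<string>;

// Strict rewriting-only instructions: improve the prompt,
// never answer it, never add new intent.
const SYSTEM_PROMPT = [
  "You are a prompt rewriter.",
  "Rewrite the user's prompt to be clear, specific, and well structured.",
  "Do NOT answer the prompt. Do NOT add new intent.",
  "Return only the rewritten prompt.",
].join(" ");

// Build the message array sent to the LLM.
function buildMessages(userPrompt: string): ChatMessage[] {
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: userPrompt },
  ];
}

// Run the rewrite through whatever model backend is injected.
async function rewrite(userPrompt: string, callModel: CallModel): Promise<string> {
  const rewritten = await callModel(buildMessages(userPrompt));
  return rewritten.trim();
}
```

In the real backend, `callModel` would wrap the OpenAI chat-completions call with `gpt-4o-mini`; injecting it keeps the prompt-engineering rules model-agnostic, matching the design principle below.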
Tech stack:
- Node.js (v20) – runtime environment
- TypeScript – type safety and maintainability
- OpenAI API (gpt-4o-mini) – used strictly for prompt rewriting
- Custom Prompt Engineering – enforces rewriting-only behavior
- Vercel Serverless Functions – scalable deployment target
- dotenv – secure environment variable management
- tsx – TypeScript execution during local development
- Chrome Extension (Manifest v3)
- Content Scripts – interact with ChatGPT input fields
- Background Service Worker – communicates with the backend API
- Thunder Client – API testing
- Git & GitHub – version control and collaboration
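A plausible Manifest v3 skeleton consistent with the minimal-permissions goal. This is a sketch only; the actual file names, version, and host patterns are assumptions:

```json
{
  "manifest_version": 3,
  "name": "Prompt Improver",
  "version": "0.1.0",
  "permissions": ["activeTab"],
  "host_permissions": ["https://chatgpt.com/*"],
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    {
      "matches": ["https://chatgpt.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

Scoping `host_permissions` to a single origin and avoiding broad permissions like `tabs` keeps the extension's footprint small, in line with the design principles below.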
Design principles:
- Prompt-first, not answer-first
- Separation of concerns (rewrite logic ≠ AI response logic)
- Model-agnostic design (can support other LLMs later)
- Minimal permissions in the browser extension
- Security-first handling of API keys
Current status:
- Backend API fully implemented and AI-integrated
- Prompt rewriting logic tested and refined
- Chrome extension functional via local installation
- Ready for deployment and further UX improvements
Planned improvements:
- Support for multiple rewrite modes (coding, exams, creative writing)
- Confirmation UI (accept / undo rewrite)
- Support for other AI platforms (Gemini, Claude)
- Rate limiting and lightweight authentication
- Public Chrome Web Store release
This project demonstrates:
- Practical prompt engineering
- Real-world AI integration (beyond chatbots)
- Clean backend architecture
- Thoughtful product design around AI usability
Prompt Improver focuses on making AI tools more accessible by fixing the input, not overengineering the output.
MIT License