After Grammarly disabled its API, no equivalent grammar-checking tool exists for VSCode. While LTeX catches spelling mistakes and some grammatical errors, it lacks the deeper linguistic understanding that Grammarly provides.
This extension bridges the gap by leveraging large language models (LLMs). It chunks text into paragraphs, asks an LLM to proofread each paragraph, and highlights potential errors. Users can then click on highlighted errors to view and apply suggested corrections.
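The paragraph-chunking step described above could look like the following minimal TypeScript sketch. Everything here (the `Chunk` shape, the function name) is illustrative, not the extension's actual API: the idea is just to split the document on blank lines while recording each paragraph's character offset, so that errors found by the LLM can later be mapped back to editor positions.

```typescript
// Illustrative sketch of paragraph chunking (names are hypothetical,
// not the extension's real API).

interface Chunk {
  text: string;   // the paragraph content
  offset: number; // character offset of the paragraph in the document
}

function chunkIntoParagraphs(document: string): Chunk[] {
  const chunks: Chunk[] = [];
  // Match runs of non-blank lines separated by one or more blank lines.
  const re = /[^\n]+(?:\n[^\n]+)*/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(document)) !== null) {
    chunks.push({ text: match[0], offset: match.index });
  }
  return chunks;
}
```

Keeping the offset alongside the text is what lets a diagnostic for "word 3 of paragraph 2" be translated into an absolute document range for highlighting.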
- LLM-powered grammar checking in American English
- Inline corrections via quick fixes
- Choice of models: use a local `llama3.2:3b` model via Ollama or `gpt-4o-mini` through the VSCode LM API
- Rewrite suggestions to improve clarity
- Synonym recommendations for better word choices
When the first command is executed, a dialog appears allowing users to select either a local Ollama model or the GitHub Copilot model.
- Local Model: Requires installing and running a local Ollama server.
- Online Model: Requires a GitHub Copilot subscription.
- "LLM Writing Tool: Start Text Check for Current Document"
  Continuously checks the text in the current document. Prompts the user to select an LLM model.
- "LLM Writing Tool: Stop Text Check for Current Document"
  Stops real-time grammar checking.
- "LLM writing tool: Rewrite current selection"
  Rewrites the selected text for clarity.
- "LLM writing tool: Get synonyms for selection"
  Suggests synonyms for the selected expression.
- "LLM writing tool: Select model"
  Selects the LLM model to use for grammar checking. Stops real-time grammar checking if it is running.
- Install the extension from the VSCode Marketplace.
- Install Ollama and pull `llama3.2:3b` for local grammar checking, or subscribe to GitHub Copilot for online LLM access.
- The extension splits the text into sections and sends them to the selected LLM for proofreading.
- It then compares the LLM’s suggestions with the original text to detect changes.
- Detected errors are highlighted, and users can apply quick fixes with a click.
- Responses are cached to minimize repeated API calls.
- Every 5 seconds, the extension checks for text changes and reprocesses modified sections.
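The comparison step above can be sketched as a word-level diff between the original paragraph and the LLM's corrected version, grouping consecutive differing words into a single suggested edit. This is an assumed illustration of the technique (a plain LCS diff); the names and the exact algorithm are not taken from the extension's source.

```typescript
// Hypothetical sketch of error detection: diff the original paragraph
// against the LLM's corrected version at word granularity.

interface Edit {
  original: string;  // text flagged in the source paragraph
  suggested: string; // the LLM's replacement (empty = deletion)
}

function diffWords(original: string, corrected: string): Edit[] {
  const a = original.split(/\s+/).filter(Boolean);
  const b = corrected.split(/\s+/).filter(Boolean);
  // lcs[i][j] = length of the longest common word subsequence of a[i:] and b[j:].
  const lcs = Array.from({ length: a.length + 1 }, () =>
    new Array<number>(b.length + 1).fill(0)
  );
  for (let i = a.length - 1; i >= 0; i--) {
    for (let j = b.length - 1; j >= 0; j--) {
      lcs[i][j] = a[i] === b[j]
        ? lcs[i + 1][j + 1] + 1
        : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }
  // Walk the table, grouping runs of differing words into one edit each.
  const edits: Edit[] = [];
  let i = 0, j = 0;
  while (i < a.length || j < b.length) {
    if (i < a.length && j < b.length && a[i] === b[j]) {
      i++; j++;
      continue;
    }
    const removed: string[] = [];
    const inserted: string[] = [];
    while (i < a.length || j < b.length) {
      if (i < a.length && j < b.length && a[i] === b[j]) break;
      if (j >= b.length || (i < a.length && lcs[i + 1][j] >= lcs[i][j + 1])) {
        removed.push(a[i++]);
      } else {
        inserted.push(b[j++]);
      }
    }
    edits.push({ original: removed.join(" "), suggested: inserted.join(" ") });
  }
  return edits;
}
```

Each resulting `Edit` maps naturally onto a VSCode diagnostic plus quick fix: the `original` span is highlighted, and the quick fix replaces it with `suggested`. Caching the corrected text per paragraph means unchanged paragraphs never trigger a second API call.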
- On-disk caching to improve startup times and reduce redundant API requests.
- Smarter text chunking to ensure uniform section sizes (e.g., ~2 full lines per section instead of splitting by line).
- Support for additional languages, starting with British English. Future versions may support any language available in the LLM.
- Evaluation of alternative models for improved results, with prompt adjustments as needed.
Contributions are welcome! Feel free to:
- Open an issue
- Submit a pull request
- Contact me directly: [email protected]