Cline docs #50

Merged · 4 commits · Jan 29, 2025
5 changes: 5 additions & 0 deletions docs/about/changelog.md
@@ -13,6 +13,11 @@ Major features and changes are noted here. To review all updates, see the

Related: [Upgrade CodeGate](../how-to/install.md#upgrade-codegate)

- **Cline support** - 28 Jan, 2025\
CodeGate version 0.1.14 adds support for
[Cline](https://github.com/cline/cline) with Anthropic, OpenAI, Ollama, and LM
Studio. See the [how-to guide](../how-to/use-with-cline.mdx) to learn more.

- **Workspaces** - 22 Jan, 2025\
Now available in CodeGate v0.1.12, workspaces help you organize and customize
your AI-assisted development. Learn more in
17 changes: 9 additions & 8 deletions docs/how-to/configure.md
@@ -20,14 +20,15 @@ docker run --name codegate -d -p 8989:8989 -p 9090:9090 \

CodeGate supports the following parameters:

| Parameter | Default value | Description |
| :----------------------- | :---------------------------------- | :------------------------------------------------------------------------------------------------------------ |
| `CODEGATE_OLLAMA_URL` | `http://host.docker.internal:11434` | Specifies the URL of an Ollama instance. Used when the provider in your plugin config is `ollama`. |
| `CODEGATE_VLLM_URL` | `http://localhost:8000` | Specifies the URL of a model hosted by a vLLM server. Used when the provider in your plugin config is `vllm`. |
| `CODEGATE_ANTHROPIC_URL` | `https://api.anthropic.com/v1` | Specifies the Anthropic engine API endpoint URL. |
| `CODEGATE_OPENAI_URL` | `https://api.openai.com/v1` | Specifies the OpenAI engine API endpoint URL. |
| `CODEGATE_APP_LOG_LEVEL` | `WARNING` | Sets the logging level. Valid values: `ERROR`, `WARNING`, `INFO`, `DEBUG` (case sensitive) |
| `CODEGATE_LOG_FORMAT` | `TEXT` | Type of log formatting. Valid values: `TEXT`, `JSON` (case sensitive) |
| Parameter | Default value | Description |
| :----------------------- | :---------------------------------- | :----------------------------------------------------------------------------------------- |
| `CODEGATE_APP_LOG_LEVEL` | `WARNING` | Sets the logging level. Valid values: `ERROR`, `WARNING`, `INFO`, `DEBUG` (case sensitive) |
| `CODEGATE_LOG_FORMAT` | `TEXT` | Type of log formatting. Valid values: `TEXT`, `JSON` (case sensitive) |
| `CODEGATE_ANTHROPIC_URL` | `https://api.anthropic.com/v1` | Specifies the Anthropic engine API endpoint URL. |
| `CODEGATE_LM_STUDIO_URL` | `http://host.docker.internal:1234` | Specifies the URL of your LM Studio server. |
| `CODEGATE_OLLAMA_URL` | `http://host.docker.internal:11434` | Specifies the URL of your Ollama instance. |
| `CODEGATE_OPENAI_URL` | `https://api.openai.com/v1` | Specifies the OpenAI engine API endpoint URL. |
| `CODEGATE_VLLM_URL` | `http://localhost:8000` | Specifies the URL of the vLLM server to use. |
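
For example, you can override any of these defaults by passing `-e` flags when
you launch the container. A minimal sketch (the log level and the remote Ollama
address shown here are illustrative, not recommendations):

```bash
# Launch CodeGate with a custom log level and a remote Ollama instance
docker run --name codegate -d -p 8989:8989 -p 9090:9090 \
  -e CODEGATE_APP_LOG_LEVEL=DEBUG \
  -e CODEGATE_OLLAMA_URL=http://192.168.1.100:11434 \
  --restart unless-stopped ghcr.io/stacklok/codegate:latest
```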

## Example: Use CodeGate with OpenRouter

9 changes: 6 additions & 3 deletions docs/how-to/install.md
@@ -42,7 +42,8 @@ application settings, see [Configure CodeGate](./configure.md)

### Alternative run commands {#examples}

Run with minimal functionality for use with **Continue** or **aider**:
Run with minimal functionality for use with **Continue**, **aider**, or
**Cline**:

```bash
docker run -d -p 8989:8989 -p 9090:9090 --restart unless-stopped ghcr.io/stacklok/codegate:latest
@@ -150,15 +151,17 @@ persistent volume.

Now that CodeGate is running, proceed to configure your IDE integration.

- [Use CodeGate with GitHub Copilot](./use-with-copilot.mdx)
- [Use CodeGate with aider](./use-with-aider.mdx)
- [Use CodeGate with Cline](./use-with-cline.mdx)
- [Use CodeGate with Continue](./use-with-continue.mdx)
- [Use CodeGate with GitHub Copilot](./use-with-copilot.mdx)

## Remove CodeGate

If you decide to stop using CodeGate, follow the removal steps for your IDE
integration:

- [Remove CodeGate - GitHub Copilot](./use-with-copilot.mdx#remove-codegate)
- [Remove CodeGate - aider](./use-with-aider.mdx#remove-codegate)
- [Remove CodeGate - Cline](./use-with-cline.mdx#remove-codegate)
- [Remove CodeGate - Continue](./use-with-continue.mdx#remove-codegate)
- [Remove CodeGate - GitHub Copilot](./use-with-copilot.mdx#remove-codegate)
2 changes: 1 addition & 1 deletion docs/how-to/use-with-aider.mdx
@@ -15,7 +15,7 @@ CodeGate works with the following AI model providers through aider:
- Local / self-managed:
- [Ollama](https://ollama.com/)
- Hosted:
- [OpenAI](https://openai.com/api/)
- [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs

:::note

128 changes: 128 additions & 0 deletions docs/how-to/use-with-cline.mdx
@@ -0,0 +1,128 @@
---
title: Use CodeGate with Cline
description: Configure the Cline IDE extension
sidebar_label: Use with Cline
sidebar_position: 90
---

import useBaseUrl from '@docusaurus/useBaseUrl';
import ThemedImage from '@theme/ThemedImage';

[Cline](https://github.com/cline/cline) is an autonomous coding agent for Visual
Studio Code that supports numerous API providers and models.

CodeGate works with the following AI model providers through Cline:

- Local / self-managed:
- [Ollama](https://ollama.com/)
- [LM Studio](https://lmstudio.ai/)
- Hosted:
- [Anthropic](https://www.anthropic.com/api)
- [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs

## Install the Cline extension

The Cline extension is available in the
[Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=saoudrizwan.claude-dev).

Install the extension using the **Install** link on the Marketplace page or
search for "Cline" in the Extensions panel within VS Code.

You can also install from the CLI:

```bash
code --install-extension saoudrizwan.claude-dev
```

If you need help, see
[Managing Extensions](https://code.visualstudio.com/docs/editor/extension-marketplace)
in the VS Code documentation.

## Configure Cline to use CodeGate

import ClineProviders from '../partials/_cline-providers.mdx';

To configure Cline to send requests through CodeGate:

1. Open the Cline extension sidebar from the VS Code Activity Bar, then select
   the gear icon to open its settings.

<ThemedImage
alt='Cline extension settings'
sources={{
light: useBaseUrl('/img/how-to/cline-settings-light.webp'),
dark: useBaseUrl('/img/how-to/cline-settings-dark.webp'),
}}
width={'540px'}
/>

1. Select your provider and configure it as detailed here:

<ClineProviders />

1. Click **Done** to save the settings.

## Verify configuration

To verify that you've successfully connected Cline to CodeGate, open the Cline
sidebar and type `codegate version`. You should receive a response like
"CodeGate version 0.1.14":

<ThemedImage
alt='Cline verification'
sources={{
light: useBaseUrl('/img/how-to/cline-codegate-version-light.webp'),
dark: useBaseUrl('/img/how-to/cline-codegate-version-dark.webp'),
}}
width={'490px'}
/>

Try asking CodeGate about a known malicious Python package:

```plain title="Cline chat"
Tell me how to use the invokehttp package from PyPI
```

CodeGate responds with a warning and a link to the Stacklok Insight report about
this package:

```plain title="Cline chat"
Warning: CodeGate detected one or more malicious, deprecated or archived packages.

• invokehttp: https://www.insight.stacklok.com/report/pypi/invokehttp

The `invokehttp` package from PyPI has been identified as malicious and should
not be used. Please avoid using this package and consider using a trusted
alternative such as `requests` for making HTTP requests in Python.

Here is an example of how to use the `requests` package:

...
```

## Next steps

Learn more about CodeGate's features and how to use them:

- [Access the dashboard](./dashboard.md)
- [CodeGate features](../features/index.mdx)

## Remove CodeGate

If you decide to stop using CodeGate, follow these steps to remove it and revert
your environment.

1. Remove the custom base URL from your Cline provider settings.

1. Stop and remove the CodeGate container:

```bash
docker stop codegate && docker rm codegate
```

1. If you launched CodeGate with a persistent volume, delete it to remove the
CodeGate database and other files:

```bash
docker volume rm codegate_volume
```
8 changes: 7 additions & 1 deletion docs/index.md
@@ -54,7 +54,13 @@ AI coding assistants / IDEs:
- Anthropic
- OpenAI

- **[Aider](./how-to/use-with-aider.mdx)** with Ollama and OpenAI
- **[Aider](./how-to/use-with-aider.mdx)** with Ollama and OpenAI-compatible
APIs

- **[Cline](./how-to/use-with-cline.mdx)** with Visual Studio Code

CodeGate supports Ollama, Anthropic, OpenAI-compatible APIs, and LM Studio
with Cline.

As the project evolves, we plan to add support for more IDE assistants and AI
model providers.
3 changes: 3 additions & 0 deletions docs/partials/.markdownlint.json
@@ -0,0 +1,3 @@
{
"first-line-h1": false
}
14 changes: 6 additions & 8 deletions docs/partials/_aider-providers.mdx
@@ -1,10 +1,14 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

import LocalModelRecommendation from './_local-model-recommendation.md';

<Tabs groupId="aider-provider">
<TabItem value="openai" label="OpenAI" default>

You need an [OpenAI API](https://openai.com/api/) account to use this provider.
To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
[configuration parameter](../how-to/configure.md#config-parameters).

Before you run aider, set environment variables for your API key and for the
API base URL, pointing it to CodeGate's API port. Alternately, use one of aider's other
@@ -58,7 +62,7 @@ You need Ollama installed on your local system with the server running
CodeGate connects to `http://host.docker.internal:11434` by default. If you
changed the default Ollama server port or want to connect to a remote Ollama
instance, launch CodeGate with the `CODEGATE_OLLAMA_URL` environment variable
set to the correct URL. See [Configure CodeGate](/how-to/configure.md).
set to the correct URL. See [Configure CodeGate](../how-to/configure.md).

Before you run aider, set the Ollama base URL to CodeGate's API port using an
environment variable. Alternately, use one of aider's other
@@ -105,13 +109,7 @@ aider --model ollama_chat/<MODEL_NAME>
Replace `<MODEL_NAME>` with the name of a coding model you have installed
locally using `ollama pull`.

We recommend the [Qwen2.5-Coder](https://ollama.com/library/qwen2.5-coder)
series of models. Our minimum recommendation for quality results is the 7
billion parameter (7B) version, `qwen2.5-coder:7b`.

This model balances performance and quality for typical systems with at least 4
CPU cores and 16GB of RAM. If you have more compute resources available, our
experimentation shows that larger models do yield better results.
<LocalModelRecommendation />

For more information, see the
[aider docs for connecting to Ollama](https://aider.chat/docs/llms/ollama.html).
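
Putting it together, a minimal sketch of a session setup, assuming you have
pulled `qwen2.5-coder:7b` locally and that aider reads the `OLLAMA_API_BASE`
environment variable (per the aider docs linked above):

```bash
# Point aider's Ollama client at CodeGate's API port instead of Ollama directly
export OLLAMA_API_BASE=http://localhost:8989/ollama

# Start aider with a locally installed coding model
aider --model ollama_chat/qwen2.5-coder:7b
```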
124 changes: 124 additions & 0 deletions docs/partials/_cline-providers.mdx
@@ -0,0 +1,124 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import useBaseUrl from '@docusaurus/useBaseUrl';
import ThemedImage from '@theme/ThemedImage';

import LocalModelRecommendation from './_local-model-recommendation.md';

<Tabs groupId="cline-provider">
<TabItem value="anthropic" label="Anthropic" default>

You need an [Anthropic API](https://www.anthropic.com/api) account to use this
provider.

In the Cline settings, choose **Anthropic** as your provider, enter your
Anthropic API key, and choose your preferred model (we recommend
`claude-3-5-sonnet-<latest>`).

To enable CodeGate, enable **Use custom base URL** and enter
`http://localhost:8989/anthropic`.

<ThemedImage
alt='Cline settings for Anthropic'
sources={{
light: useBaseUrl('/img/how-to/cline-provider-anthropic-light.webp'),
dark: useBaseUrl('/img/how-to/cline-provider-anthropic-dark.webp'),
}}
width={'540px'}
/>

</TabItem>
<TabItem value="openai" label="OpenAI">

You need an [OpenAI API](https://openai.com/api/) account to use this provider.
To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
[configuration parameter](../how-to/configure.md#config-parameters) when you
launch CodeGate.

In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).

To enable CodeGate, set the **Base URL** to `http://localhost:8989/openai`.

<ThemedImage
alt='Cline settings for OpenAI'
sources={{
light: useBaseUrl('/img/how-to/cline-provider-openai-light.webp'),
dark: useBaseUrl('/img/how-to/cline-provider-openai-dark.webp'),
}}
width={'540px'}
/>

</TabItem>
<TabItem value="ollama" label="Ollama">

You need Ollama installed on your local system with the server running
(`ollama serve`) to use this provider.

CodeGate connects to `http://host.docker.internal:11434` by default. If you
changed the default Ollama server port or want to connect to a remote Ollama
instance, launch CodeGate with the `CODEGATE_OLLAMA_URL` environment variable
set to the correct URL. See [Configure CodeGate](../how-to/configure.md).
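
For instance, to point CodeGate at a remote Ollama host (a sketch; the
hostname is illustrative):

```bash
# Launch CodeGate against a remote Ollama server
docker run --name codegate -d -p 8989:8989 -p 9090:9090 \
  -e CODEGATE_OLLAMA_URL=http://ollama.example.com:11434 \
  --restart unless-stopped ghcr.io/stacklok/codegate:latest
```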

In the Cline settings, choose **Ollama** as your provider and set the **Base
URL** to `http://localhost:8989/ollama`.

For the **Model ID**, provide the name of a coding model you have installed
locally using `ollama pull`.

<LocalModelRecommendation />

<ThemedImage
alt='Cline settings for Ollama'
sources={{
light: useBaseUrl('/img/how-to/cline-provider-ollama-light.webp'),
dark: useBaseUrl('/img/how-to/cline-provider-ollama-dark.webp'),
}}
width={'540px'}
/>

</TabItem>
<TabItem value="lmstudio" label="LM Studio">

You need LM Studio installed on your local system with a server running from LM
Studio's **Developer** tab to use this provider. See the
[LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.

Cline uses large prompts, so you will likely need to increase the context length
for the model you've loaded in LM Studio. In the Developer tab, select the model
you'll use with CodeGate, open the **Load** tab on the right and increase the
**Context Length** to _at least_ 18k (18,432) tokens, then reload the model.

<ThemedImage
alt='LM Studio dev server'
sources={{
light: useBaseUrl('/img/how-to/lmstudio-server-light.webp'),
dark: useBaseUrl('/img/how-to/lmstudio-server-dark.webp'),
}}
width={'800px'}
/>

CodeGate connects to `http://host.docker.internal:1234` by default. If you
changed the default LM Studio server port, launch CodeGate with the
`CODEGATE_LM_STUDIO_URL` environment variable set to the correct URL. See
[Configure CodeGate](../how-to/configure.md).
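
For example, if your LM Studio server listens on port 1235 instead (the port
here is illustrative):

```bash
# Launch CodeGate against an LM Studio server on a non-default port
docker run --name codegate -d -p 8989:8989 -p 9090:9090 \
  -e CODEGATE_LM_STUDIO_URL=http://host.docker.internal:1235 \
  --restart unless-stopped ghcr.io/stacklok/codegate:latest
```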

In the Cline settings, choose **LM Studio** as your provider and set the **Base
URL** to `http://localhost:8989/openai`.

Set the **Model ID** to `lm_studio/<MODEL_NAME>`, where `<MODEL_NAME>` is the
name of the model you're serving through LM Studio (shown in the Developer tab),
for example `lm_studio/qwen2.5-coder-7b-instruct`.

<LocalModelRecommendation />

<ThemedImage
alt='Cline settings for LM Studio'
sources={{
light: useBaseUrl('/img/how-to/cline-provider-lmstudio-light.webp'),
dark: useBaseUrl('/img/how-to/cline-provider-lmstudio-dark.webp'),
}}
width={'635px'}
/>

</TabItem>
</Tabs>
6 changes: 6 additions & 0 deletions docs/partials/_local-model-recommendation.md
@@ -0,0 +1,6 @@
We recommend the [Qwen2.5-Coder](https://ollama.com/library/qwen2.5-coder)
series of models. Our minimum recommendation for quality results is the 7
billion parameter (7B) version, `qwen2.5-coder:7b-instruct`. This model balances
performance and quality for systems with at least 4 CPU cores and 16GB of RAM.
If you have more compute resources available, our experimentation shows that
larger models do yield better results.
Binary file added static/img/how-to/cline-provider-ollama-dark.webp
Binary file added static/img/how-to/cline-provider-openai-dark.webp
Binary file added static/img/how-to/cline-settings-dark.webp
Binary file added static/img/how-to/cline-settings-light.webp
Binary file added static/img/how-to/lmstudio-server-dark.webp
Binary file added static/img/how-to/lmstudio-server-light.webp
(6 more binary image files added; filenames not shown in this view)