Merge pull request #241 from samzong/en/models
docs(models): add english version
samzong authored Feb 14, 2025
2 parents 5e22d18 + 8c070ae commit 3ce3531
Showing 8 changed files with 292 additions and 0 deletions.
91 changes: 91 additions & 0 deletions docs/zh/docs/en/models/api-call.md
@@ -0,0 +1,91 @@
---
status: new
translated: true
---

# Model Invocation

The `d.run` platform offers two deployment options for large language models, allowing you to choose based on your specific needs:

- **MaaS by Token**: Token-based billing on shared resources; invoke models without deploying a dedicated instance
- **Model Service**: Dedicated instances billed per instance, with no limit on the number of API calls

## Supported Models and Deployment Options

| Model Name | MaaS by Token | Model Service |
| ----------------------------- | ------------- | ------------- |
| 🔥 DeepSeek-R1 | ✅ | |
| 🔥 DeepSeek-V3 | ✅ | |
| Phi-4 | | ✅ |
| Phi-3.5-mini-instruct | | ✅ |
| Qwen2-0.5B-Instruct | | ✅ |
| Qwen2.5-7B-Instruct | ✅ | ✅ |
| Qwen2.5-14B-Instruct | | ✅ |
| Qwen2.5-Coder-32B-Instruct | | ✅ |
| Qwen2.5-72B-Instruct-AWQ | ✅ | ✅ |
| baichuan2-13b-Chat | | ✅ |
| Llama-3.2-11B-Vision-Instruct | ✅ | ✅ |
| glm-4-9b-chat | ✅ | ✅ |

## Model Endpoints

A model endpoint is a URL or API address that allows users to access and send requests for model inference.

| Invocation Method | Endpoint |
| ----------------- | ------------------- |
| MaaS by Token | `chat.d.run` |
| Model Service | `<region>-02.d.run` |
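
The mapping above can be captured in a small helper. This is a sketch only: `endpoint_for` is an illustrative name rather than a d.run SDK function, and the `https://` scheme and `-02` suffix follow the table above.

```python
def endpoint_for(method, region=None):
    """Return the base URL for an invocation method ("maas" or "service")."""
    if method == "maas":
        return "https://chat.d.run"
    if method == "service":
        if not region:
            raise ValueError("Model Service endpoints require a region")
        # Dedicated instances are addressed per region, e.g. <region>-02.d.run
        return f"https://{region}-02.d.run"
    raise ValueError(f"unknown invocation method: {method}")

print(endpoint_for("maas"))                       # https://chat.d.run
print(endpoint_for("service", "example-region"))  # https://example-region-02.d.run
```

Here `example-region` is a placeholder; substitute the region shown in your model instance details.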

## API Invocation Examples

### Invoking via MaaS by Token

To invoke models using the MaaS by Token method, follow these steps:

1. **Obtain API Key**: Log in to your user console and create a new API Key
2. **Set Endpoint**: Replace the SDK endpoint with `chat.d.run`
3. **Invoke Model**: Use the official model name along with the new API Key for invocation

**Example Code (Python)**:

```python
import openai

openai.api_key = "your-api-key"  # Replace with your API Key
openai.api_base = "https://chat.d.run/v1"

response = openai.ChatCompletion.create(
    model="public/deepseek-r1",
    messages=[{"role": "user", "content": "What is your name?"}],
)

print(response.choices[0].message.content)
```
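
Behind the SDK call is a plain HTTP POST with a JSON body. The following sketch shows how such a body is assembled, assuming an OpenAI-compatible chat completions route; `chat_payload` is an illustrative helper, not part of any SDK.

```python
import json

def chat_payload(model, prompt):
    # Assemble an OpenAI-style /v1/chat/completions request body.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

print(chat_payload("public/deepseek-r1", "What is your name?"))
```

The same body works for either deployment option; only the endpoint and model name change.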

### Invoking via Model Service

To invoke models using the Model Service method, follow these steps:

1. **Obtain API Key**: Log in to your user console and create a new API Key
2. **Set Endpoint**: Replace the SDK endpoint with `<region>-02.d.run`
3. **Invoke Model**: Use the official model name along with the new API Key for invocation

**Example Code (Python)**:

```python
import openai

openai.api_key = "your-api-key"  # Replace with your API Key
openai.api_base = "https://<region>-02.d.run/v1"

response = openai.ChatCompletion.create(
    model="u-1100a15812cc/qwen2",
    messages=[{"role": "user", "content": "What is your name?"}],
)

print(response.choices[0].message.content)
```

## Support and Feedback

For any questions or feedback, please contact our [Technical Support Team](../contact/index.md).
35 changes: 35 additions & 0 deletions docs/zh/docs/en/models/user-guides/bob-translate.md
@@ -0,0 +1,35 @@
---
translated: true
---

# Using d.run in Bob Translate

This guide explains how to call model services from d.run within Bob Translate.

[Bob](https://bobtranslate.com/) is a translation and OCR app for macOS that lets you translate and perform OCR directly in any application. It's fast, efficient, and easy to use!

![Bob Translate](../images/bobtranslate.png)

## Installing Bob Translate

You can download and install Bob Translate from the [Mac App Store](https://apps.apple.com/cn/app/bob-%E7%BF%BB%E8%AF%91%E5%92%8C-ocr-%E5%B7%A5%E5%85%B7/id1630034110).

## Configuring Bob Translate

Open the settings page in Bob Translate, add a translation service, and select the service type as `OpenAI`.

![Bob Translate](../images/bobtranslate-2.png)

Add your API Key and API Host obtained from d.run:

- **API Key**: Enter your API Key
- **API Host**:
- For MaaS: Use `https://chat.d.run`
- For independently deployed model services, refer to the model instance details, typically `https://<region>.d.run`
- **Custom Model**: Specify as `public/deepseek-r1`

![Bob Translate](../images/bobtranslate-3.png)

## Demo of Bob Translate Usage

![Bob Translate](../images/bobtranslate-4.png)
38 changes: 38 additions & 0 deletions docs/zh/docs/en/models/user-guides/cherry-studio.md
@@ -0,0 +1,38 @@
---
translated: true
---

# Using d.run in Cherry Studio

[🍒 Cherry Studio](https://cherry-ai.com/) is a desktop client for LLMs that supports multiple providers, including OpenAI-compatible services such as d.run.

![Cherry Studio](../images/cherry-studio.jpg)

## Installing Cherry Studio

You can download the installation package from the [Cherry Studio official website](https://cherry-ai.com/).

Versions are available for macOS, Windows, and Linux.

## Configuring Cherry Studio

Open the Cherry Studio configuration page and add a model provider, such as naming it `d.run` with the provider type set to `OpenAI`.

![Cherry Studio](../images/cherry-studio-2.png)

Enter your API Key and API Host obtained from d.run:

- API Key: Enter your API Key
- API Host:
- For MaaS, use `https://chat.d.run`
- For independently deployed models, refer to the model instance details, typically `https://<region>.d.run`

### Managing Available Models

Cherry Studio automatically detects available models. You can enable the models you need from the model list.

![Cherry Studio](../images/cherry-studio-4.png)

## Cherry Studio Usage Demo

![Cherry Studio](../images/cherry-studio-5.png)
41 changes: 41 additions & 0 deletions docs/zh/docs/en/models/user-guides/cline-in-vscode.md
@@ -0,0 +1,41 @@
---
translated: true
---

# Using d.run in VSCode and Cline

[Cline](https://cline.bot/) is a VSCode plugin that enables you to use d.run model services within VSCode.

## Installing Cline

Search for and install the Cline plugin in VSCode.

![Cline](../images/cline-in-vscode.png)

You can also download and use RooCode, a fork of Cline.

> Note: Cline was originally known as Claude Dev. RooCode (RooCline) is a fork of that project.

If you cannot download the plugin directly due to network restrictions, download the `.vsix` file from the VSCode Extension Marketplace and install it via `Install from VSIX`:

- [Cline](https://marketplace.visualstudio.com/items?itemName=saoudrizwan.claude-dev)
- [RooCode](https://marketplace.visualstudio.com/items?itemName=RooVeterinaryInc.roo-cline): a fork of Cline

## Configuring Cline

Open the Cline configuration page:

![Cline](../images/cline-in-vscode-2.png)

- **API Provider**: Select "OpenAI Compatible"
- **Base URL**: Enter `https://chat.d.run`
- **API Key**: Input your API key
- **Model ID**: Enter your model ID
    - Available from d.run's Model Square; MaaS model names use the `public/` prefix, e.g. `public/deepseek-r1`
    - For independently deployed models, retrieve the ID from the model service list
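
Model IDs can also be discovered programmatically: OpenAI-compatible gateways typically expose a `/v1/models` route. Below is a sketch of parsing such a response; the sample payload is illustrative, not actual d.run output.

```python
import json

# Illustrative /v1/models response in the OpenAI list format.
sample = json.loads('{"object": "list", "data": ['
                    '{"id": "public/deepseek-r1", "object": "model"},'
                    '{"id": "public/deepseek-v3", "object": "model"}]}')

def model_ids(models_response):
    # Extract model IDs from an OpenAI-style /v1/models response.
    return [m["id"] for m in models_response.get("data", [])]

print(model_ids(sample))  # ['public/deepseek-r1', 'public/deepseek-v3']
```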

![Cline](../images/cline-in-vscode-3.png)

## Cline Usage Demo

![Cline](../images/cline-in-vscode-4.png)
27 changes: 27 additions & 0 deletions docs/zh/docs/en/models/user-guides/index.md
@@ -0,0 +1,27 @@
---
translated: true
---

# Example Scenarios for Usage

You can refer to the following example scenarios to configure and use model services provided by d.run in your development work.

## Model Invocation Examples

- Refer to [Model Invocation](../api-call.md) to choose how to invoke a model.
- Refer to [Get API Key](../apikey.md) to obtain a key.

## Scenario List

The following scenarios are described:

| Application Scenario | Instructions |
| --- | ---- |
| [Cline in VSCode](https://github.com/cline/cline) | [Using d.run's model services in VSCode via Cline/RooCode](./cline-in-vscode.md) |
| [Cherry Studio](https://cherry-ai.com) | [Using d.run's model services in Cherry Studio](./cherry-studio.md) |
| [Bob Translate](https://bobtranslate.com) | [Using d.run's model services in Bob Translate](./bob-translate.md) |
| [Lobe Chat](https://github.com/lobehub/lobe-chat) | [Using d.run's model services in Lobe Chat](./lobe-chat.md) |

## Contribution Notes

If you have more usage scenarios, we welcome you to share them with us via a [GitHub PR](https://github.com/d-run/drun-docs).
49 changes: 49 additions & 0 deletions docs/zh/docs/en/models/user-guides/lobe-chat.md
@@ -0,0 +1,49 @@
---
translated: true
---

# Using d.run in Lobe Chat

[Lobe Chat](https://lobehub.com/en) is an open-source modern AI chat framework.
It supports multiple AI providers (OpenAI/Claude 3/Gemini/Ollama/Qwen/DeepSeek), knowledge bases (file uploads/knowledge management/RAG), and multi-modality (visual/TTS/plugins/art). Deploy your private ChatGPT/Claude application for free with one click.

![Lobe Chat](../images/lobe-chat.png)

## Install Lobe Chat

For detailed installation instructions, please refer to the
[official documentation of Lobe Chat](https://lobehub.com/en/docs/self-hosting/start).
Lobe Chat offers various deployment and installation methods.

This guide uses Docker as an example and focuses on how to use d.run's model service.

```bash
# LobeChat supports configuring the API Key and API Host at deployment time:
#   OPENAI_API_KEY:   your d.run API Key
#   OPENAI_PROXY_URL: your API Host
docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://chat.d.run/v1 \
  -e ENABLED_OLLAMA=0 \
  -e ACCESS_CODE=drun \
  --name lobe-chat \
  lobehub/lobe-chat:latest
```

## Configure Lobe Chat

Lobe Chat also allows users to add model service provider configurations after deployment.

![Lobe Chat](../images/lobe-chat-2.png)

Enter the API Key and API Host obtained from d.run.

- API Key: Enter your API Key
- API Host:
- For MaaS, use `https://chat.d.run`
- For independently deployed models, check the model instance details, typically `https://<region>.d.run`
- Configure custom models: e.g., `public/deepseek-r1`

## Lobe Chat Usage Demo

![Lobe Chat](../images/lobe-chat-3.png)
4 changes: 4 additions & 0 deletions docs/zh/docs/models/user-guides/index.md
@@ -1,3 +1,7 @@
---
status: new
---

# d.run 使用示例场景

您可以参考以下示例场景,在开发工作中配置并使用 d.run 提供的模型服务。
7 changes: 7 additions & 0 deletions docs/zh/navigation.yml
@@ -202,3 +202,10 @@ plugins:
字节跳动: ByteDance
联系我们: Contact Us
2025 年人工智能趋势展望: AI Trend in 2025
模型调用: Model Invocation
在 Bob 翻译中使用: Using in Bob Translate
在 LobeChat 中使用: Using in LobeChat
在 Cherry Studio 中使用: Using in Cherry Studio
在 VSCode 和 Cline 中使用: Using in VSCode
使用示例场景: Use Cases
示例场景介绍: Example Scenarios
