37 changes: 32 additions & 5 deletions README.en.md
@@ -51,12 +51,14 @@ Once installed, Claw can invoke InkOS atomic commands and control-surface operat
```bash
inkos config set-global \
--lang en \
--provider <openai|anthropic|custom> \
--provider <openai|anthropic|custom|google> \
--base-url <API endpoint> \
--api-key <your API key> \
--model <model name>

# provider: openai / anthropic / custom (use custom for OpenAI-compatible proxies)
# provider: openai / anthropic / custom / google
# - custom: use for OpenAI-compatible proxies
# - google: Gemini native API (recommended default model: gemini-2.5-flash)
# base-url: your API provider URL
# api-key: your API key
# model: your model name
@@ -73,10 +75,10 @@ inkos init my-novel # Initialize project

```bash
# Required
INKOS_LLM_PROVIDER= # openai / anthropic / custom (use custom for any OpenAI-compatible API)
INKOS_LLM_BASE_URL= # API endpoint
INKOS_LLM_PROVIDER= # openai / anthropic / custom / google
INKOS_LLM_BASE_URL= # API endpoint (for google use https://generativelanguage.googleapis.com/v1beta)
INKOS_LLM_API_KEY= # API Key
INKOS_LLM_MODEL= # Model name
INKOS_LLM_MODEL= # Model name (recommended for Google native: gemini-2.5-flash)

# Language (defaults to global setting or genre default)
# INKOS_DEFAULT_LANGUAGE=en # en or zh
@@ -102,6 +104,21 @@ inkos config show-models # View current routing

Agents without explicit overrides fall back to the global model.
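The fallback behavior can be sketched as a small resolver. This is an illustrative sketch only; the type and function names here are assumptions, not InkOS's actual internals:

```typescript
// Illustrative per-agent routing: an explicit override wins, everything
// else falls back to the global model. Names are assumptions.
interface ModelConfig {
  provider: string;
  model: string;
}

function resolveAgentModel(
  agent: string,
  overrides: Record<string, ModelConfig>,
  globalConfig: ModelConfig,
): ModelConfig {
  // Agents without an explicit override use the global config.
  return overrides[agent] ?? globalConfig;
}

const globalCfg: ModelConfig = { provider: "google", model: "gemini-2.5-flash" };
const overrides: Record<string, ModelConfig> = {
  writer: { provider: "anthropic", model: "claude-sonnet-4-20250514" },
};

console.log(resolveAgentModel("writer", overrides, globalCfg).model);  // claude-sonnet-4-20250514
console.log(resolveAgentModel("auditor", overrides, globalCfg).model); // gemini-2.5-flash
```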

**Google Native Provider (Gemini)**

You can also use Gemini through the native Google provider instead of an OpenAI-compatible shim:

```bash
inkos config set-global \
--lang en \
--provider google \
--base-url https://generativelanguage.googleapis.com/v1beta \
--api-key <your-google-api-key> \
--model gemini-2.5-flash
```

The recommended default model is `gemini-2.5-flash`. Gemini `3.x` preview models were validated online as well, but they are still treated as preview options rather than the default stable recommendation.

### v1 Update

**InkOS Studio + Writing Pipeline Overhaul**
@@ -221,6 +238,16 @@ Different agents can use different models and providers. Writer on Claude (stron

Supports any OpenAI-compatible endpoint (`--provider custom`). Stream auto-fallback — when SSE isn't supported, InkOS retries with sync mode automatically. Fallback parser handles non-standard output from smaller models, and partial content recovery kicks in on stream interruption.
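The stream auto-fallback can be sketched as follows. This is a hedged sketch of the behavior described above, not the actual InkOS code; the function names and error handling are assumptions:

```typescript
// Sketch: try the SSE path first, retry once in sync mode on any
// stream failure (e.g. a proxy that doesn't support SSE).
type ChatFn = (prompt: string) => Promise<string>;

async function completeWithFallback(
  prompt: string,
  streamChat: ChatFn, // SSE-based call; may reject when SSE is unavailable
  syncChat: ChatFn,   // plain request/response call
): Promise<string> {
  try {
    return await streamChat(prompt);
  } catch {
    // Any stream failure triggers a single sync retry.
    return syncChat(prompt);
  }
}

// Stubbed transports simulating a proxy without SSE support:
const streaming: ChatFn = async () => {
  throw new Error("SSE not supported");
};
const sync: ChatFn = async (p) => `sync: ${p}`;

completeWithFallback("hello", streaming, sync).then((out) => {
  console.log(out); // sync: hello
});
```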

### Google Native Provider

InkOS also supports `--provider google` for Gemini's native API. The following were validated online:

- non-stream text generation via `generateContent`
- Gemini-native function/tool calling turns
- replaying tool results back to Gemini, including preserving and forwarding `thoughtSignature`

Current scope: the capabilities above are what has been validated so far, not a claim of full provider parity. The recommended default remains `gemini-2.5-flash`; Gemini `3.x` preview models were validated, but are not yet the stable default recommendation.
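The tool-result replay with `thoughtSignature` preservation can be sketched as a plain request-building step. The part and field names follow the public `generateContent` schema; the helper itself is illustrative, not InkOS's implementation, and the role used for the function response is an assumption:

```typescript
// Sketch: replay a tool result to Gemini while echoing the model's
// functionCall part back unchanged, including its thoughtSignature,
// so Gemini can resume the same reasoning chain.
interface FunctionCallPart {
  functionCall: { name: string; args: Record<string, unknown> };
  thoughtSignature?: string; // opaque token Gemini expects back verbatim
}

interface Content {
  role: string;
  parts: unknown[];
}

function buildToolReplayContents(
  userText: string,
  modelCall: FunctionCallPart,
  toolResult: Record<string, unknown>,
): Content[] {
  return [
    { role: "user", parts: [{ text: userText }] },
    // The model turn is echoed verbatim, thoughtSignature included.
    { role: "model", parts: [modelCall] },
    {
      role: "user",
      parts: [
        {
          functionResponse: {
            name: modelCall.functionCall.name,
            response: toolResult,
          },
        },
      ],
    },
  ];
}

const replay = buildToolReplayContents(
  "What's the weather in Paris?",
  {
    functionCall: { name: "get_weather", args: { city: "Paris" } },
    thoughtSignature: "abc123",
  },
  { tempC: 21 },
);
console.log((replay[1].parts[0] as FunctionCallPart).thoughtSignature); // abc123
```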

### Reliability

Every chapter creates an automatic state snapshot — `inkos write rewrite` rolls back any chapter to its pre-write state. The Writer outputs a pre-write checklist (context scope, resources, pending hooks, risks) and a post-write settlement table; the Auditor cross-validates both. File locking prevents concurrent writes. Post-write validator includes cross-chapter repetition detection and 11 hard rules with auto spot-fix.
37 changes: 32 additions & 5 deletions README.ja.md
@@ -51,12 +51,14 @@ npm でインストール済み、またはリポジトリをクローン済み
```bash
inkos config set-global \
--lang en \
--provider <openai|anthropic|custom> \
--provider <openai|anthropic|custom|google> \
--base-url <APIエンドポイント> \
--api-key <APIキー> \
--model <モデル名>

# provider: openai / anthropic / custom(OpenAI互換プロキシにはcustomを使用)
# provider: openai / anthropic / custom / google
# - custom:OpenAI互換プロキシ向け
# - google:Gemini ネイティブ API(推奨デフォルトモデル: gemini-2.5-flash)
# base-url: APIプロバイダーURL
# api-key: APIキー
# model: モデル名
@@ -73,10 +75,10 @@ inkos init my-novel # プロジェクトを初期化

```bash
# 必須
INKOS_LLM_PROVIDER= # openai / anthropic / custom(OpenAI互換APIにはcustomを使用)
INKOS_LLM_BASE_URL= # APIエンドポイント
INKOS_LLM_PROVIDER= # openai / anthropic / custom / google
INKOS_LLM_BASE_URL= # APIエンドポイント(google は https://generativelanguage.googleapis.com/v1beta)
INKOS_LLM_API_KEY= # APIキー
INKOS_LLM_MODEL= # モデル名
INKOS_LLM_MODEL= # モデル名(Google ネイティブ推奨: gemini-2.5-flash)

# 言語(グローバル設定またはジャンルのデフォルトに準拠)
# INKOS_DEFAULT_LANGUAGE=en # en または zh
@@ -102,6 +104,21 @@ inkos config show-models # 現在のルーティングを表示

明示的なオーバーライドがないエージェントはグローバルモデルにフォールバックします。

**Google ネイティブ Provider(Gemini)**

OpenAI互換のシムを使わず、`--provider google` で Gemini ネイティブ API を直接利用できます。

```bash
inkos config set-global \
--lang en \
--provider google \
--base-url https://generativelanguage.googleapis.com/v1beta \
--api-key <your-google-api-key> \
--model gemini-2.5-flash
```

推奨デフォルトモデルは `gemini-2.5-flash` です。Gemini `3.x` プレビューモデルもオンラインで検証済みですが、現時点ではプレビュー扱いであり、デフォルトの安定推奨にはしていません。

### v1 アップデート

**InkOS Studio + 執筆パイプライン全面アップグレード**
@@ -221,6 +238,16 @@ inkos compose chapter my-book

任意のOpenAI互換エンドポイント(`--provider custom`)に対応。ストリーム自動フォールバック — SSEがサポートされていない場合、InkOS は自動的に同期モードでリトライ。フォールバックパーサーが小型モデルの非標準出力を処理し、ストリーム中断時には部分コンテンツリカバリが作動。

### Google ネイティブ Provider

InkOS は `--provider google` による Gemini ネイティブ API にも対応しています。オンラインで検証済みなのは次の範囲です。

- `generateContent` による非ストリームのテキスト生成
- Gemini ネイティブの関数呼び出し / ツール呼び出しターン
- ツール結果の Gemini への再投入、および `thoughtSignature` の保持と引き継ぎ

現時点の境界: これは検証済みスコープの説明であり、他 Provider との完全な機能同等性を主張するものではありません。推奨デフォルトは引き続き `gemini-2.5-flash` で、Gemini `3.x` プレビューモデルは検証済みですがデフォルトの安定推奨ではありません。

### 信頼性

章ごとに自動ステートスナップショットを作成 — `inkos write rewrite` で任意の章を執筆前の状態にロールバック可能。Writerは執筆前チェックリスト(コンテキストスコープ、リソース、保留中のフック、リスク)と執筆後決済テーブルを出力し、Auditorが両方をクロスバリデーション。ファイルロックにより同時書き込みを防止。執筆後バリデーターにはクロスチャプター反復検出と11のハードルールによる自動スポット修正を搭載。
42 changes: 34 additions & 8 deletions README.md
@@ -59,15 +59,17 @@ clawhub install inkos # 从 ClawHub 安装 InkOS Skill

```bash
inkos config set-global \
--provider <openai|anthropic|custom> \
--provider <openai|anthropic|custom|google> \
--base-url <API 地址> \
--api-key <你的 API Key> \
--model <模型名>

# provider: openai / anthropic / custom(兼容 OpenAI 格式的中转站选 custom)
# base-url: 你的 API 提供商地址
# api-key: 你的 API Key
# model: 你的模型名称
# provider: openai / anthropic / custom / google
# - custom:兼容 OpenAI 格式的中转站
# - google:Gemini 原生接口(推荐默认模型:gemini-2.5-flash)
# base-url: API 提供商地址
# api-key: API Key
# model: 模型名称
```

配置保存在 `~/.inkos/.env`,所有项目共享。之后新建项目不用再配。
@@ -81,10 +83,10 @@ inkos init my-novel # 初始化项目

```bash
# 必填
INKOS_LLM_PROVIDER= # openai / anthropic / custom(兼容 OpenAI 接口的都选 custom)
INKOS_LLM_BASE_URL= # API 地址(支持中转站、智谱、Gemini 等
INKOS_LLM_PROVIDER= # openai / anthropic / custom / google
INKOS_LLM_BASE_URL= # API 地址(custom 用兼容 OpenAI 的地址;google 用 https://generativelanguage.googleapis.com/v1beta
INKOS_LLM_API_KEY= # API Key
INKOS_LLM_MODEL= # 模型名
INKOS_LLM_MODEL= # 模型名(Google 原生推荐 gemini-2.5-flash)

# 可选
# INKOS_LLM_TEMPERATURE=0.7 # 温度
@@ -107,6 +109,20 @@ inkos config show-models # 查看当前路由

未单独配置的 Agent 自动使用全局模型。

**Google 原生 Provider(Gemini)**

可直接使用 `--provider google` 走 Gemini 原生接口,无需再伪装成 OpenAI 兼容端点:

```bash
inkos config set-global \
--provider google \
--base-url https://generativelanguage.googleapis.com/v1beta \
--api-key <your-google-api-key> \
--model gemini-2.5-flash
```

当前推荐默认模型是 `gemini-2.5-flash`。`gemini-3.x` 预览模型已经做过联机验证,但目前更适合当作预览选项,不作为默认稳定推荐。

### v1 更新

**InkOS Studio + 写作管线全面升级**
@@ -200,6 +216,16 @@ inkos compose chapter 吞天魔帝

支持任何 OpenAI 兼容接口(`--provider custom`)。Stream 自动降级——中转站不支持 SSE 时自动回退 sync。Fallback 解析器处理小模型不规范输出,流中断时自动恢复部分内容。

### Google 原生 Provider

InkOS 也支持 `--provider google` 直连 Gemini 原生接口。当前已经联机验证:

- 非流式文本生成(`generateContent`)
- Gemini 原生函数调用 / 工具调用回合
- 工具调用后的结果回放,以及 `thoughtSignature` 的保留与续传

当前边界:这是已验证能力范围,不宣称与其他 Provider 完全等价;推荐默认模型仍是 `gemini-2.5-flash`,`gemini-3.x` 预览模型已验证可用,但暂不作为默认稳定推荐。

### 可靠性保障

每章自动创建状态快照,`inkos write rewrite` 可回滚任意章节。写手动笔前输出自检表(上下文、资源、伏笔、风险),写完输出结算表,审计员交叉验证。文件锁防止并发写入。写后验证器含跨章重复检测和 11 条硬规则自动 spot-fix。
4 changes: 2 additions & 2 deletions packages/cli/package.json
@@ -43,8 +43,8 @@
"typecheck": "tsc --noEmit"
},
"dependencies": {
"@actalk/inkos-core": "1.1.1",
"@actalk/inkos-studio": "1.1.1",
"@actalk/inkos-core": "workspace:*",
"@actalk/inkos-studio": "workspace:*",
"commander": "^13.0.0",
"dotenv": "^16.4.0",
"epub-gen-memory": "^1.0.10",
8 changes: 4 additions & 4 deletions packages/cli/src/commands/config.ts
@@ -87,10 +87,10 @@ configCommand
configCommand
.command("set-global")
.description("Set global LLM config (~/.inkos/.env), shared by all projects")
.requiredOption("--provider <provider>", "LLM provider (openai / anthropic)")
.requiredOption("--base-url <url>", "API base URL")
.requiredOption("--provider <provider>", "LLM provider (openai / anthropic / custom / google; use google for Gemini native)")
.requiredOption("--base-url <url>", "API base URL (for Google native: https://generativelanguage.googleapis.com/v1beta)")
.requiredOption("--api-key <key>", "API key")
.requiredOption("--model <model>", "Model name")
.requiredOption("--model <model>", "Model name (recommended Google native default: gemini-2.5-flash)")
.option("--temperature <n>", "Temperature")
.option("--max-tokens <n>", "Max output tokens")
.option("--thinking-budget <n>", "Anthropic thinking budget")
@@ -177,7 +177,7 @@ configCommand
.argument("<agent>", `Agent name (${KNOWN_AGENTS.join(", ")})`)
.argument("<model>", "Model name")
.option("--base-url <url>", "API base URL (for different provider)")
.option("--provider <provider>", "Provider type (openai / anthropic / custom)")
.option("--provider <provider>", "Provider type (openai / anthropic / custom / google; use google for Gemini native)")
.option("--api-key-env <envVar>", "Env variable name for API key (e.g., PACKYAPI_KEY)")
.option("--stream", "Enable streaming (default)")
.option("--no-stream", "Disable streaming")
24 changes: 13 additions & 11 deletions packages/cli/src/commands/init.ts
@@ -74,10 +74,10 @@ export const initCommand = new Command("init")
"# Project-level LLM overrides (optional)",
"# Global config at ~/.inkos/.env will be used by default.",
"# Uncomment below to override for this project only:",
"# INKOS_LLM_PROVIDER=openai",
"# INKOS_LLM_BASE_URL=",
"# INKOS_LLM_PROVIDER=google",
"# INKOS_LLM_BASE_URL=https://generativelanguage.googleapis.com/v1beta",
"# INKOS_LLM_API_KEY=",
"# INKOS_LLM_MODEL=",
"# INKOS_LLM_MODEL=gemini-2.5-flash",
"",
"# Web search (optional):",
"# TAVILY_API_KEY=tvly-xxxxx",
@@ -90,11 +90,12 @@
[
"# LLM Configuration",
"# Tip: Run 'inkos config set-global' to set once for all projects.",
"# Provider: openai (OpenAI / compatible proxy), anthropic (Anthropic native)",
"INKOS_LLM_PROVIDER=openai",
"INKOS_LLM_BASE_URL=",
"# Recommended Google native default: gemini-2.5-flash (Gemini 3.x preview models were validated, but are not the default stable recommendation).",
"# Provider: openai (OpenAI native), custom (OpenAI-compatible proxy), anthropic (Anthropic native), google (Gemini native)",
"INKOS_LLM_PROVIDER=google",
"INKOS_LLM_BASE_URL=https://generativelanguage.googleapis.com/v1beta",
"INKOS_LLM_API_KEY=",
"INKOS_LLM_MODEL=",
"INKOS_LLM_MODEL=gemini-2.5-flash",
"",
"# Optional parameters (defaults shown):",
"# INKOS_LLM_TEMPERATURE=0.7",
@@ -107,9 +108,9 @@
"",
"# Anthropic example:",
"# INKOS_LLM_PROVIDER=anthropic",
"# INKOS_LLM_PROVIDER=anthropic",
"# INKOS_LLM_BASE_URL=",
"# INKOS_LLM_MODEL=",
"# INKOS_LLM_BASE_URL=https://api.anthropic.com",
"# INKOS_LLM_API_KEY=",
"# INKOS_LLM_MODEL=claude-sonnet-4-20250514",
].join("\n"),
"utf-8",
);
@@ -137,8 +138,9 @@
log("Next steps:");
if (name) log(` cd ${name}`);
log(" # Option 1: Set global config (recommended, one-time):");
log(" inkos config set-global --provider openai --base-url <your-api-url> --api-key <your-key> --model <your-model>");
log(" inkos config set-global --provider google --base-url https://generativelanguage.googleapis.com/v1beta --api-key <your-key> --model gemini-2.5-flash");
log(" # Option 2: Edit .env for this project only");
log(" # Note: gemini-2.5-flash is the recommended stable default; Gemini 3.x preview models were validated but remain preview choices.");
log("");
log(exampleCreate);
}
34 changes: 34 additions & 0 deletions packages/core/src/__tests__/config-loader.test.ts
@@ -13,6 +13,7 @@ const ENV_KEYS = [
"INKOS_LLM_MAX_TOKENS",
"INKOS_LLM_THINKING_BUDGET",
"INKOS_LLM_API_FORMAT",
"INKOS_LLM_STREAM",
] as const;

describe("loadProjectConfig local provider auth", () => {
Expand Down Expand Up @@ -77,4 +78,37 @@ describe("loadProjectConfig local provider auth", () => {
await writeFile(join(root, ".env"), "", "utf-8");
await expect(loadProjectConfig(root)).rejects.toThrow(/INKOS_LLM_API_KEY not set/i);
});

it("loads google native config from project env overrides", async () => {
root = await mkdtemp(join(tmpdir(), "inkos-config-loader-google-"));
for (const key of ENV_KEYS) {
previousEnv.set(key, process.env[key]);
process.env[key] = "";
}

await writeFile(join(root, "inkos.json"), JSON.stringify({
name: "google-project",
version: "0.1.0",
llm: {
provider: "openai",
baseUrl: "https://api.openai.com/v1",
model: "gpt-5.4",
},
}, null, 2), "utf-8");
await writeFile(join(root, ".env"), [
"INKOS_LLM_PROVIDER=google",
"INKOS_LLM_BASE_URL=https://generativelanguage.googleapis.com/v1beta",
"INKOS_LLM_API_KEY=test-google-key",
"INKOS_LLM_MODEL=gemini-2.5-flash",
"INKOS_LLM_STREAM=false",
"",
].join("\n"), "utf-8");

const config = await loadProjectConfig(root);

expect(config.llm.provider).toBe("google");
expect(config.llm.baseUrl).toBe("https://generativelanguage.googleapis.com/v1beta");
expect(config.llm.apiKey).toBe("test-google-key");
expect(config.llm.model).toBe("gemini-2.5-flash");
});
});
10 changes: 10 additions & 0 deletions packages/core/src/__tests__/models.test.ts
@@ -390,6 +390,16 @@ describe("LLMConfigSchema", () => {
expect(result.provider).toBe("openai");
});

it("accepts google as a native provider", () => {
const result = LLMConfigSchema.parse({
provider: "google",
baseUrl: "https://generativelanguage.googleapis.com/v1beta",
apiKey: "AIza...",
model: "gemini-2.5-pro",
});
expect(result.provider).toBe("google");
});

it("rejects invalid provider", () => {
expect(() =>
LLMConfigSchema.parse({