**File:** `CONTEXT/PLAN-4709-restore-live-token-usage-2025-12-09.md` (209 additions)
# Plan: Restore Live Token Usage Feature (PR #4709)

**Date:** 2025-12-09
**Related PR:** https://github.com/sst/opencode/pull/4709
**Status:** IMPLEMENTED - Feature restored 2025-12-10

## Overview

This plan documents the restoration of the "Live Token Usage During Streaming" feature that was originally added in PR #4709. The feature provides:

- Real-time token tracking while streaming responses
- `IN/OUT` format display for input/output tokens
- Reasoning token display for "thinking" models
- Toggle tokens command in TUI

## Current State Analysis

### What's Working

| Component | File | Status |
| ------------------------------ | --------------------------------------------- | -------------------------- |
| Token utility functions | `packages/opencode/src/util/token.ts` | **EXISTS** |
| Message schema fields | `packages/opencode/src/session/message-v2.ts` | **EXISTS** |
| Token calculation in prompt.ts | `packages/opencode/src/session/prompt.ts` | **EXISTS** |
| Subtask completion fix | `packages/opencode/src/session/prompt.ts` | **EXISTS** (bug was fixed) |

### What's Missing

| Component | File | Status |
| ------------------------ | ------------------------------------------------------------ | ----------- |
| Streaming token updates | `packages/opencode/src/session/processor.ts` | **MISSING** |
| `showTokens` state | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| `contextLimit` memo | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| IN/OUT token display | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| Reasoning token display | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| "Toggle tokens" command | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| User message token count | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |

### Backend Token Utility (Exists)

```typescript
// packages/opencode/src/util/token.ts
Token.estimate(input: string) // Character-based estimation
Token.toCharCount(tokenEstimate: number) // Convert tokens to chars
Token.toTokenEstimate(charCount: number) // Convert chars to tokens
Token.calculateToolResultTokens(parts) // Estimate tool result size
```

### Message Schema Fields (Exist)

```typescript
// UserMessage
sentEstimate: z.number().optional()
contextEstimate: z.number().optional()

// AssistantMessage
outputEstimate: z.number().optional()
reasoningEstimate: z.number().optional()
contextEstimate: z.number().optional()
sentEstimate: z.number().optional()
```

## Technical Approach

### Token Estimation Logic

- Simple estimation based on character count using `CHARS_PER_TOKEN` constant
- `calculateToolResultTokens` estimates size of tool inputs, outputs, and errors
- Estimates prefixed with `~` to indicate they are approximate
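The estimation described above can be sketched as pure arithmetic; the value of `CHARS_PER_TOKEN` below is an assumption (roughly four characters per token for English text), so check the constant in `packages/opencode/src/util/token.ts` before relying on it:

```typescript
// Hypothetical sketch of the character-based estimators in Token;
// CHARS_PER_TOKEN = 4 is an assumed value, not the codebase constant.
const CHARS_PER_TOKEN = 4

// Convert an accumulated character count to an approximate token count.
function toTokenEstimate(charCount: number): number {
  return Math.ceil(charCount / CHARS_PER_TOKEN)
}

// Convert a token budget back to an approximate character budget.
function toCharCount(tokenEstimate: number): number {
  return tokenEstimate * CHARS_PER_TOKEN
}

// Estimate tokens for a string directly from its length.
function estimate(input: string): number {
  return toTokenEstimate(input.length)
}
```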

### Display Format

- `IN X↓` - Input/context tokens (sent to model)
- `OUT Y↑` - Output tokens (generated by model)
- `~X think` - Reasoning tokens for thinking models
- Context percentage: `X% of limit`
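Assembled as a string, the format above might look like the following; `formatTokens` and `contextPercent` are hypothetical helpers for illustration, not functions in the codebase:

```typescript
// Sketch of the status-line format described above.
function formatTokens(input: number, output: number, reasoning: number): string {
  let line = `IN ${input.toLocaleString("en-US")}\u2193 OUT ${output.toLocaleString("en-US")}\u2191`
  // Reasoning tokens only appear for thinking models, prefixed with ~.
  if (reasoning > 0) line += ` ~${reasoning.toLocaleString("en-US")} think`
  return line
}

// Context usage as a percentage of the model's limit, capped at 100%.
function contextPercent(used: number, limit: number): string {
  return `${Math.min(100, Math.round((used / limit) * 100))}% of limit`
}
```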

## Implementation Tasks

### Phase 1: Add Streaming Token Updates to Processor

- [x] Import `Token` module in `packages/opencode/src/session/processor.ts`
- [x] Add `reasoningTotal` and `textTotal` character accumulators at processor creation
- [x] Update `reasoning-delta` handler to calculate and store `reasoningEstimate`:
```typescript
case "reasoning-delta":
reasoningTotal += value.text.length
input.assistantMessage.reasoningEstimate = Token.toTokenEstimate(reasoningTotal)
await Session.updateMessage(input.assistantMessage)
```
- [x] Update `text-delta` handler to calculate and store `outputEstimate`:
```typescript
case "text-delta":
textTotal += value.text.length
input.assistantMessage.outputEstimate = Token.toTokenEstimate(textTotal)
await Session.updateMessage(input.assistantMessage)
```
- [ ] Update `finish-step` to emit final `contextEstimate` from usage data
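The remaining `finish-step` task amounts to replacing running estimates with the provider's reported usage once it arrives. A minimal sketch, with field names mirroring the plan's schema rather than the real processor types:

```typescript
// Assumed shapes; the real types live in message-v2.ts and the provider SDK.
type Usage = { inputTokens?: number; outputTokens?: number }
type AssistantMessage = { contextEstimate?: number; outputEstimate?: number }

// Overwrite streaming estimates with final usage data when present,
// leaving estimates in place for fields the provider did not report.
function applyFinalUsage(msg: AssistantMessage, usage: Usage): AssistantMessage {
  if (usage.inputTokens !== undefined) msg.contextEstimate = usage.inputTokens
  if (usage.outputTokens !== undefined) msg.outputEstimate = usage.outputTokens
  return msg
}
```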

### Phase 2: Add Token Display State

- [x] Add `showTokens` signal to session component:
```typescript
const [showTokens, setShowTokens] = createSignal(kv.get("show_tokens", false))
```
- [ ] Add `contextLimit` memo that gets limit from current model/provider
- [x] Add to context provider: `showTokens: () => boolean`
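The unchecked `contextLimit` task could reduce to a lookup with a fallback; the `limit.context` field name and the 200k default below are assumptions to verify against the provider/model schema:

```typescript
// Assumed shape of the model info exposed by the provider config.
type ModelInfo = { limit?: { context?: number } }

// Hypothetical fallback when no model is selected or no limit is known.
const DEFAULT_CONTEXT_LIMIT = 200_000

// Resolve the context window for the current model, falling back safely.
function contextLimit(model?: ModelInfo): number {
  return model?.limit?.context ?? DEFAULT_CONTEXT_LIMIT
}
```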

### Phase 3: Add Toggle Tokens Command

- [x] Add "Toggle tokens" command to `command.register()` array:
```typescript
{
title: showTokens() ? "Hide tokens" : "Show tokens",
value: "session.toggle.tokens",
category: "Session",
onSelect: (dialog) => {
setShowTokens((prev) => {
const next = !prev
kv.set("show_tokens", next)
return next
})
dialog.clear()
},
}
```

### Phase 4: Update AssistantMessage Component

- [x] Add token calculation logic:
```typescript
const inputTokens = createMemo(() => {
const sent = props.message.sentEstimate ?? 0
const context = props.message.contextEstimate ?? 0
return sent + context
})
const outputTokens = createMemo(() => props.message.tokens?.output ?? props.message.outputEstimate ?? 0)
const reasoningTokens = createMemo(() => props.message.tokens?.reasoning ?? props.message.reasoningEstimate ?? 0)
```
- [x] Add conditional token display:
```tsx
<Show when={showTokens()}>
<text fg={theme.textMuted}>
IN {inputTokens().toLocaleString()}↓ OUT {outputTokens().toLocaleString()}↑
<Show when={reasoningTokens()}> ~{reasoningTokens().toLocaleString()} think</Show>
</text>
</Show>
```

### Phase 5: Update UserMessage Component

- [x] Add individual token count display when `showTokens()` is true:
```tsx
<Show when={showTokens() && props.message.sentEstimate}>
<text fg={theme.textMuted}>~{props.message.sentEstimate?.toLocaleString()} tokens</text>
</Show>
```

## Code References

### Internal Files

- `packages/opencode/src/util/token.ts` - Token utility functions (exists)
- `packages/opencode/src/session/message-v2.ts:308-309` - User message estimate fields (exists)
- `packages/opencode/src/session/message-v2.ts:369-372` - Assistant message estimate fields (exists)
- `packages/opencode/src/session/processor.ts:82-88` - reasoning-delta handler (needs update)
- `packages/opencode/src/session/processor.ts:305-315` - text-delta handler (needs update)
- `packages/opencode/src/session/processor.ts:251-270` - finish-step handler (needs update)
- `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx:1097-1162` - AssistantMessage component
- `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx:1001-1095` - UserMessage component
- `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx:241-777` - Command registration

### External References

- Original PR: https://github.com/sst/opencode/pull/4709

## Estimated Changes

| File | Lines Added | Lines Modified |
| -------------- | ----------- | -------------- |
| `processor.ts` | ~15 | ~10 |
| `index.tsx` | ~50 | ~15 |
| **Total** | ~65 | ~25 |

## Validation Criteria

- [x] Token estimates display during streaming (before final usage available)
- [x] `IN X↓` shows input/context tokens accurately
- [x] `OUT Y↑` shows output tokens, updating in real-time during generation
- [x] Reasoning tokens display for models that support thinking (e.g., Claude)
- [x] "Toggle tokens" command appears in command palette
- [x] Toggle persists via KV store across sessions
- [x] User messages show estimated token count when toggle enabled
- [x] Estimates use `~` prefix to indicate approximation
- [x] Final token counts from API replace estimates when available

## Dependencies

None - all required utilities and schema fields already exist.

## Risks & Considerations

1. **Estimation Accuracy**: Character-based estimation is approximate. Actual tokenization varies by model. Consider this acceptable for UX purposes.

2. **Performance**: Updating message on every delta may cause performance issues. Consider throttling updates (e.g., every 100ms or 100 chars).

3. **Context Limit**: Different models have different context limits. Need to properly fetch limit from provider/model configuration.

4. **Subtask Bug Status**: The regression bug mentioned in PR discussions (missing `updatePart` call after `taskTool.execute`) was previously fixed and the fix is still present. No action needed.
---

**File:** `CONTEXT/PLAN-4791-restore-bash-viewer-ansi-2025-12-09.md` (188 additions)
# Plan: Restore Bash Tool Expansion & ANSI Output Feature (PR #4791)

**Date:** 2025-12-09
**Related PR:** https://github.com/sst/opencode/pull/4791
**Status:** IMPLEMENTED - Feature restored 2025-12-10

## Overview

This plan documents the restoration of the "Bash Tool Expansion & Colored ANSI Output" feature that was originally added in PR #4791. The feature provides:

- Full-screen viewer for bash command outputs
- ANSI color rendering for terminal output
- Output truncation in chat with "Click to view full output" button
- Forced color output from CLI tools

## Current State Analysis

### What's Missing

| Component | File | Status |
| ------------------------------ | -------------------------------------------------------------- | ----------- |
| `ghostty-opentui` dependency | `packages/opencode/package.json` | **MISSING** |
| `ptyToText` import | `packages/opencode/src/tool/bash.ts` | **MISSING** |
| `FORCE_COLOR` env vars | `packages/opencode/src/tool/bash.ts` | **MISSING** |
| `bashOutput` signal | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| `showBashOutput` context | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| Full-screen bash viewer | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| `ghostty-terminal` component | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| Keyboard navigation for viewer | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| Bash tool truncated preview | `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx` | **MISSING** |
| `initialValue` prop | `packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx` | **MISSING** |
| `text` getter on PromptRef | `packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx` | **MISSING** |

### Current Bash Tool Rendering

The current bash tool simply strips ANSI codes and displays plain text:

```typescript
const output = createMemo(() => stripAnsi(props.metadata.output?.trim() ?? ""))
// ...
<text fg={theme.text}>{output()}</text>
```

## Technical Approach

### ANSI Color Rendering

- Use `ghostty-opentui` package for terminal rendering
- `GhosttyTerminalRenderable` component renders ANSI codes properly
- `ptyToText()` processes raw PTY output

### Environment Variables for Color Output

Force CLI tools to emit colored output even when stdout is not a TTY:

```typescript
env: {
FORCE_COLOR: "3",
CLICOLOR: "1",
CLICOLOR_FORCE: "1",
TERM: "xterm-256color",
TERM_PROGRAM: "bash-tool",
PY_COLORS: "1",
ANSICON: "1",
// ... more
}
```

### Full-Screen Viewer

- Toggle between chat view and full-screen bash viewer
- Keyboard navigation: ESC to close, Page Up/Down, Home/End for scrolling
- Preserve prompt text when switching views
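The keyboard handling above can be sketched as a key-to-action map; the action names and key strings are illustrative, since the real handler lives in the session route component:

```typescript
// Possible viewer actions; null means the key is not handled here.
type ViewerAction = "close" | "pageUp" | "pageDown" | "home" | "end" | null

// Map a normalized key name to a viewer action.
function viewerKeyAction(key: string): ViewerAction {
  switch (key) {
    case "escape": return "close"
    case "pageup": return "pageUp"
    case "pagedown": return "pageDown"
    case "home": return "home"
    case "end": return "end"
    default: return null
  }
}
```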

## Implementation Tasks

### Phase 1: Add Dependencies

- [x] Add `ghostty-opentui` to `packages/opencode/package.json`
```json
"ghostty-opentui": "1.3.6"
```
- [x] Run `bun install` to update lockfile

### Phase 2: Update Bash Tool

- [x] Add import to `packages/opencode/src/tool/bash.ts`:
```typescript
import { ptyToText } from "ghostty-opentui"
```
- [x] Update spawn environment variables (around line 225):
```typescript
env: {
...process.env,
FORCE_COLOR: "3",
CLICOLOR: "1",
CLICOLOR_FORCE: "1",
TERM: "xterm-256color",
TERM_PROGRAM: "bash-tool",
PY_COLORS: "1",
ANSICON: "1",
NO_COLOR: undefined,
}
```
- [x] Wrap output with `ptyToText()` before returning
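The last task above can be sketched as a small wrapper; `ptyToText` is the real `ghostty-opentui` export, but the trivial stand-in below replaces it so the call shape is visible, and `finalizeOutput` is a hypothetical helper name:

```typescript
// Stand-in for ghostty-opentui's ptyToText: the real function interprets
// raw PTY control sequences; this stub only normalizes line endings.
const ptyToText = (raw: string) => raw.replace(/\r\n/g, "\n")

// Normalize raw PTY output before storing it on the tool result.
function finalizeOutput(raw: string): string {
  return ptyToText(raw).trimEnd()
}
```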

### Phase 3: Update Session Index

- [x] Add `BashOutputView` type:
```typescript
type BashOutputView = {
command: string
output: () => string
}
```
- [x] Add `bashOutput` signal: `createSignal<BashOutputView | undefined>(undefined)`
- [x] Add `showBashOutput` function to context
- [x] Register `ghostty-terminal` component with opentui
- [x] Add keyboard handlers for viewer navigation (ESC, PageUp/Down, Home/End)
- [x] Add conditional rendering that switches between scrollbox and bash viewer
- [x] Add `promptDraft` signal for preserving prompt text
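Stripped of solid-js, the view-switching state above reduces to the following sketch; in the real component these are `createSignal` and context entries rather than module-level state:

```typescript
// The command plus a lazy accessor for its (possibly still-streaming) output.
type BashOutputView = { command: string; output: () => string }

let current: BashOutputView | undefined

// Open the full-screen viewer on a given command's output.
const showBashOutput = (view: BashOutputView) => { current = view }

// Close the viewer and return to the chat scrollbox.
const closeBashOutput = () => { current = undefined }

// The render path switches on this to pick scrollbox vs. viewer.
const isViewerOpen = () => current !== undefined
```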

### Phase 4: Update Bash Tool Renderer

- [x] Update the bash tool registration (around line 1382-1404):
- [x] Use `<ghostty-terminal>` for output preview
- [x] Limit preview to 20 lines
- [x] Add "Click to see full output" button when output exceeds limit
- [x] Wire click handler to `showBashOutput`
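The 20-line preview rule can be sketched as a pure function; `truncatePreview` is a hypothetical helper showing the intended behavior, not code from the renderer:

```typescript
const PREVIEW_LINES = 20

// Keep the first PREVIEW_LINES lines and report whether the
// "Click to see full output" affordance is needed.
function truncatePreview(output: string): { preview: string; truncated: boolean } {
  const lines = output.split("\n")
  if (lines.length <= PREVIEW_LINES) return { preview: output, truncated: false }
  return { preview: lines.slice(0, PREVIEW_LINES).join("\n"), truncated: true }
}
```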

### Phase 5: Update Prompt Component

- [ ] Add `initialValue` prop to `PromptProps` type (line 29-36) - Not needed for basic implementation
- [ ] Add `text` getter to `PromptRef` type (line 38-45) - Not needed for basic implementation
- [ ] Handle `initialValue` in `onMount` to restore prompt text - Not needed for basic implementation

## Code References

### Internal Files

- `packages/opencode/package.json` - Add ghostty-opentui dependency
- `packages/opencode/src/tool/bash.ts:225-233` - spawn() call, needs env vars
- `packages/opencode/src/tool/bash.ts:350-358` - return statement, needs ptyToText
- `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx:80-89` - Context definition
- `packages/opencode/src/cli/cmd/tui/routes/session/index.tsx:1382-1404` - Bash tool renderer
- `packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx:29-45` - PromptProps/PromptRef types

### External References

- Original PR: https://github.com/sst/opencode/pull/4791
- ghostty-opentui package: https://www.npmjs.com/package/ghostty-opentui

## Estimated Changes

| File | Lines Added | Lines Modified |
| ------------------ | ----------- | -------------- |
| `package.json` | 1 | 0 |
| `bash.ts` | 15 | 5 |
| `index.tsx` | ~150 | ~30 |
| `prompt/index.tsx` | 10 | 5 |
| **Total** | ~176 | ~40 |

## Validation Criteria

- [x] `bun install` succeeds with new dependency
- [x] CLI tools produce colored output (test with `ls --color`, `git status`)
- [x] Bash output in chat shows ANSI colors (not raw escape codes)
- [x] Long outputs are truncated to 20 lines in chat preview
- [x] "Click to see full output" button appears for truncated outputs
- [x] Clicking opens full-screen bash viewer
- [x] Full-screen viewer shows complete output with colors
- [x] ESC key closes full-screen viewer
- [x] Page Up/Down, Home/End work in viewer
- [ ] Prompt text is preserved when opening/closing viewer - Minor, can be addressed later

## Dependencies

- `ghostty-opentui` npm package (needs to be added)

## Risks & Considerations

1. **Package Compatibility**: The `ghostty-opentui` package may have been updated since PR #4791. Check for any API changes.

2. **Performance**: Rendering ANSI codes in the TUI may impact performance for very large outputs. The 20-line preview helps mitigate this.

3. **Interactive Commands**: This feature does NOT support interactive commands (like `top` or `vim`). It's strictly for static output rendering.

4. **Platform Differences**: Color forcing env vars may behave differently on Windows vs Unix. Test on multiple platforms.