From ada7ef58cc395a432057e443391eef8a72292ca4 Mon Sep 17 00:00:00 2001 From: Max Yankov Date: Mon, 17 Nov 2025 13:49:34 -0300 Subject: [PATCH 1/2] Add text, markdown, and chat output formats MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Implement multiple text-based output formats for Claude Code transcripts, providing alternatives to HTML for documentation and terminal viewing. New Features: - Text format: Verbose output with timestamps, token usage, and full details - Markdown format: Same as text with markdown heading hierarchy - Chat format: Compact conversation flow mimicking Claude Code UI - Uses symbols: > for user, ⏺ for assistant/tools, ⎿ for results - Truncates long outputs at 10 lines with "… +N lines" indicator Architecture: - Created content_extractor.py for shared content parsing logic - Eliminates duplication between HTML and text rendering pipelines - Both renderer.py and text_renderer.py use common extraction layer CLI: - Added --format option: html (default), text, markdown, chat - Examples: - claude-code-log dir/ --format text -o output.txt - claude-code-log dir/ --format markdown -o output.md - claude-code-log file.jsonl --format chat 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude --- CLAUDE.md | 27 +- README.md | 33 ++ claude_code_log/cli.py | 44 +- claude_code_log/content_extractor.py | 179 +++++++++ claude_code_log/converter.py | 129 ++++++ claude_code_log/renderer.py | 119 +++--- claude_code_log/text_renderer.py | 426 ++++++++++++++++++++ test/test_text_rendering.py | 576 +++++++++++++++++++++++++++ 8 files changed, 1462 insertions(+), 71 deletions(-) create mode 100644 claude_code_log/content_extractor.py create mode 100644 claude_code_log/text_renderer.py create mode 100644 test/test_text_rendering.py diff --git a/CLAUDE.md b/CLAUDE.md index 0e193d92..8cbc37f2 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1,13 +1,15 @@ # Claude Code Log -A Python CLI tool that 
converts Claude transcript JSONL files into readable HTML format. +A Python CLI tool that converts Claude Code transcript JSONL files into readable HTML, text, markdown, or chat format. ## Project Overview -This tool processes Claude Code transcript files (stored as JSONL) and generates clean, minimalist HTML pages with comprehensive session navigation and token usage tracking. It's designed to create a readable log of your Claude interactions with rich metadata and easy navigation. +This tool processes Claude Code transcript files (stored as JSONL) and generates clean, readable output in multiple formats with comprehensive session navigation and token usage tracking. It supports HTML for browser viewing, verbose text for detailed analysis, markdown for documentation, and compact chat format for quick conversation review. ## Key Features +- **Multiple Output Formats**: Generate HTML, plain text, markdown, or compact chat format from transcript files +- **Stdin Piping Support**: Pipe JSONL data directly for use in CI/CD pipelines and automation - **Interactive TUI (Terminal User Interface)**: Browse and manage Claude Code sessions with real-time navigation, summaries, and quick actions for HTML export and session resuming - **Project Hierarchy Processing**: Process entire `~/.claude/projects/` directory with linked index page - **Individual Session Files**: Generate separate HTML files for each session with navigation links @@ -113,10 +115,31 @@ claude-code-log /path/to/directory --from-date "yesterday" --to-date "today" claude-code-log /path/to/directory --from-date "3 days ago" --to-date "yesterday" ``` +### Text, Markdown, and Chat Output + +Generate non-HTML formats for documentation or quick review: + +```bash +# Verbose text format (timestamps, tokens, full tool details) +claude-code-log /path/to/directory --format text -o output.txt + +# Markdown format (for documentation) +claude-code-log /path/to/directory --format markdown -o output.md + +# Compact chat 
format (clean conversation, like Claude Code UI) +claude-code-log /path/to/directory --format chat -o chat.txt +``` + +**Format Comparison:** +- **text**: Verbose with timestamps, token usage, working directories +- **markdown**: Same as text with markdown heading hierarchy +- **chat**: Compact conversation flow with tool symbols (⏺ for tool use, ⎿ for results) + ## File Structure - `claude_code_log/parser.py` - Data extraction and parsing from JSONL files - `claude_code_log/renderer.py` - HTML generation and template rendering +- `claude_code_log/text_renderer.py` - Plain text and markdown rendering - `claude_code_log/converter.py` - High-level conversion orchestration - `claude_code_log/cli.py` - Command-line interface with project discovery - `claude_code_log/models.py` - Pydantic models for transcript data structures diff --git a/README.md b/README.md index 65c7ca7a..a2703f50 100644 --- a/README.md +++ b/README.md @@ -28,6 +28,7 @@ uvx claude-code-log@latest --open-browser ## Key Features +- **Multiple Output Formats**: Generate HTML, plain text, markdown, or compact chat output from transcript files - **Interactive TUI (Terminal User Interface)**: Browse and manage Claude Code sessions with real-time navigation, summaries, and quick actions for HTML export and session resuming - **Project Hierarchy Processing**: Process entire `~/.claude/projects/` directory with linked index page - **Individual Session Files**: Generate separate HTML files for each session with navigation links @@ -136,10 +137,42 @@ claude-code-log /path/to/directory --from-date "yesterday" --to-date "today" claude-code-log /path/to/directory --from-date "3 days ago" --to-date "yesterday" ``` +### Text, Markdown, and Chat Output + +Convert transcripts to plain text, markdown, or chat format for documentation or terminal viewing: + +```bash +# Generate plain text output (verbose with timestamps, token usage) +claude-code-log /path/to/directory --format text -o output.txt + +# Generate markdown output +claude-code-log 
/path/to/directory --format markdown -o output.md + +# Generate compact chat format (clean conversation flow) +claude-code-log /path/to/directory --format chat -o chat.txt + +# Single file with chat format (most readable) +claude-code-log transcript.jsonl --format chat +``` + +**Format Comparison:** + +- **text**: Verbose format with timestamps, token usage, working directories, and full tool details +- **markdown**: Same as text but with markdown heading hierarchy for better document integration +- **chat**: Compact format mimicking Claude Code UI - clean conversation flow with tool use symbols (⏺) and truncated results (⎿) + +**All Format Features:** +- Session headers with IDs and summaries (text/markdown only) +- User and assistant message separation +- Tool use and tool result rendering +- Thinking content blocks (text/markdown only) +- Chat format: clean, minimal output perfect for quick review + ## File Structure - `claude_code_log/parser.py` - Data extraction and parsing from JSONL files - `claude_code_log/renderer.py` - HTML generation and template rendering +- `claude_code_log/text_renderer.py` - Plain text and markdown rendering - `claude_code_log/converter.py` - High-level conversion orchestration - `claude_code_log/cli.py` - Command-line interface with project discovery - `claude_code_log/models.py` - Pydantic models for transcript data structures diff --git a/claude_code_log/cli.py b/claude_code_log/cli.py index 6274acd3..7bdae75a 100644 --- a/claude_code_log/cli.py +++ b/claude_code_log/cli.py @@ -10,7 +10,11 @@ import click from git import Repo, InvalidGitRepositoryError -from .converter import convert_jsonl_to_html, process_projects_hierarchy +from .converter import ( + convert_jsonl_to_html, + convert_jsonl_to_output, + process_projects_hierarchy, +) from .cache import CacheManager, get_library_version @@ -338,12 +342,20 @@ def _clear_html_files(input_path: Path, all_projects: bool) -> None: "-o", "--output", type=click.Path(path_type=Path), - 
help="Output HTML file path (default: input file with .html extension or combined_transcripts.html for directories)", + help="Output file path (default: input file with appropriate extension based on format)", +) +@click.option( + "-f", + "--format", + "output_format", + type=click.Choice(["html", "text", "markdown", "chat"], case_sensitive=False), + default="html", + help="Output format: html, text, markdown, or chat (default: html)", ) @click.option( "--open-browser", is_flag=True, - help="Open the generated HTML file in the default browser", + help="Open the generated HTML file in the default browser (only works with HTML format)", ) @click.option( "--from-date", @@ -358,12 +370,12 @@ def _clear_html_files(input_path: Path, all_projects: bool) -> None: @click.option( "--all-projects", is_flag=True, - help="Process all projects in ~/.claude/projects/ hierarchy and create linked HTML files", + help="Process all projects in ~/.claude/projects/ hierarchy and create linked files", ) @click.option( "--no-individual-sessions", is_flag=True, - help="Skip generating individual session HTML files (only create combined transcript)", + help="Skip generating individual session files (only create combined transcript)", ) @click.option( "--no-cache", @@ -388,6 +400,7 @@ def _clear_html_files(input_path: Path, all_projects: bool) -> None: def main( input_path: Optional[Path], output: Optional[Path], + output_format: str, open_browser: bool, from_date: Optional[str], to_date: Optional[str], @@ -398,7 +411,7 @@ def main( clear_html: bool, tui: bool, ) -> None: - """Convert Claude transcript JSONL files to HTML. + """Convert Claude transcript JSONL files to HTML, text, markdown, or chat. INPUT_PATH: Path to a Claude transcript JSONL file, directory containing JSONL files, or project path to convert. If not provided, defaults to ~/.claude/projects/ and --all-projects is used. 
""" @@ -406,7 +419,19 @@ def main( logging.basicConfig(level=logging.WARNING, format="%(levelname)s: %(message)s") try: - # Handle TUI mode + # Validate incompatible options + if output_format.lower() != "html" and tui: + click.echo("Error: TUI mode only works with HTML format", err=True) + sys.exit(1) + + if output_format.lower() != "html" and open_browser: + click.echo("Warning: --open-browser only works with HTML format", err=True) + + if output_format.lower() != "html" and all_projects: + click.echo("Error: --all-projects only works with HTML format", err=True) + sys.exit(1) + + # Handle TUI mode (HTML only) if tui: # Handle default case for TUI - use ~/.claude/projects if no input path if input_path is None: @@ -571,11 +596,12 @@ def main( f"Neither {input_path} nor {claude_path} exists" ) - output_path = convert_jsonl_to_html( + output_path = convert_jsonl_to_output( input_path, output, from_date, to_date, + output_format, not no_individual_sessions, not no_cache, ) @@ -583,7 +609,7 @@ def main( click.echo(f"Successfully converted {input_path} to {output_path}") else: jsonl_count = len(list(input_path.glob("*.jsonl"))) - if not no_individual_sessions: + if output_format.lower() == "html" and not no_individual_sessions: session_files = list(input_path.glob("session-*.html")) click.echo( f"Successfully combined {jsonl_count} transcript files from {input_path} to {output_path} and generated {len(session_files)} individual session files" diff --git a/claude_code_log/content_extractor.py b/claude_code_log/content_extractor.py new file mode 100644 index 00000000..995503fa --- /dev/null +++ b/claude_code_log/content_extractor.py @@ -0,0 +1,179 @@ +#!/usr/bin/env python3 +"""Extract data from ContentItem objects without formatting. + +This module provides shared content extraction logic used by both HTML and text renderers. +It separates data extraction from presentation formatting. 
+""" + +import json +from typing import Any, Dict, List, Union, Optional +from dataclasses import dataclass + +from .models import ( + ContentItem, + TextContent, + ToolUseContent, + ToolResultContent, + ThinkingContent, + ImageContent, +) + + +@dataclass +class ExtractedText: + """Extracted text content.""" + + text: str + + +@dataclass +class ExtractedThinking: + """Extracted thinking content.""" + + thinking: str + signature: Optional[str] = None + + +@dataclass +class ExtractedToolUse: + """Extracted tool use content.""" + + name: str + id: str + input: Dict[str, Any] + + +@dataclass +class ExtractedToolResult: + """Extracted tool result content.""" + + tool_use_id: str + is_error: bool + content: Union[str, List[Dict[str, Any]]] + + +@dataclass +class ExtractedImage: + """Extracted image content.""" + + media_type: str + data: str + + +# Union type for all extracted content +ExtractedContent = Union[ + ExtractedText, + ExtractedThinking, + ExtractedToolUse, + ExtractedToolResult, + ExtractedImage, +] + + +def extract_content_data(content: ContentItem) -> Optional[ExtractedContent]: + """Extract raw data from ContentItem without any formatting. + + Args: + content: A ContentItem object (TextContent, ToolUseContent, etc.) 
+ + Returns: + Extracted data as a dataclass, or None if content type is unknown + """ + # Handle TextContent + if isinstance(content, TextContent) or ( + hasattr(content, "type") and getattr(content, "type") == "text" + ): + text = getattr(content, "text", str(content)) + return ExtractedText(text=text) + + # Handle ThinkingContent + elif isinstance(content, ThinkingContent) or ( + hasattr(content, "type") and getattr(content, "type") == "thinking" + ): + thinking_text = getattr(content, "thinking", "") + signature = getattr(content, "signature", None) + return ExtractedThinking(thinking=thinking_text, signature=signature) + + # Handle ToolUseContent + elif isinstance(content, ToolUseContent) or ( + hasattr(content, "type") and getattr(content, "type") == "tool_use" + ): + tool_name = getattr(content, "name", "unknown") + tool_id = getattr(content, "id", "") + tool_input = getattr(content, "input", {}) + return ExtractedToolUse(name=tool_name, id=tool_id, input=tool_input) + + # Handle ToolResultContent + elif isinstance(content, ToolResultContent) or ( + hasattr(content, "type") and getattr(content, "type") == "tool_result" + ): + tool_use_id = getattr(content, "tool_use_id", "") + is_error = getattr(content, "is_error", False) + content_data = getattr(content, "content", "") + return ExtractedToolResult( + tool_use_id=tool_use_id, is_error=is_error, content=content_data + ) + + # Handle ImageContent + elif isinstance(content, ImageContent) or ( + hasattr(content, "type") and getattr(content, "type") == "image" + ): + source = getattr(content, "source", {}) + media_type = ( + getattr(source, "media_type", "unknown") + if hasattr(source, "media_type") + else "unknown" + ) + data = getattr(source, "data", "") if hasattr(source, "data") else "" + return ExtractedImage(media_type=media_type, data=data) + + # Unknown content type + return None + + +def format_tool_input_json(tool_input: Dict[str, Any], indent: int = 2) -> str: + """Format tool input as indented JSON 
string. + + Args: + tool_input: Tool input dictionary + indent: Number of spaces for JSON indentation + + Returns: + Formatted JSON string + """ + return json.dumps(tool_input, indent=indent) + + +def is_text_content(content: ContentItem) -> bool: + """Check if content is TextContent.""" + return isinstance(content, TextContent) or ( + hasattr(content, "type") and getattr(content, "type") == "text" + ) + + +def is_thinking_content(content: ContentItem) -> bool: + """Check if content is ThinkingContent.""" + return isinstance(content, ThinkingContent) or ( + hasattr(content, "type") and getattr(content, "type") == "thinking" + ) + + +def is_tool_use_content(content: ContentItem) -> bool: + """Check if content is ToolUseContent.""" + return isinstance(content, ToolUseContent) or ( + hasattr(content, "type") and getattr(content, "type") == "tool_use" + ) + + +def is_tool_result_content(content: ContentItem) -> bool: + """Check if content is ToolResultContent.""" + return isinstance(content, ToolResultContent) or ( + hasattr(content, "type") and getattr(content, "type") == "tool_result" + ) + + +def is_image_content(content: ContentItem) -> bool: + """Check if content is ImageContent.""" + return isinstance(content, ImageContent) or ( + hasattr(content, "type") and getattr(content, "type") == "image" + ) diff --git a/claude_code_log/converter.py b/claude_code_log/converter.py index 4dad595d..529f611c 100644 --- a/claude_code_log/converter.py +++ b/claude_code_log/converter.py @@ -32,6 +32,135 @@ is_html_outdated, get_project_display_name, ) +from .text_renderer import generate_text, generate_markdown, generate_chat + + +def convert_jsonl_to_output( + input_path: Path, + output_path: Optional[Path] = None, + from_date: Optional[str] = None, + to_date: Optional[str] = None, + output_format: str = "html", + generate_individual_sessions: bool = True, + use_cache: bool = True, + silent: bool = False, +) -> Path: + """Convert JSONL transcript(s) to specified output format. 
+ + Args: + input_path: Path to JSONL file or directory + output_path: Optional output file path + from_date: Optional start date filter + to_date: Optional end date filter + output_format: Output format - "html", "text", "markdown", or "chat" + generate_individual_sessions: Whether to generate individual session files (HTML only) + use_cache: Whether to use cache + silent: Whether to suppress output messages + + Returns: + Path to the generated output file + """ + if output_format.lower() == "html": + return convert_jsonl_to_html( + input_path, + output_path, + from_date, + to_date, + generate_individual_sessions, + use_cache, + silent, + ) + else: + # Text, markdown, or chat format + if not input_path.exists(): + raise FileNotFoundError(f"Input path not found: {input_path}") + + # Initialize cache manager for directory mode + cache_manager = None + if use_cache and input_path.is_dir(): + try: + library_version = get_library_version() + cache_manager = CacheManager(input_path, library_version) + except Exception as e: + if not silent: + print(f"Warning: Failed to initialize cache manager: {e}") + + # Determine output file extension + if output_format.lower() == "markdown": + extension = ".md" + else: + extension = ".txt" + + if input_path.is_file(): + # Single file mode + if output_path is None: + output_path = input_path.with_suffix(extension) + messages = load_transcript(input_path, silent=silent) + title = f"Claude Transcript - {input_path.stem}" + else: + # Directory mode + if output_path is None: + if output_format.lower() == "markdown": + output_filename = "combined_transcripts.md" + elif output_format.lower() == "chat": + output_filename = "combined_transcripts_chat.txt" + else: + output_filename = "combined_transcripts.txt" + output_path = input_path / output_filename + + # Ensure cache is fresh + if cache_manager: + ensure_fresh_cache( + input_path, cache_manager, from_date, to_date, silent + ) + + # Load messages + messages = load_directory_transcripts( + input_path, 
cache_manager, from_date, to_date, silent + ) + + # Extract working directories for title + working_directories = extract_working_directories(messages) + project_title = get_project_display_name( + input_path.name, working_directories + ) + title = f"Claude Transcripts - {project_title}" + + # Apply date filtering + messages = filter_messages_by_date(messages, from_date, to_date) + + # Update title with date range if specified + if from_date or to_date: + date_range_parts: List[str] = [] + if from_date: + date_range_parts.append(f"from {from_date}") + if to_date: + date_range_parts.append(f"to {to_date}") + date_range_str = " ".join(date_range_parts) + title += f" ({date_range_str})" + + # Generate text/markdown/chat output + if output_format.lower() == "markdown": + content = generate_markdown(messages, title) + elif output_format.lower() == "chat": + content = generate_chat(messages, title) + else: + content = generate_text(messages, title, format_type="text") + + # Write to file + assert output_path is not None + output_path.write_text(content, encoding="utf-8") + + if not silent: + if input_path.is_file(): + print(f"Successfully converted {input_path} to {output_path}") + else: + jsonl_count = len(list(input_path.glob("*.jsonl"))) + print( + f"Successfully combined {jsonl_count} transcript files from {input_path} to {output_path}" + ) + + return output_path def convert_jsonl_to_html( diff --git a/claude_code_log/renderer.py b/claude_code_log/renderer.py index 1a674919..7e464065 100644 --- a/claude_code_log/renderer.py +++ b/claude_code_log/renderer.py @@ -29,6 +29,13 @@ ImageContent, ) from .parser import extract_text_content +from .content_extractor import ( + extract_content_data, + ExtractedText, + ExtractedThinking, + ExtractedToolUse, + ExtractedToolResult, +) from .utils import ( is_command_message, is_local_command_output, @@ -1302,79 +1309,71 @@ def render_message_content(content: List[ContentItem], message_type: str) -> str Note: This does NOT handle 
user-specific preprocessing like IDE tags or compacted session summaries. Those should be handled by render_user_message_content. """ - if len(content) == 1 and isinstance(content[0], TextContent): - if message_type == "user": - # User messages are shown as-is in preformatted blocks - escaped_text = escape_html(content[0].text) - return "
<pre>" + escaped_text + "</pre>
" - else: - # Assistant messages get markdown rendering - return render_markdown(content[0].text) + # Fast path for single text content + if len(content) == 1: + extracted = extract_content_data(content[0]) + if isinstance(extracted, ExtractedText): + if message_type == "user": + # User messages are shown as-is in preformatted blocks + escaped_text = escape_html(extracted.text) + return "
<pre>" + escaped_text + "</pre>
" + else: + # Assistant messages get markdown rendering + return render_markdown(extracted.text) # content is a list of ContentItem objects rendered_parts: List[str] = [] for item in content: - # Handle both custom and Anthropic types - item_type = getattr(item, "type", None) - - if type(item) is TextContent or ( - hasattr(item, "type") and hasattr(item, "text") and item_type == "text" - ): - # Handle both TextContent and Anthropic TextBlock - text_value = getattr(item, "text", str(item)) + # Extract data from content item + extracted = extract_content_data(item) + + if extracted is None: + continue + + if isinstance(extracted, ExtractedText): if message_type == "user": # User messages are shown as-is in preformatted blocks - escaped_text = escape_html(text_value) + escaped_text = escape_html(extracted.text) rendered_parts.append("
<pre>" + escaped_text + "</pre>
") else: # Assistant messages get markdown rendering - rendered_parts.append(render_markdown(text_value)) - elif type(item) is ToolUseContent or ( - hasattr(item, "type") and item_type == "tool_use" - ): - # Handle both ToolUseContent and Anthropic ToolUseBlock - # Convert Anthropic type to our format if necessary - if not isinstance(item, ToolUseContent): - # Create a ToolUseContent from Anthropic ToolUseBlock - tool_use_item = ToolUseContent( - type="tool_use", - id=getattr(item, "id", ""), - name=getattr(item, "name", ""), - input=getattr(item, "input", {}), - ) - else: - tool_use_item = item + rendered_parts.append(render_markdown(extracted.text)) + + elif isinstance(extracted, ExtractedToolUse): + # Create ToolUseContent for specialized formatter + tool_use_item = ToolUseContent( + type="tool_use", + id=extracted.id, + name=extracted.name, + input=extracted.input, + ) rendered_parts.append(format_tool_use_content(tool_use_item)) # type: ignore - elif type(item) is ToolResultContent or ( - hasattr(item, "type") and item_type == "tool_result" - ): - # Handle both ToolResultContent and Anthropic types - if not isinstance(item, ToolResultContent): - # Convert from Anthropic type if needed - tool_result_item = ToolResultContent( - type="tool_result", - tool_use_id=getattr(item, "tool_use_id", ""), - content=getattr(item, "content", ""), - is_error=getattr(item, "is_error", False), - ) - else: - tool_result_item = item + + elif isinstance(extracted, ExtractedToolResult): + # Create ToolResultContent for specialized formatter + tool_result_item = ToolResultContent( + type="tool_result", + tool_use_id=extracted.tool_use_id, + content=extracted.content, + is_error=extracted.is_error, + ) rendered_parts.append(format_tool_result_content(tool_result_item)) # type: ignore - elif type(item) is ThinkingContent or ( - hasattr(item, "type") and item_type == "thinking" - ): - # Handle both ThinkingContent and Anthropic ThinkingBlock - if not isinstance(item, ThinkingContent): 
- # Convert from Anthropic type if needed - thinking_item = ThinkingContent( - type="thinking", thinking=getattr(item, "thinking", str(item)) - ) - else: - thinking_item = item + + elif isinstance(extracted, ExtractedThinking): + # Create ThinkingContent for specialized formatter + thinking_item = ThinkingContent( + type="thinking", + thinking=extracted.thinking, + signature=extracted.signature, + ) rendered_parts.append(format_thinking_content(thinking_item)) # type: ignore - elif type(item) is ImageContent: - rendered_parts.append(format_image_content(item)) # type: ignore + + else: # ExtractedImage + # For images, we still need the original ImageContent structure + # So we'll keep the original item if it's already ImageContent + if isinstance(item, ImageContent): + rendered_parts.append(format_image_content(item)) # type: ignore return "\n".join(rendered_parts) diff --git a/claude_code_log/text_renderer.py b/claude_code_log/text_renderer.py new file mode 100644 index 00000000..7894705e --- /dev/null +++ b/claude_code_log/text_renderer.py @@ -0,0 +1,426 @@ +#!/usr/bin/env python3 +"""Render Claude transcript data to plain text/markdown format.""" + +import json +from typing import List, Dict, Optional + +from .models import ( + TranscriptEntry, + AssistantTranscriptEntry, + UserTranscriptEntry, + SummaryTranscriptEntry, + SystemTranscriptEntry, + ContentItem, + UsageInfo, +) +from .parser import extract_text_content +from .renderer import format_timestamp +from .content_extractor import ( + extract_content_data, + ExtractedText, + ExtractedThinking, + ExtractedToolUse, + ExtractedToolResult, + format_tool_input_json, +) + + +def format_usage_info(usage: Optional[UsageInfo]) -> str: + """Format token usage information.""" + if not usage: + return "" + + parts: List[str] = [] + if usage.input_tokens is not None: + parts.append(f"Input: {usage.input_tokens}") + if usage.output_tokens is not None: + parts.append(f"Output: {usage.output_tokens}") + if 
usage.cache_creation_input_tokens: + parts.append(f"Cache Creation: {usage.cache_creation_input_tokens}") + if usage.cache_read_input_tokens: + parts.append(f"Cache Read: {usage.cache_read_input_tokens}") + + return " | ".join(parts) if parts else "" + + +def render_text_content(content: ContentItem, indent: int = 0) -> str: + """Render a single content item as plain text.""" + prefix = " " * indent + + # Extract data from content item + extracted = extract_content_data(content) + + if extracted is None: + return f"{prefix}[UNKNOWN CONTENT TYPE: {type(content).__name__}]" + + # Handle text content + if isinstance(extracted, ExtractedText): + lines = extracted.text.split("\n") + return "\n".join(f"{prefix}{line}" for line in lines) + + # Handle thinking content + elif isinstance(extracted, ExtractedThinking): + lines = extracted.thinking.split("\n") + result: List[str] = [f"{prefix}[THINKING]"] + result.extend(f"{prefix} {line}" for line in lines) + return "\n".join(result) + + # Handle tool use + elif isinstance(extracted, ExtractedToolUse): + result: List[str] = [f"{prefix}[TOOL USE: {extracted.name}]"] + if extracted.id: + result.append(f"{prefix} ID: {extracted.id}") + if extracted.input: + # Format input as JSON with indentation + input_json = format_tool_input_json(extracted.input, indent=2) + for line in input_json.split("\n"): + result.append(f"{prefix} {line}") + return "\n".join(result) + + # Handle tool result + elif isinstance(extracted, ExtractedToolResult): + status = "ERROR" if extracted.is_error else "RESULT" + result: List[str] = [f"{prefix}[TOOL {status}]"] + if extracted.tool_use_id: + result.append(f"{prefix} Tool Use ID: {extracted.tool_use_id}") + + # Format content + if isinstance(extracted.content, str): + lines = extracted.content.split("\n") + for line in lines: + result.append(f"{prefix} {line}") + elif isinstance(extracted.content, list): # type: ignore[reportUnnecessaryIsInstance] + # Handle structured content + for item in 
extracted.content: + if isinstance(item, dict): # type: ignore[reportUnnecessaryIsInstance] + item_json = json.dumps(item, indent=2) + for line in item_json.split("\n"): + result.append(f"{prefix} {line}") + else: + result.append(f"{prefix} {item}") + else: + result.append(f"{prefix} {extracted.content}") + + return "\n".join(result) + + # Handle image content + else: # ExtractedImage + return f"{prefix}[IMAGE: {extracted.media_type}]" + + +def render_message_contents(content_list: List[ContentItem], indent: int = 0) -> str: + """Render a list of content items.""" + if not content_list: + return "" + + parts: List[str] = [] + for content in content_list: + rendered = render_text_content(content, indent) + if rendered: + parts.append(rendered) + + return "\n".join(parts) + + +def render_user_message(message: UserTranscriptEntry, format_type: str = "text") -> str: + """Render a user message in plain text format.""" + lines: List[str] = [] + + # Header + timestamp = format_timestamp(message.timestamp) + if format_type == "markdown": + lines.append(f"### User ({timestamp})") + lines.append("") + else: + lines.append("=" * 80) + lines.append(f"USER | {timestamp}") + if message.cwd: + lines.append(f"Working Directory: {message.cwd}") + lines.append("=" * 80) + + # Content + if hasattr(message.message, "content"): + content = message.message.content + if isinstance(content, str): + lines.append(content) + else: + # Content is List[ContentItem] + lines.append(render_message_contents(content)) + + lines.append("") + return "\n".join(lines) + + +def render_assistant_message( + message: AssistantTranscriptEntry, format_type: str = "text" +) -> str: + """Render an assistant message in plain text format.""" + lines: List[str] = [] + + # Header + timestamp = format_timestamp(message.timestamp) + usage_str = ( + format_usage_info(message.message.usage) + if hasattr(message.message, "usage") + else "" + ) + + if format_type == "markdown": + lines.append(f"### Assistant 
({timestamp})") + if usage_str: + lines.append(f"*{usage_str}*") + lines.append("") + else: + lines.append("-" * 80) + lines.append(f"ASSISTANT | {timestamp}") + if usage_str: + lines.append(f"Tokens: {usage_str}") + if message.message.model: + lines.append(f"Model: {message.message.model}") + lines.append("-" * 80) + + # Content + if hasattr(message.message, "content") and message.message.content: + lines.append(render_message_contents(message.message.content)) + + lines.append("") + return "\n".join(lines) + + +def render_summary(message: SummaryTranscriptEntry, format_type: str = "text") -> str: + """Render a session summary.""" + if format_type == "markdown": + return f"**Session Summary:** {message.summary}\n\n" + else: + return f"[SESSION SUMMARY] {message.summary}\n\n" + + +def render_system_message( + message: SystemTranscriptEntry, format_type: str = "text" +) -> str: + """Render a system message.""" + timestamp = format_timestamp(message.timestamp) + level = getattr(message, "level", "info").upper() + + if format_type == "markdown": + return f"*System {level} ({timestamp}):* {message.content}\n\n" + else: + return f"[SYSTEM {level}] {timestamp}: {message.content}\n\n" + + +def generate_text( + messages: List[TranscriptEntry], + title: Optional[str] = None, + format_type: str = "text", + include_summaries: bool = False, + include_system_messages: bool = False, +) -> str: + """Generate plain text or markdown from transcript messages. 
+ + Args: + messages: List of transcript entries to render + title: Optional title for the output + format_type: Output format - "text" or "markdown" + include_summaries: Whether to include session summaries + include_system_messages: Whether to include system messages + + Returns: + Formatted text output + """ + if not title: + title = "Claude Transcript" + + lines: List[str] = [] + + # Add title + if format_type == "markdown": + lines.append(f"# {title}") + lines.append("") + else: + lines.append("=" * 80) + lines.append(title.center(80)) + lines.append("=" * 80) + lines.append("") + + # Group messages by session if needed + session_summaries: Dict[str, str] = {} + uuid_to_session: Dict[str, str] = {} + + # Build mapping from message UUID to session ID for summaries + for message in messages: + if hasattr(message, "uuid") and hasattr(message, "sessionId"): + message_uuid = getattr(message, "uuid", "") + session_id = getattr(message, "sessionId", "") + if ( + message_uuid + and session_id + and isinstance(message, AssistantTranscriptEntry) + ): + uuid_to_session[message_uuid] = session_id + + # Map summaries to sessions + if include_summaries: + for message in messages: + if isinstance(message, SummaryTranscriptEntry): + leaf_uuid = message.leafUuid + if leaf_uuid in uuid_to_session: + session_summaries[uuid_to_session[leaf_uuid]] = message.summary + + # Track current session for summary insertion + current_session = None + session_started = False + + # Render messages + for message in messages: + # Handle session changes + if hasattr(message, "sessionId"): + session_id = getattr(message, "sessionId", "") + if session_id and session_id != current_session: + current_session = session_id + session_started = True + + # Add session separator + if format_type == "markdown": + lines.append(f"## Session: {session_id[:8]}...") + if session_id in session_summaries: + lines.append(f"**Summary:** {session_summaries[session_id]}") + lines.append("") + else: + 
                    lines.append("\n" + "#" * 80)
+                    lines.append(f"# SESSION: {session_id}")
+                    if session_id in session_summaries:
+                        lines.append(f"# Summary: {session_summaries[session_id]}")
+                    lines.append("#" * 80)
+                    lines.append("")
+
+        # Render message based on type
+        if isinstance(message, UserTranscriptEntry):
+            lines.append(render_user_message(message, format_type))
+        elif isinstance(message, AssistantTranscriptEntry):
+            lines.append(render_assistant_message(message, format_type))
+        elif isinstance(message, SummaryTranscriptEntry):
+            # Render only summaries that could not be attached to a session
+            # header above; mapped summaries were already shown there
+            if include_summaries and message.leafUuid not in uuid_to_session:
+                lines.append(render_summary(message, format_type))
+        elif isinstance(message, SystemTranscriptEntry):
+            if include_system_messages:
+                lines.append(render_system_message(message, format_type))
+        else:
+            # QueueOperationTranscriptEntry: skip in text output
+            pass
+
+    return "\n".join(lines)
+
+
+def generate_markdown(
+    messages: List[TranscriptEntry], title: Optional[str] = None
+) -> str:
+    """Generate markdown format output (convenience wrapper)."""
+    return generate_text(
+        messages, title, format_type="markdown", include_summaries=True
+    )
+
+
+def _truncate_lines(text: str, max_lines: int = 10) -> str:
+    """Truncate text to a maximum number of lines."""
+    lines_list = text.split("\n")
+    if len(lines_list) <= max_lines:
+        return text
+
+    truncated = "\n".join(lines_list[:max_lines])
+    remaining = len(lines_list) - max_lines
+    return f"{truncated}\n… +{remaining} lines"
+
+
+def generate_chat(messages: List[TranscriptEntry], title: Optional[str] = None) -> str:
+    """Generate compact chat format output: a clean conversation flow with tool use. 
+
+    Args:
+        messages: List of transcript entries to render
+        title: Optional title (not used in chat format for cleaner output)
+
+    Returns:
+        Formatted chat-style text output
+    """
+    lines: List[str] = []
+
+    for message in messages:
+        # Render user and assistant messages for conversation flow
+        if isinstance(message, UserTranscriptEntry):
+            # Check for tool results first
+            has_tool_result = False
+            if hasattr(message.message, "content") and isinstance(
+                message.message.content, list
+            ):
+                for item in message.message.content:
+                    extracted = extract_content_data(item)
+
+                    if isinstance(extracted, ExtractedToolResult):
+                        has_tool_result = True
+                        # Show the tool result with truncated output: the first
+                        # line carries the ⎿ marker, and continuation lines are
+                        # indented five spaces to align under it
+                        if isinstance(extracted.content, str):
+                            truncated = _truncate_lines(extracted.content, 10)
+                            result_lines = truncated.split("\n")
+                            lines.append(f"  ⎿  {result_lines[0]}")
+                            for line in result_lines[1:]:
+                                lines.append(f"     {line}")
+                            lines.append("")
+
+            # If no tool result, show the user message itself
+            if not has_tool_result:
+                if hasattr(message.message, "content"):
+                    content = message.message.content
+                    if isinstance(content, str):
+                        text = content
+                    else:
+                        # Extract text from content list
+                        text = extract_text_content(content)
+
+                    if text:
+                        lines.append(f"> {text}")
+                        lines.append("")
+
+        elif isinstance(message, AssistantTranscriptEntry):
+            # Show assistant text and tool use
+            if hasattr(message.message, "content") and message.message.content:
+                text_parts: List[str] = []
+                tool_parts: List[str] = []
+
+                for item in message.message.content:
+                    extracted = extract_content_data(item)
+
+                    if isinstance(extracted, ExtractedText):
+                        if extracted.text:
+                            text_parts.append(extracted.text)
+
+                    elif isinstance(extracted, ExtractedToolUse):
+                        # Show the tool use compactly on one line, truncated
+                        if extracted.input:
+                            input_str = json.dumps(
+                                extracted.input, separators=(",", ":")
+                            )
+                            if 
len(input_str) > 100: + input_str = input_str[:100] + "…" + tool_parts.append(f"⏺ {extracted.name}({input_str})") + else: + tool_parts.append(f"⏺ {extracted.name}()") + + # Output assistant message + if text_parts or tool_parts: + if text_parts: + combined_text = "\n".join(text_parts) + lines.append(f"⏺ {combined_text}") + if tool_parts: + for tool_line in tool_parts: + lines.append(tool_line) + lines.append("") + + # Skip summaries, system messages, and queue operations for clean chat flow + + return "\n".join(lines) diff --git a/test/test_text_rendering.py b/test/test_text_rendering.py new file mode 100644 index 00000000..f22a4278 --- /dev/null +++ b/test/test_text_rendering.py @@ -0,0 +1,576 @@ +#!/usr/bin/env python3 +"""Test cases for text and markdown rendering.""" + +import json +import tempfile +from pathlib import Path +from claude_code_log.parser import load_transcript +from claude_code_log.text_renderer import ( + generate_text, + generate_markdown, + render_text_content, + format_usage_info, +) +from claude_code_log.models import TextContent, ToolUseContent, UsageInfo + + +def test_text_rendering_basic(): + """Test basic plain text rendering of user and assistant messages.""" + user_message = { + "type": "user", + "timestamp": "2025-06-11T22:45:17.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_001", + "message": { + "role": "user", + "content": [{"type": "text", "text": "Hello, can you help me?"}], + }, + } + + assistant_message = { + "type": "assistant", + "timestamp": "2025-06-11T22:45:18.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_002", + "message": { + "id": "msg_001", + "type": "message", + "role": "assistant", + "model": "claude-3-5-sonnet-20241022", + "content": [{"type": "text", "text": "Of course! 
How can I assist you?"}], + "stop_reason": "end_turn", + "usage": { + "input_tokens": 100, + "output_tokens": 50, + "cache_creation_input_tokens": 0, + "cache_read_input_tokens": 0, + }, + }, + } + + # Create temp file with messages + with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f: + f.write(json.dumps(user_message) + "\n") + f.write(json.dumps(assistant_message) + "\n") + f.flush() + test_file_path = Path(f.name) + + try: + messages = load_transcript(test_file_path) + assert len(messages) == 2, f"Expected 2 messages, got {len(messages)}" + + # Generate plain text + text_output = generate_text(messages, "Test Transcript", format_type="text") + + # Verify basic structure + assert "Test Transcript" in text_output, "Title should be in output" + assert "USER" in text_output, "USER label should be in output" + assert "ASSISTANT" in text_output, "ASSISTANT label should be in output" + assert "Hello, can you help me?" in text_output, ( + "User message content should be in output" + ) + assert "Of course! How can I assist you?" 
in text_output, ( + "Assistant message content should be in output" + ) + assert "Tokens: Input: 100 | Output: 50" in text_output, ( + "Token usage should be in output" + ) + + print("✓ Test passed: Basic text rendering works") + + finally: + test_file_path.unlink() + + +def test_markdown_rendering_basic(): + """Test basic markdown rendering.""" + user_message = { + "type": "user", + "timestamp": "2025-06-11T22:45:17.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_001", + "message": { + "role": "user", + "content": [{"type": "text", "text": "What is 2+2?"}], + }, + } + + assistant_message = { + "type": "assistant", + "timestamp": "2025-06-11T22:45:18.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_002", + "message": { + "id": "msg_001", + "type": "message", + "role": "assistant", + "model": "claude-3-5-sonnet-20241022", + "content": [{"type": "text", "text": "2+2 equals 4."}], + "stop_reason": "end_turn", + }, + } + + with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f: + f.write(json.dumps(user_message) + "\n") + f.write(json.dumps(assistant_message) + "\n") + f.flush() + test_file_path = Path(f.name) + + try: + messages = load_transcript(test_file_path) + + # Generate markdown + markdown_output = generate_markdown(messages, "Test Transcript") + + # Verify markdown structure + assert "# Test Transcript" in markdown_output, "Title should be H1 in markdown" + assert "### User" in markdown_output, "User should be H3 in markdown" + assert "### Assistant" in markdown_output, "Assistant should be H3 in markdown" + assert "What is 2+2?" in markdown_output, "User message should be in output" + assert "2+2 equals 4." 
in markdown_output, ( + "Assistant message should be in output" + ) + + print("✓ Test passed: Basic markdown rendering works") + + finally: + test_file_path.unlink() + + +def test_tool_use_rendering(): + """Test rendering of tool use messages in text format.""" + assistant_message = { + "type": "assistant", + "timestamp": "2025-06-11T22:45:18.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_001", + "message": { + "id": "msg_001", + "type": "message", + "role": "assistant", + "model": "claude-3-5-sonnet-20241022", + "content": [ + { + "type": "tool_use", + "id": "tool_001", + "name": "Read", + "input": {"file_path": "/tmp/test.txt"}, + } + ], + "stop_reason": "tool_use", + }, + } + + with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f: + f.write(json.dumps(assistant_message) + "\n") + f.flush() + test_file_path = Path(f.name) + + try: + messages = load_transcript(test_file_path) + + # Generate plain text + text_output = generate_text(messages, "Test Transcript", format_type="text") + + # Verify tool use rendering + assert "[TOOL USE: Read]" in text_output, "Tool use should be labeled" + assert "ID: tool_001" in text_output, "Tool ID should be in output" + assert '"/tmp/test.txt"' in text_output or "/tmp/test.txt" in text_output, ( + "Tool input should be in output" + ) + + print("✓ Test passed: Tool use rendering works") + + finally: + test_file_path.unlink() + + +def test_format_usage_info(): + """Test token usage formatting.""" + # Test with all fields + usage = UsageInfo( + input_tokens=100, + output_tokens=50, + cache_creation_input_tokens=20, + cache_read_input_tokens=30, + ) + formatted = format_usage_info(usage) + assert "Input: 100" in formatted + assert "Output: 50" in formatted + assert "Cache Creation: 20" in formatted + assert "Cache Read: 30" in formatted + + # Test with minimal fields + usage_minimal = 
UsageInfo(input_tokens=100, output_tokens=50) + formatted_minimal = format_usage_info(usage_minimal) + assert "Input: 100" in formatted_minimal + assert "Output: 50" in formatted_minimal + assert "Cache Creation" not in formatted_minimal + + # Test with None + formatted_none = format_usage_info(None) + assert formatted_none == "" + + print("✓ Test passed: Usage info formatting works") + + +def test_render_text_content(): + """Test individual content item rendering.""" + # Test text content + text_item = TextContent(type="text", text="Hello world") + rendered = render_text_content(text_item) + assert "Hello world" in rendered + + # Test tool use content + tool_item = ToolUseContent( + type="tool_use", + id="tool_123", + name="TestTool", + input={"param": "value"}, + ) + rendered_tool = render_text_content(tool_item) + assert "[TOOL USE: TestTool]" in rendered_tool + assert "tool_123" in rendered_tool + + print("✓ Test passed: Individual content rendering works") + + +def test_session_summaries(): + """Test that session summaries are included in text output.""" + user_message = { + "type": "user", + "timestamp": "2025-06-11T22:45:17.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_001", + "message": { + "role": "user", + "content": [{"type": "text", "text": "Test message"}], + }, + } + + assistant_message = { + "type": "assistant", + "timestamp": "2025-06-11T22:45:18.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_002", + "message": { + "id": "msg_001", + "type": "message", + "role": "assistant", + "model": "claude-3-5-sonnet-20241022", + "content": [{"type": "text", "text": "Response"}], + "stop_reason": "end_turn", + }, + } + + summary_message = { + "type": "summary", + "summary": "Testing summary feature", + "leafUuid": 
"test_msg_002", + } + + with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f: + f.write(json.dumps(user_message) + "\n") + f.write(json.dumps(assistant_message) + "\n") + f.write(json.dumps(summary_message) + "\n") + f.flush() + test_file_path = Path(f.name) + + try: + messages = load_transcript(test_file_path) + + # Generate markdown (includes summaries by default) + markdown_output = generate_markdown(messages, "Test Transcript") + + # Verify summary is in session header + assert "Testing summary feature" in markdown_output, ( + "Summary should be in markdown output" + ) + + print("✓ Test passed: Session summaries are included") + + finally: + test_file_path.unlink() + + +def test_chat_format_basic(): + """Test compact chat format rendering.""" + user_message = { + "type": "user", + "timestamp": "2025-06-11T22:45:17.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_001", + "message": { + "role": "user", + "content": [{"type": "text", "text": "Hello, can you help me?"}], + }, + } + + assistant_message = { + "type": "assistant", + "timestamp": "2025-06-11T22:45:18.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_002", + "message": { + "id": "msg_001", + "type": "message", + "role": "assistant", + "model": "claude-3-5-sonnet-20241022", + "content": [{"type": "text", "text": "Of course! 
How can I assist you?"}], + "stop_reason": "end_turn", + }, + } + + with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f: + f.write(json.dumps(user_message) + "\n") + f.write(json.dumps(assistant_message) + "\n") + f.flush() + test_file_path = Path(f.name) + + try: + messages = load_transcript(test_file_path) + + # Import generate_chat + from claude_code_log.text_renderer import generate_chat + + # Generate chat format + chat_output = generate_chat(messages) + + # Verify chat format - clean and simple with new symbols + assert "> Hello, can you help me?" in chat_output, ( + "User message should be prefixed with >" + ) + assert "⏺ Of course! How can I assist you?" in chat_output, ( + "Assistant message should be prefixed with ⏺" + ) + # Should NOT have timestamps or token info or old prefixes + assert "User:" not in chat_output, "Should not have 'User:' prefix" + assert "Assistant:" not in chat_output, "Should not have 'Assistant:' prefix" + assert "2025-06-11" not in chat_output, "Should not have timestamps" + assert "Tokens:" not in chat_output, "Should not have token usage" + assert "====" not in chat_output, "Should not have separator lines" + + print("✓ Test passed: Chat format renders cleanly") + + finally: + test_file_path.unlink() + + +def test_chat_format_with_tool_use(): + """Test chat format with tool use (should show compactly).""" + assistant_message = { + "type": "assistant", + "timestamp": "2025-06-11T22:45:18.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_001", + "message": { + "id": "msg_001", + "type": "message", + "role": "assistant", + "model": "claude-3-5-sonnet-20241022", + "content": [ + {"type": "text", "text": "I'll read that file for you."}, + { + "type": "tool_use", + "id": "tool_001", + "name": "Read", + "input": {"file_path": "/tmp/test.txt"}, + }, + ], + "stop_reason": "tool_use", + }, + } + + 
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f: + f.write(json.dumps(assistant_message) + "\n") + f.flush() + test_file_path = Path(f.name) + + try: + messages = load_transcript(test_file_path) + + from claude_code_log.text_renderer import generate_chat + + chat_output = generate_chat(messages) + + # Verify tool use is shown compactly with new format + assert "⏺ I'll read that file for you." in chat_output, ( + "Assistant text should be prefixed with ⏺" + ) + assert "⏺ Read(" in chat_output, "Tool use should be shown with ⏺ symbol" + assert "file_path" in chat_output, "Tool input should be in output" + + print("✓ Test passed: Chat format shows tool use compactly") + + finally: + test_file_path.unlink() + + +def test_chat_format_tool_result_truncation(): + """Test chat format with tool result truncation and indentation.""" + assistant_tool_message = { + "type": "assistant", + "timestamp": "2025-06-11T22:45:18.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_001", + "message": { + "id": "msg_001", + "type": "message", + "role": "assistant", + "model": "claude-3-5-sonnet-20241022", + "content": [ + { + "type": "tool_use", + "id": "tool_001", + "name": "Bash", + "input": {"command": "ls -la", "description": "List files"}, + } + ], + "stop_reason": "tool_use", + }, + } + + # Create a long multi-line tool result (15 lines) + tool_result_lines = [f"Line {i} of output" for i in range(1, 16)] + tool_result_content = "\n".join(tool_result_lines) + + user_tool_result = { + "type": "user", + "timestamp": "2025-06-11T22:45:19.436Z", + "parentUuid": None, + "isSidechain": False, + "userType": "human", + "cwd": "/tmp", + "sessionId": "test_session", + "version": "1.0.0", + "uuid": "test_msg_002", + "message": { + "role": "user", + "content": [ + { + "type": "tool_result", + "tool_use_id": "tool_001", + "content": tool_result_content, 
+ } + ], + }, + } + + with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f: + f.write(json.dumps(assistant_tool_message) + "\n") + f.write(json.dumps(user_tool_result) + "\n") + f.flush() + test_file_path = Path(f.name) + + try: + messages = load_transcript(test_file_path) + + from claude_code_log.text_renderer import generate_chat + + chat_output = generate_chat(messages) + + # Verify tool use with arguments + assert "⏺ Bash(" in chat_output, "Tool use should have ⏺ symbol" + assert "command" in chat_output, "Tool arguments should be shown" + assert "ls -la" in chat_output, "Tool argument values should be shown" + + # Verify tool result with truncation indicator + assert "⎿" in chat_output, "Tool result should have ⎿ symbol" + assert "Line 1 of output" in chat_output, "First line should be in output" + assert "Line 10 of output" in chat_output, "10th line should be in output" + assert "Line 15 of output" not in chat_output, ( + "Line beyond 10 should not be in output" + ) + assert "… +5 lines" in chat_output, "Truncation indicator should show +5 lines" + + # Verify indentation (all lines after first should be indented) + lines = chat_output.split("\n") + tool_result_start_idx = None + for i, line in enumerate(lines): + if "⎿" in line: + tool_result_start_idx = i + break + + assert tool_result_start_idx is not None, "Tool result should be in output" + + # Check that subsequent lines are indented + for i in range(tool_result_start_idx + 1, tool_result_start_idx + 5): + if i < len(lines) and lines[i].strip(): # Skip empty lines + assert lines[i].startswith(" "), ( + f"Line {i} should be indented with 5 spaces" + ) + + print( + "✓ Test passed: Chat format handles tool result truncation and indentation" + ) + + finally: + test_file_path.unlink() + + +if __name__ == "__main__": + test_text_rendering_basic() + test_markdown_rendering_basic() + test_tool_use_rendering() + test_format_usage_info() + test_render_text_content() + 
test_session_summaries() + test_chat_format_basic() + test_chat_format_with_tool_use() + test_chat_format_tool_result_truncation() + print("\n✅ All text rendering tests passed!") From 00115a6a4705acdeb59e1fc2f349fe92e0cfc375 Mon Sep 17 00:00:00 2001 From: Max Yankov Date: Mon, 17 Nov 2025 16:39:11 -0300 Subject: [PATCH 2/2] Include chat in output formats list Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index a2703f50..ffeb609c 100644 --- a/README.md +++ b/README.md @@ -28,7 +28,7 @@ uvx claude-code-log@latest --open-browser ## Key Features -- **Multiple Output Formats**: Generate HTML, plain text, or markdown output from transcript files +- **Multiple Output Formats**: Generate HTML, plain text, markdown, or compact chat output from transcript files - **Interactive TUI (Terminal User Interface)**: Browse and manage Claude Code sessions with real-time navigation, summaries, and quick actions for HTML export and session resuming - **Project Hierarchy Processing**: Process entire `~/.claude/projects/` directory with linked index page - **Individual Session Files**: Generate separate HTML files for each session with navigation links
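
Editor's note on the truncation behaviour this patch adds: the `… +N lines` indicator comes from the `_truncate_lines` helper in `claude_code_log/text_renderer.py`. A standalone sketch (the helper is restated here outside the patch so it can be run directly) shows how a 15-line tool result is cut to 10 lines plus an indicator:

```python
def truncate_lines(text: str, max_lines: int = 10) -> str:
    """Mirror of _truncate_lines from claude_code_log/text_renderer.py."""
    lines_list = text.split("\n")
    if len(lines_list) <= max_lines:
        return text
    truncated = "\n".join(lines_list[:max_lines])
    # Report how many lines were dropped, matching the Claude Code UI style
    return f"{truncated}\n… +{len(lines_list) - max_lines} lines"


# A 15-line tool result keeps 10 lines plus one truncation indicator line
sample = "\n".join(f"Line {i} of output" for i in range(1, 16))
out = truncate_lines(sample)
print(out.splitlines()[-1])   # … +5 lines
print(len(out.splitlines()))  # 11
```

Short inputs pass through untouched, which is why `test_chat_format_tool_result_truncation` only asserts the indicator for results longer than ten lines.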