
add MockLLMClient for testing without API credentials #11

Merged

falloficarus22 merged 2 commits into main from mock-llm on Feb 28, 2026

Conversation

@falloficarus22 (Owner)

  • The mock client allows running tests without needing real OpenAI API keys
  • Supports configurable responses, latency simulation, and call-history tracking
  • Can be enabled with --features mock-llm
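As an illustration of the features listed above, here is a minimal synchronous sketch of such a mock client. Field and method names (`responses`, `set_latency`, `send_message`, `call_history`) are assumptions for the sake of the example, not the PR's actual API:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Illustrative sketch of a mock LLM client; names and fields are
/// assumptions, not the code added in this PR.
struct MockLLMClient {
    responses: Vec<String>,    // canned responses, returned in order
    latency_ms: u64,           // simulated delay before each reply
    call_history: Vec<String>, // prompts recorded for later assertions
}

impl MockLLMClient {
    fn new(responses: Vec<String>) -> Self {
        Self { responses, latency_ms: 0, call_history: Vec::new() }
    }

    fn set_latency(&mut self, ms: u64) {
        self.latency_ms = ms;
    }

    fn send_message(&mut self, prompt: &str) -> String {
        // Track the call so tests can assert on what was sent.
        self.call_history.push(prompt.to_string());
        if self.latency_ms > 0 {
            sleep(Duration::from_millis(self.latency_ms));
        }
        // Cycle through the configured responses.
        let idx = (self.call_history.len() - 1) % self.responses.len();
        self.responses[idx].clone()
    }
}

fn main() {
    let mut client = MockLLMClient::new(vec!["hello".into(), "world".into()]);
    client.set_latency(5);
    assert_eq!(client.send_message("first"), "hello");
    assert_eq!(client.send_message("second"), "world");
    assert_eq!(client.call_history, vec!["first", "second"]);
    println!("mock client ok");
}
```

A test can then exercise LLM-dependent code paths deterministically, with no network access or API key.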

@falloficarus22 (Owner, Author)

@codex review

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 861e0dfdda


Comment on lines +265 to +269

    pub async fn send_message_streaming(
        &self,
        messages: &[Message],
    ) -> Result<Pin<Box<dyn Stream<Item = Result<String>> + Send>>> {
        self.record_call(messages, true);


P2: Respect streaming toggle in mock streaming API

The set_streaming(false) configuration is never enforced: send_message_streaming does not read streaming_enabled and always records is_streaming as true while returning chunked output. In tests that try to disable streaming to validate non-streaming behavior, this mock will still behave as streaming and can mask regressions in code paths that depend on the toggle.

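One way to address this finding is to read the toggle instead of hardcoding `true`, recording the actual flag value and returning non-chunked output when streaming is disabled. The following is a hedged, synchronous sketch of that idea; the struct and field names are assumptions based on the review comment, not the PR's actual code:

```rust
/// Sketch of honoring a streaming toggle in a mock client.
/// Names are illustrative, inferred from the review comment.
struct MockClient {
    streaming_enabled: bool,
    recorded_streaming: Vec<bool>, // is_streaming flag recorded per call
}

impl MockClient {
    fn send_message_streaming(&mut self, _messages: &[String]) -> Vec<String> {
        // Record the call with the actual toggle state, not a hardcoded true.
        self.recorded_streaming.push(self.streaming_enabled);
        if self.streaming_enabled {
            // Streaming on: return chunked output.
            vec!["mock ".to_string(), "reply".to_string()]
        } else {
            // Streaming off: return the full response as a single chunk.
            vec!["mock reply".to_string()]
        }
    }
}

fn main() {
    let mut c = MockClient { streaming_enabled: false, recorded_streaming: Vec::new() };
    let out = c.send_message_streaming(&[]);
    assert_eq!(out, vec!["mock reply"]);
    assert_eq!(c.recorded_streaming, vec![false]);
    println!("toggle respected");
}
```

With the flag recorded faithfully, tests that call `set_streaming(false)` can assert on the non-streaming code path instead of silently exercising the streaming one.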

Comment on lines +284 to +287

    if latency > 0 {
        // For simplicity, we don't add per-character latency in this mock
        // but the total stream could be delayed if needed
    }


P3: Apply configured latency to streaming mock responses

The streaming path reads latency_ms but the if latency > 0 block is a no-op, so set_latency(...) has no effect for streaming calls. This makes latency- or timeout-sensitive tests unreliable when they use send_message_streaming, because the mock cannot reproduce delayed streaming behavior.

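A fix would make the empty `if latency > 0` block do real work: sleep before emitting each chunk. Here is a dependency-free sketch of that idea; a real async implementation would await `tokio::time::sleep` inside the stream rather than blocking with `std::thread::sleep`, and the function name here is hypothetical:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Sketch: delay before each emitted chunk so a configured latency
/// actually affects streaming calls. An async implementation would
/// await tokio::time::sleep instead of blocking the thread.
fn stream_with_latency(chunks: &[&str], latency_ms: u64) -> Vec<String> {
    let mut out = Vec::with_capacity(chunks.len());
    for chunk in chunks {
        if latency_ms > 0 {
            sleep(Duration::from_millis(latency_ms));
        }
        out.push(chunk.to_string());
    }
    out
}

fn main() {
    let start = Instant::now();
    let out = stream_with_latency(&["a", "b", "c"], 10);
    assert_eq!(out.len(), 3);
    // Three chunks at 10 ms each: at least ~30 ms must have elapsed.
    assert!(start.elapsed() >= Duration::from_millis(30));
    println!("latency simulated");
}
```

This makes `set_latency(...)` meaningful for timeout-sensitive tests, which can then assert on elapsed time as above.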

1. streaming_enabled toggle - Now properly respected (was hardcoded to true)
2. Latency simulation - Implemented for both streaming and non-streaming modes:
   - Streaming: delays between each character chunk
   - Non-streaming: delays before returning the full response
@falloficarus22 falloficarus22 marked this pull request as ready for review February 28, 2026 05:31
@falloficarus22 falloficarus22 merged commit 747073d into main Feb 28, 2026
@falloficarus22 falloficarus22 deleted the mock-llm branch February 28, 2026 05:31