
Conversation

MQ37 (Contributor) commented on Jan 8, 2026

No description provided.

github-actions bot added the t-ai label (Issues owned by the AI team) on Jan 8, 2026
MQ37 merged commit 31c3bdd into master on Jan 8, 2026
2 checks passed
MQ37 deleted the update-sdk2 branch on January 8, 2026 at 09:53
MQ37 (Contributor, Author) commented on Jan 8, 2026

FYI @jirispilka

MQ37 added a commit that referenced this pull request Jan 8, 2026
commit c1c415f
Author: Apify Release Bot <[email protected]>
Date:   Thu Jan 8 09:53:59 2026 +0000

    chore(release): Update changelog, package.json, manifest.json and server.json versions [skip ci]

commit 31c3bdd
Author: Jakub Kopecký <[email protected]>
Date:   Thu Jan 8 10:53:06 2026 +0100

    fix: update @modelcontextprotocol/sdk to version 1.25.2 in package.json and package-lock.json (#385)

commit d1f7dc7
Author: Jakub Kopecký <[email protected]>
Date:   Wed Jan 7 14:03:21 2026 +0100

    fix: update @modelcontextprotocol/sdk to version 1.25.1 in package.json and package-lock.json (#384)

    * fix: update @modelcontextprotocol/sdk to version 1.25.1 in package.json and package-lock.json

    * fix: remove pollInterval from task creation in tool call request

commit 4270b02
Author: Jakub Kopecký <[email protected]>
Date:   Wed Jan 7 12:10:14 2026 +0100

    feat(evals): add llm driven workflow evals with llm as a judge (#383)

    * feat(evals): add llm driven workflow evals with llm as a judge

    Add workflow evaluation system for testing AI agents in multi-turn
    conversations using Apify MCP tools, with LLM-based evaluation.

    Core Components:
    - Multi-turn conversation executor with dynamic tool discovery
    - LLM judge for evaluating agent performance against requirements
    - Isolated MCP server per test (prevents state contamination)
    - OpenRouter integration (agent + judge models)
    - Configurable tool timeout (default: 60s, MCP SDK integration)

    Architecture:
    • MCP server spawned fresh per test → test isolation
    • Tools refreshed after each turn → supports dynamic registration (add-actor)
    • Strict pass/fail → all tests must pass for CI success
    • Raw error propagation → LLM receives MCP SDK errors unchanged

    CLI Usage:
    npm run evals:workflow
    npm run evals:workflow -- --tool-timeout 300 --category search

    CLI Options:
    --tool-timeout <seconds>  Tool call timeout (default: 60)
    --agent-model <model>     Agent model (default: claude-haiku-4.5)
    --judge-model <model>     Judge model (default: grok-4.1-fast)
    --category <name>         Filter by category
    --id <id>                 Run specific test
    --verbose                 Show full conversations

    Environment:
    APIFY_TOKEN - Required for MCP server
    OPENROUTER_API_KEY - Required for LLM calls

    This enables systematic testing of MCP tools, agent tool-calling behavior,
    and automated quality evaluation without manual verification.
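To make the architecture above concrete, here is a minimal sketch of a per-test executor built on the MCP TypeScript SDK. `runTestCase`, `askAgent`, `MAX_TURNS`, and the server entry point are hypothetical names, not the actual module layout under evals/workflows/:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const MAX_TURNS = 10; // assumed turn cap

type AgentAction =
  | { type: "final"; answer: string }
  | { type: "tool"; tool: string; args: Record<string, unknown> };

// Hypothetical OpenRouter-backed agent call; returns either a final answer
// or the next tool invocation chosen by the model.
declare function askAgent(history: string[], tools: unknown[]): Promise<AgentAction>;

// One fresh MCP server per test case -> no state leaks between tests.
async function runTestCase(query: string, toolTimeoutMs = 60_000): Promise<string[]> {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/stdio.js"], // assumed server entry point
    env: { APIFY_TOKEN: process.env.APIFY_TOKEN ?? "" },
  });
  const client = new Client({ name: "workflow-evals", version: "0.0.0" });
  await client.connect(transport);

  const conversation: string[] = [query];
  for (let turn = 0; turn < MAX_TURNS; turn++) {
    // Re-list tools every turn so dynamically registered tools
    // (e.g. added via add-actor) become visible to the agent.
    const { tools } = await client.listTools();
    const action = await askAgent(conversation, tools);
    if (action.type === "final") {
      conversation.push(action.answer);
      break;
    }
    try {
      const result = await client.callTool(
        { name: action.tool, arguments: action.args },
        undefined,
        { timeout: toolTimeoutMs }, // per-call timeout via MCP SDK request options
      );
      conversation.push(JSON.stringify(result));
    } catch (err) {
      // Raw error propagation: the agent sees the MCP SDK error unchanged.
      conversation.push(String(err));
    }
  }
  await client.close();
  return conversation;
}
```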

    * refactor(evals): extract shared utilities and unify test case format

    This commit refactors the evaluation system to eliminate code duplication
    and standardize test case formats across both tool selection and workflow
    evaluation systems.
    New shared modules:
    - types.ts: Unified type definitions for test cases and tools
    - config.ts: Shared OpenRouter configuration and environment validation
    - openai-tools.ts: Consolidated tool transformation utilities
    - test-case-loader.ts: Unified test case loading and filtering functions

    Test case format:
    - Standardized on 'query' (previously 'prompt' in workflows)
    - Standardized on 'reference' (previously 'requirements' in workflows)
    - Added version tracking to workflows/test-cases.json
    - Maintains backwards compatibility through type exports

    Removed 7 duplicate functions across the codebase:
    - Test case loading (evaluation-utils.ts vs test-cases-loader.ts)
    - Test case filtering (filterById, filterByCategory, filterTestCases)
    - OpenAI tool transformation (transformToolsToOpenAIFormat vs mcpToolsToOpenAiTools)
    - OpenRouter configuration (OPENROUTER_CONFIG duplicated)
    - Environment validation (validateEnvVars duplicated)

    Configuration:
    - OPENROUTER_BASE_URL is now optional (defaults to https://openrouter.ai/api/v1)
    - Created Phoenix-specific validation (validatePhoenixEnvVars)
    - Separated concerns between shared and system-specific config

    Migration:
    - Updated 11 existing files to use shared utilities
    - Deleted evals/workflows/convert-mcp-tools.ts (replaced by the shared module)
    - All imports updated to reference shared modules

    Results:
    - Reduced config code by ~37%
    - Eliminated 100% of duplicate functions
    - Improved maintainability and consistency
    - No breaking changes to external APIs
    - TypeScript compilation: ✓
    - Project build: ✓
    - All imports verified: ✓
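A rough sketch of what the shared config could look like. The names OPENROUTER_CONFIG, validateEnvVars, and validatePhoenixEnvVars are taken from the commit text; the exact shapes and the PHOENIX_* variable names are assumptions:

```typescript
// evals/shared/config.ts (sketch; shapes are assumptions)
export const OPENROUTER_CONFIG = {
  // OPENROUTER_BASE_URL is optional and falls back to the public endpoint.
  baseUrl: process.env.OPENROUTER_BASE_URL ?? "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
};

// Shared validation used by both eval systems.
export function validateEnvVars(names: string[]): void {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}

// Phoenix-specific validation builds on the shared helper.
export function validatePhoenixEnvVars(): void {
  validateEnvVars(["PHOENIX_API_KEY", "PHOENIX_BASE_URL"]); // hypothetical variable names
}
```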

    * feat(evals): add parallel execution and fix linting for workflows

    - Add --concurrency/-c flag to run workflow evals in parallel (default: 4)
    - Add p-limit dependency for concurrency control
    - Enable ESLint for evals/workflows/ and evals/shared/ directories
    - Fix all linting issues (117 errors):
      - Convert interfaces to types per project convention
      - Fix import ordering with simple-import-sort
      - Remove trailing spaces
      - Fix comma-dangle, arrow-parens, operator-linebreak
      - Prefer node: protocol for built-in imports
      - Fix nested ternary in output-formatter.ts
    - Add logWithPrefix() helper for prefixed live output
    - Extract runSingleTest() function from main evaluation loop
    - Remove empty line after test completion in output

    Breaking changes: None (all changes backward compatible)

    Usage:
      npm run evals:workflow -- -c 10  # Run 10 tests in parallel
      npm run evals:workflow -- -c 1   # Sequential mode
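With p-limit, the concurrency cap might be applied roughly like this. `runSingleTest` is the extracted function named above, but its signature here is assumed:

```typescript
import pLimit from "p-limit";

declare const testCases: Array<{ id: string }>; // loaded test cases (shape assumed)
declare function runSingleTest(tc: { id: string }): Promise<boolean>; // extracted per-test runner

async function runAll(concurrency = 4): Promise<boolean[]> {
  // p-limit caps how many runSingleTest calls run at once (-c/--concurrency).
  const limit = pLimit(concurrency);
  return Promise.all(testCases.map((tc) => limit(() => runSingleTest(tc))));
}
```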

    * feat(evals): use structured output for judge LLM and fix test filtering

    - Refactor judge to use OpenAI's structured output (JSON schema) for robust evaluation
    - Replace fragile text parsing with guaranteed JSON validation
    - Fix test case filtering to support wildcard patterns (--category) and regex (--id)
    - Add responseFormat parameter to LLM client for structured outputs
    - Update judge prompt to remove manual format instructions
    - Add test case for weather MCP Actor
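A sketch of what the structured-output judge call could look like, with the OpenAI client pointed at OpenRouter as described in the commit. The verdict schema fields and the OpenRouter model id are assumptions:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

// The JSON schema forces the judge to return a machine-checkable verdict
// instead of free text that must be parsed. Field names are assumptions.
const verdictSchema = {
  type: "object",
  properties: {
    pass: { type: "boolean" },
    reasoning: { type: "string" },
  },
  required: ["pass", "reasoning"],
  additionalProperties: false,
};

async function judge(conversation: string, reference: string) {
  const completion = await client.chat.completions.create({
    model: "x-ai/grok-4.1-fast", // assumed OpenRouter id for the default judge model
    messages: [
      { role: "system", content: "Judge whether the agent met the requirements." },
      { role: "user", content: `Reference:\n${reference}\n\nConversation:\n${conversation}` },
    ],
    response_format: {
      type: "json_schema",
      json_schema: { name: "verdict", strict: true, schema: verdictSchema },
    },
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```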

    * feat(evals): MCP instructions, test tracking, and expanded test coverage

commit 6dd3b10
Author: Apify Release Bot <[email protected]>
Date:   Tue Jan 6 14:28:55 2026 +0000

    chore(release): Update changelog, package.json, manifest.json and server.json versions [skip ci]

commit eaeb57b
Author: Jiří Spilka <[email protected]>
Date:   Tue Jan 6 15:27:51 2026 +0100

    fix: Improve README for clarity and MCP clients info at the top (#382)
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
