fix: update @modelcontextprotocol/sdk to version 1.25.2 in package.json and package-lock.json #385
Merged
Conversation
Contributor (Author): FYI @jirispilka
MQ37 added a commit that referenced this pull request on Jan 8, 2026:
commit c1c415f
Author: Apify Release Bot <[email protected]>
Date:   Thu Jan 8 09:53:59 2026 +0000

    chore(release): Update changelog, package.json, manifest.json and server.json versions [skip ci]

commit 31c3bdd
Author: Jakub Kopecký <[email protected]>
Date:   Thu Jan 8 10:53:06 2026 +0100

    fix: update @modelcontextprotocol/sdk to version 1.25.2 in package.json and package-lock.json (#385)

commit d1f7dc7
Author: Jakub Kopecký <[email protected]>
Date:   Wed Jan 7 14:03:21 2026 +0100

    fix: update @modelcontextprotocol/sdk to version 1.25.1 in package.json and package-lock.json (#384)

    * fix: update @modelcontextprotocol/sdk to version 1.25.1 in package.json and package-lock.json
    * fix: remove pollInterval from task creation in tool call request

commit 4270b02
Author: Jakub Kopecký <[email protected]>
Date:   Wed Jan 7 12:10:14 2026 +0100

    feat(evals): add llm driven workflow evals with llm as a judge (#383)

    * feat(evals): add llm driven workflow evals with llm as a judge

      Add a workflow evaluation system for testing AI agents in multi-turn
      conversations using Apify MCP tools, with LLM-based evaluation.
      Core components:
      - Multi-turn conversation executor with dynamic tool discovery
      - LLM judge for evaluating agent performance against requirements
      - Isolated MCP server per test (prevents state contamination)
      - OpenRouter integration (agent + judge models)
      - Configurable tool timeout (default: 60s, MCP SDK integration)

      Architecture:
      - MCP server spawned fresh per test → test isolation
      - Tools refreshed after each turn → supports dynamic registration (add-actor)
      - Strict pass/fail → all tests must pass for CI success
      - Raw error propagation → LLM receives MCP SDK errors unchanged

      CLI usage:
        npm run evals:workflow
        npm run evals:workflow -- --tool-timeout 300 --category search

      CLI options:
        --tool-timeout <seconds>  Tool call timeout (default: 60)
        --agent-model <model>     Agent model (default: claude-haiku-4.5)
        --judge-model <model>     Judge model (default: grok-4.1-fast)
        --category <name>         Filter by category
        --id <id>                 Run specific test
        --verbose                 Show full conversations

      Environment:
        APIFY_TOKEN         Required for MCP server
        OPENROUTER_API_KEY  Required for LLM calls

      This enables systematic testing of MCP tools, agent tool-calling
      behavior, and automated quality evaluation without manual verification.

    * refactor(evals): extract shared utilities and unify test case format

      This commit refactors the evaluation system to eliminate code
      duplication and standardize test case formats across both the tool
      selection and workflow evaluation systems.
      - types.ts: Unified type definitions for test cases and tools
      - config.ts: Shared OpenRouter configuration and environment validation
      - openai-tools.ts: Consolidated tool transformation utilities
      - test-case-loader.ts: Unified test case loading and filtering functions
      - Standardized on 'query' (previously 'prompt' in workflows)
      - Standardized on 'reference' (previously 'requirements' in workflows)
      - Added version tracking to workflows/test-cases.json
      - Maintains backwards compatibility through type exports

      Removed 7 duplicate functions across the codebase:
      - Test case loading (evaluation-utils.ts vs test-cases-loader.ts)
      - Test case filtering (filterById, filterByCategory, filterTestCases)
      - OpenAI tool transformation (transformToolsToOpenAIFormat vs mcpToolsToOpenAiTools)
      - OpenRouter configuration (OPENROUTER_CONFIG duplicated)
      - Environment validation (validateEnvVars duplicated)

      - OPENROUTER_BASE_URL is now optional (defaults to https://openrouter.ai/api/v1)
      - Created Phoenix-specific validation (validatePhoenixEnvVars)
      - Separated concerns between shared and system-specific config
      - Updated 11 existing files to use shared utilities
      - Deleted evals/workflows/convert-mcp-tools.ts (replaced by shared modules)
      - All imports updated to reference shared modules
      - Reduced config code by ~37%
      - Eliminated 100% of duplicate functions
      - Improved maintainability and consistency
      - No breaking changes to external APIs
      - TypeScript compilation: ✓
      - Project build: ✓
      - All imports verified: ✓

    * feat(evals): add parallel execution and fix linting for workflows

      - Add --concurrency/-c flag to run workflow evals in parallel (default: 4)
      - Add p-limit dependency for concurrency control
      - Enable ESLint for evals/workflows/ and evals/shared/ directories
      - Fix all linting issues (117 errors):
        - Convert interfaces to types per project convention
        - Fix import ordering with simple-import-sort
        - Remove trailing spaces
        - Fix comma-dangle, arrow-parens, operator-linebreak
        - Prefer node: protocol for built-in imports
        - Fix nested ternary in output-formatter.ts
      - Add logWithPrefix() helper for prefixed live output
      - Extract runSingleTest() function from main evaluation loop
      - Remove empty line after test completion in output

      Breaking changes: None (all changes backward compatible)

      Usage:
        npm run evals:workflow -- -c 10   # Run 10 tests in parallel
        npm run evals:workflow -- -c 1    # Sequential mode

    * feat(evals): use structured output for judge LLM and fix test filtering

      - Refactor judge to use OpenAI's structured output (JSON schema) for robust evaluation
      - Replace fragile text parsing with guaranteed JSON validation
      - Fix test case filtering to support wildcard patterns (--category) and regex (--id)
      - Add responseFormat parameter to LLM client for structured outputs
      - Update judge prompt to remove manual format instructions
      - Add test case for weather MCP Actor

    * feat(evals): MCP instructions, test tracking, and expanded test coverage

commit 6dd3b10
Author: Apify Release Bot <[email protected]>
Date:   Tue Jan 6 14:28:55 2026 +0000

    chore(release): Update changelog, package.json, manifest.json and server.json versions [skip ci]

commit eaeb57b
Author: Jiří Spilka <[email protected]>
Date:   Tue Jan 6 15:27:51 2026 +0100

    fix: Improve README for clarity and MCP clients info at the top (#382)
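The commit log above lists the eval CLI's flags and their defaults. A minimal sketch of how those flags could be parsed with Node's built-in `node:util` parser — the flag names and defaults come from the commit message, but the function name and structure here are hypothetical, not the repository's actual code:

```typescript
import { parseArgs } from "node:util";

// Hypothetical parser for the workflow-eval flags listed in the commit
// message; returns the tool timeout converted to milliseconds.
function parseEvalArgs(argv: string[]) {
  const { values } = parseArgs({
    args: argv,
    options: {
      "tool-timeout": { type: "string", default: "60" },
      "agent-model": { type: "string", default: "claude-haiku-4.5" },
      "judge-model": { type: "string", default: "grok-4.1-fast" },
      category: { type: "string" },
      id: { type: "string" },
      verbose: { type: "boolean", default: false },
    },
  });
  return {
    toolTimeoutMs: Number(values["tool-timeout"]) * 1000,
    agentModel: values["agent-model"],
    judgeModel: values["judge-model"],
    category: values.category,
    id: values.id,
    verbose: values.verbose,
  };
}

console.log(parseEvalArgs(["--tool-timeout", "300", "--category", "search"]));
```

Because `--tool-timeout` arrives as a string, converting once at the boundary keeps the rest of the code working in milliseconds, which is what per-request timeout options typically expect.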
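The parallel-execution commit adds a `--concurrency` flag backed by the p-limit package. This is a self-contained sketch of the pattern that library provides (not p-limit's actual code): at most `max` tasks run at once, and the rest wait in a queue.

```typescript
// Minimal limiter in the style of p-limit: wrap each async task so that
// no more than `max` run concurrently. Illustrative only.
function createLimit(max: number) {
  let active = 0;
  const queue: Array<() => void> = [];
  return async function limit<T>(fn: () => Promise<T>): Promise<T> {
    if (active >= max) {
      // Park this task until a running one finishes and wakes it.
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await fn();
    } finally {
      active--;
      queue.shift()?.(); // wake the next queued task, if any
    }
  };
}

// Mirrors `npm run evals:workflow -- -c 4`: run tests with at most
// four in flight. `runSingleTest` is the per-test runner the commit
// message says was extracted from the main loop (signature assumed).
async function runAll(
  tests: string[],
  runSingleTest: (id: string) => Promise<boolean>,
  concurrency = 4,
) {
  const limit = createLimit(concurrency);
  return Promise.all(tests.map((t) => limit(() => runSingleTest(t))));
}
```

With `-c 1` this degrades to sequential execution, which matches the commit message's description of the flag.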
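The structured-output commit replaces text parsing of the judge's reply with an OpenAI-style JSON-schema `response_format`. A hedged sketch of what such a payload and verdict type might look like — the field names (`pass`, `reasoning`) and schema name are illustrative, not the repository's actual schema:

```typescript
// Illustrative verdict shape for the LLM judge.
type JudgeVerdict = {
  pass: boolean;
  reasoning: string;
};

// OpenAI-style structured-output payload: the provider constrains the
// model's reply to this JSON schema, so the response is guaranteed to
// parse. Hypothetical field names; the repo's real schema may differ.
const judgeResponseFormat = {
  type: "json_schema",
  json_schema: {
    name: "judge_verdict",
    strict: true,
    schema: {
      type: "object",
      properties: {
        pass: { type: "boolean" },
        reasoning: { type: "string" },
      },
      required: ["pass", "reasoning"],
      additionalProperties: false,
    },
  },
} as const;

// With the schema enforced server-side, the client can parse directly
// instead of scraping a free-text verdict out of the reply.
function parseVerdict(raw: string): JudgeVerdict {
  return JSON.parse(raw) as JudgeVerdict;
}
```

This is why the commit could also drop the manual format instructions from the judge prompt: the output contract moves from prose into the schema.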