
feat: support all model providers in JSON agent configuration#2109

Draft
Unshure wants to merge 2 commits into main from agent-tasks/1064

Conversation


@Unshure Unshure commented Apr 10, 2026

Motivation

The experimental agent_config.py feature currently only accepts a simple string for the model field, which is always interpreted as a Bedrock model_id. This limits JSON-based agent configuration to a single provider, preventing use cases like agent-builder tools and the use_agent tool from working with any of the SDK's 12 model providers.

Resolves #1064

Public API Changes

The model field in agent JSON configuration now supports two formats:

# Before: string only (Bedrock model_id)
config = {"model": "us.anthropic.claude-sonnet-4-20250514-v1:0"}
agent = config_to_agent(config)

# After: string still works (backward compatible)
agent = config_to_agent(config)

# After: object format for any provider
config = {
    "model": {
        "provider": "openai",
        "model_id": "gpt-4o",
        "client_args": {"api_key": "$OPENAI_API_KEY"}
    }
}
agent = config_to_agent(config)

Environment variable references ($VAR or ${VAR}) in model config values are resolved automatically before provider instantiation, enabling secure configuration without embedding secrets.
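A minimal sketch of how this resolution could work. The regex is the `_ENV_VAR_PATTERN` from this PR; the function name `resolve_env_vars` and recursion details are hypothetical, shown only to illustrate the behavior:

```python
import os
import re

# Only full-string references like "$VAR" or "${VAR}" match (anchored pattern,
# no inline interpolation), so "prefix-$VAR" passes through unchanged.
_ENV_VAR_PATTERN = re.compile(r"^\$\{([^}]+)\}$|^\$([A-Za-z_][A-Za-z0-9_]*)$")


def resolve_env_vars(value):
    """Recursively replace full-string $VAR / ${VAR} references with os.environ values."""
    if isinstance(value, str):
        match = _ENV_VAR_PATTERN.match(value)
        if match:
            name = match.group(1) or match.group(2)
            # Leave unresolved references as-is rather than failing
            return os.environ.get(name, value)
        return value
    if isinstance(value, dict):
        return {k: resolve_env_vars(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_env_vars(v) for v in value]
    return value
```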

All 12 SDK providers are supported: bedrock, anthropic, openai, gemini, ollama, litellm, mistral, llamaapi, llamacpp, sagemaker, writer, openai_responses. Each provider's constructor parameters are correctly routed — for example, boto_client_config dicts are converted to BotocoreConfig objects for Bedrock/SageMaker, Ollama's client_args maps to ollama_client_args, and Mistral's api_key is extracted as a separate parameter.
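As an illustration, object-format configs for two providers with differing parameter shapes. The `provider`, `model_id`, `client_args`, and `boto_client_config` keys come from this PR; the `read_timeout`/`retries` values shown are standard `botocore.config.Config` options, and the exact keys each provider accepts depend on its constructor:

```python
# Bedrock: the boto_client_config dict is converted to a BotocoreConfig object
bedrock_config = {
    "model": {
        "provider": "bedrock",
        "model_id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
        "boto_client_config": {"read_timeout": 900, "retries": {"max_attempts": 3}},
    }
}

# Ollama: client_args is routed to the ollama_client_args constructor parameter
ollama_config = {
    "model": {
        "provider": "ollama",
        "model_id": "llama3",
        "client_args": {"timeout": 120},
    }
}
```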

All provider imports are lazy to avoid requiring optional dependencies that aren't installed. Non-serializable parameters (boto_session, client, gemini_tools) cannot be specified from JSON and are documented as such.

Use Cases

  • Agent-builder tools: Create agents from JSON templates with any model provider, not just Bedrock
  • Multi-provider configs: Store agent configurations in files that use OpenAI, Anthropic, or local models
  • Secure credential handling: Reference API keys via environment variables in JSON configs rather than hardcoding them


codecov bot commented Apr 10, 2026

Codecov Report

❌ Patch coverage is 93.45794% with 7 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
src/strands/models/sagemaker.py 76.92% 2 Missing and 1 partial ⚠️
src/strands/models/llamacpp.py 80.00% 0 Missing and 2 partials ⚠️
src/strands/models/mistral.py 83.33% 0 Missing and 2 partials ⚠️

📢 Thoughts on this report? Let us know!

_ENV_VAR_PATTERN = re.compile(r"^\$\{([^}]+)\}$|^\$([A-Za-z_][A-Za-z0-9_]*)$")

# Provider name to factory function mapping — populated at module level, lazy imports at call time
PROVIDER_MAP: dict[str, str] = {

Issue: PROVIDER_MAP maps provider names to string function names, which are then resolved via globals()[factory_name] on line 438. This pattern is fragile — it breaks silently if a function is renamed or removed, and bypasses static analysis tools and IDE navigation.

Suggestion: Map directly to function references instead:

PROVIDER_MAP: dict[str, Callable[[dict[str, Any]], Any]] = {
    "bedrock": _create_bedrock_model,
    "anthropic": _create_anthropic_model,
    ...
}

This requires moving PROVIDER_MAP below the factory function definitions, but it gives you type safety, IDE go-to-definition, and eliminates the globals() lookup.


This has been addressed in the latest revision. PROVIDER_MAP now maps to class name strings resolved via getattr(models, class_name), which leverages the existing lazy __getattr__ pattern in models/__init__.py. The factory logic has been moved to from_dict classmethods on each model class. This is a clean approach that works well with the lazy-loading architecture.

return factory_fn(config)


def config_to_agent(config: str | dict[str, Any], **kwargs: dict[str, Any]) -> Any:

Issue: The type annotation for **kwargs is dict[str, Any], but **kwargs already captures keyword arguments as a dict — the annotation should be the value type, not the full dict type.

Suggestion:

def config_to_agent(config: str | dict[str, Any], **kwargs: Any) -> Any:


Fixed in the latest revision — **kwargs: Any is now used correctly.

import jsonschema
from jsonschema import ValidationError

logger = logging.getLogger(__name__)

Issue: logging is imported and logger is defined but never used anywhere in the module.

Suggestion: Either add logging calls (e.g., in _create_model_from_config for debugging provider instantiation) or remove the import and logger definition. Per the style guide, a useful log would be:

logger.debug("provider=<%s> | creating model from config", provider)


Addressed in the latest revision — the unused logging import has been removed.

if client_args is not None:
    kwargs["client_args"] = client_args
kwargs.update(config)
return AnthropicModel(**kwargs)

Issue: 7 of the 12 factory functions (_create_anthropic_model, _create_openai_model, _create_gemini_model, _create_litellm_model, _create_llamaapi_model, _create_writer_model, _create_openai_responses_model) share identical logic: pop client_args, build kwargs, update with remaining config, call constructor.

Suggestion: Extract a shared helper to eliminate the duplication:

def _create_client_args_model(model_cls: type, config: dict[str, Any]) -> Any:
    """Common factory for providers that accept client_args + **model_config."""
    client_args = config.pop("client_args", None)
    kwargs: dict[str, Any] = {}
    if client_args is not None:
        kwargs["client_args"] = client_args
    kwargs.update(config)
    return model_cls(**kwargs)

Then each common provider becomes a one-liner with just the import and call. The unique providers (bedrock, ollama, mistral, llamacpp, sagemaker) keep their custom logic.


Addressed in the latest revision. The common client_args pattern is now a default Model.from_dict classmethod on the base class, which 7 providers inherit. Only providers with non-standard constructors (Bedrock, Ollama, Mistral, LlamaCpp, SageMaker) override it. This eliminates the duplication completely.
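A sketch of what such a base-class `from_dict` could look like. `FakeAnthropicModel` is a stand-in for illustration, not the SDK class; the defensive copy and pop/update flow mirror the pattern described above:

```python
from typing import Any


class Model:
    """Base class whose from_dict covers the common
    client_args + **model_config constructor shape."""

    @classmethod
    def from_dict(cls, config: dict[str, Any]) -> "Model":
        config = dict(config)  # defensive copy; don't mutate the caller's dict
        client_args = config.pop("client_args", None)
        kwargs: dict[str, Any] = {}
        if client_args is not None:
            kwargs["client_args"] = client_args
        kwargs.update(config)  # remaining keys pass through to the constructor
        return cls(**kwargs)


class FakeAnthropicModel(Model):
    """Stand-in provider with the common constructor shape."""

    def __init__(self, client_args=None, **model_config):
        self.client_args = client_args
        self.model_config = model_config
```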

if "boto_client_config" in config:
    raw = config.pop("boto_client_config")
    kwargs["boto_client_config"] = BotocoreConfig(**raw) if isinstance(raw, dict) else raw
return SageMakerAIModel(**kwargs)

Issue: The SageMaker factory silently drops any extra config keys after popping endpoint_config, payload_config, and boto_client_config. For example, if a user passes model_id (a common pattern for other providers), it's silently ignored. All other factories pass remaining config through via kwargs.update(config).

Suggestion: Either pass remaining config to the constructor (if SageMaker accepts **kwargs) or explicitly warn/raise on unexpected keys so users get feedback when their config is wrong:

if config:
    logger.warning("provider=<sagemaker> | ignoring unsupported config keys: %s", list(config.keys()))


Addressed in the latest revision. SageMakerAIModel.from_dict now raises ValueError with a clear message when unexpected config keys are present.

_VALIDATOR = jsonschema.Draft7Validator(AGENT_CONFIG_SCHEMA)

# Pattern for matching environment variable references
_ENV_VAR_PATTERN = re.compile(r"^\$\{([^}]+)\}$|^\$([A-Za-z_][A-Za-z0-9_]*)$")

Issue: The regex ^\$\{([^}]+)\}$|^\$([A-Za-z_][A-Za-z0-9_]*)$ only matches full-string env var references (anchored with ^ and $). This means "prefix-$VAR-suffix" won't be resolved, which may surprise users coming from shell-like environments.

Suggestion: This is a reasonable design choice for security and simplicity, but it should be explicitly documented — either in the module docstring or as a comment near the pattern. Something like:

# Only full-string env var references are resolved (no inline interpolation).
# "prefix-$VAR" is NOT resolved; use the object format to construct values programmatically.


Addressed in the latest revision — there is now an inline comment above _ENV_VAR_PATTERN documenting this behavior.

@github-actions

Assessment: Comment

Solid implementation that extends the experimental JSON agent configuration to support all 12 SDK model providers with backward compatibility. The main areas for improvement are reducing code duplication in the provider factory functions and replacing the fragile globals() dispatch pattern.

Review Categories
  • Code Duplication: 7 of 12 factory functions are identical — a shared helper would significantly reduce boilerplate and make adding future providers trivial.
  • Dispatch Pattern: PROVIDER_MAP stores string names resolved via globals(), which is fragile and opaque to static analysis. Direct function references are more idiomatic and safer.
  • Silent Failures: The SageMaker factory silently discards unrecognized config keys, unlike all other factories that pass them through. This inconsistency could silently swallow user config errors.
  • Unused Code: logging import and logger are defined but never called — should either add debug logging for provider instantiation or remove.
  • Type Annotation: **kwargs: dict[str, Any] on config_to_agent is incorrect — should be **kwargs: Any.
  • API Review: This PR modifies experimental public API surface (new object model format). Consider adding the needs-api-review label per the API bar raising process.
  • Documentation PR: The PR adds a new public configuration format (object model). Since this is within the experimental module, a docs PR is suggested but non-blocking. Consider adding a "Documentation PR" section to the PR description.

Good backward compatibility preservation, thorough test coverage, and clean lazy-import pattern across all providers.

- Fix **kwargs type annotation (dict[str, Any] -> Any) in config_to_agent
- Add defensive copy in all from_dict methods to avoid mutating caller's dict
- Raise ValueError on unsupported config keys in SageMaker from_dict
- Improve _create_model_from_dict return type to Model
- Document env var pattern full-string-only matching
call_kwargs = mock_init.call_args[1]
assert isinstance(call_kwargs["boto_client_config"], BotocoreConfig)

def test_default_from_dict_client_args_pattern(self):

Issue: Dead code — the first with patch.object(BedrockModel, "__init__", ...) block on line 597 exits its context before anything useful happens, and mock_init is immediately reassigned. This appears to be a leftover from earlier refactoring.

Suggestion: Remove lines 595–598 (the BedrockModel import and its unused patch block).

kwargs["boto_client_config"] = BotocoreConfig(**raw) if isinstance(raw, dict) else raw
if config:
    unexpected = ", ".join(sorted(config.keys()))
    raise ValueError(f"Unsupported SageMaker config keys: {unexpected}")

Issue: The new ValueError for unexpected SageMaker config keys is a good addition, but there's no test covering this error path.

Suggestion: Add a test (in the appropriate model test file) like:

def test_sagemaker_from_dict_rejects_unexpected_keys(self):
    with pytest.raises(ValueError, match="Unsupported SageMaker config keys"):
        SageMakerAIModel.from_dict({
            "endpoint_config": {},
            "payload_config": {},
            "model_id": "unexpected",
        })

# =============================================================================


class TestModelFromConfig:

Issue: TestModelFromConfig (lines 446–625) tests from_dict methods on model classes (BedrockModel, OllamaModel, MistralModel, LlamaCppModel, SageMakerAIModel, AnthropicModel) but lives in test_agent_config.py. Per AGENTS.md, unit tests should mirror the src/strands/ structure — these tests exercise model class methods, not agent_config functionality.

Suggestion: Move these tests to the corresponding model test files (tests/strands/models/test_bedrock.py, tests/strands/models/test_ollama.py, etc.). The TestCreateModelFromConfig class (lines 380–438) correctly belongs here since it tests the dispatch logic in agent_config.py.

@github-actions

Assessment: Comment

The rework since the last review is excellent — all 6 prior issues have been addressed. Moving factory logic to from_dict classmethods on each model class is the right design: it keeps provider-specific knowledge co-located with the provider, the base Model.from_dict eliminates duplication for the 7 common providers, and agent_config.py is now a clean thin orchestration layer.

Review Details
  • Previous review items: All 6 addressed — factory duplication eliminated via base class from_dict, unused logging removed, **kwargs type fixed, env var pattern documented, SageMaker validates unexpected keys, dispatch uses established lazy __getattr__ pattern.
  • Test placement: TestModelFromConfig tests from_dict methods on individual model classes but lives in test_agent_config.py — these should be moved to their respective model test files per the project's test mirroring convention.
  • Missing test: SageMakerAIModel.from_dict now raises on unexpected keys, but this error path lacks a test.
  • Dead code: Unused BedrockModel import and patch block in test_default_from_dict_client_args_pattern.

Clean architecture, solid backward compatibility, and thorough test coverage overall.



Successfully merging this pull request may close these issues.

[FEATURE] Support Every model provider in Agent Json feature
