feat: add kwargs support for completion calls
#13
Summary

This PR adds support for passing additional kwargs through the entire RLM stack to the underlying LM client APIs, enabling fine-grained control over completion parameters such as `max_tokens`, `temperature`, `reasoning_effort`, and provider-specific options.

Key Changes
1. Enhanced Client Interface (`rlm/clients/`)

- Updated `BaseLM` abstract methods to accept `model` and `**kwargs` parameters
- Updated `OpenAIClient.completion()` and `OpenAIClient.acompletion()` to accept and forward kwargs to the OpenAI API
- Updated `_track_cost()` to safely handle missing usage data with `getattr()` fallbacks

2. Core Communication Updates (
`rlm/core/`)

- Updated the `LMRequest` dataclass with an optional `kwargs` field for passing parameters through the socket/HTTP protocol
- Updated `send_lm_request_batched()` to accept and forward kwargs
- Updated `LMHandler` and `LMRequestHandler` to unpack and pass kwargs to client completion calls
- Updated `RLM.completion()` to accept kwargs and propagate them through:
  - `_run_iteration()` for main completions
  - `_default_answer()` for fallback completions
  - `_fallback_answer()` for the max-depth fallback

3. Environment Integration (
`rlm/environments/`)

- Updated `_llm_query()` and `_llm_query_batched()` to accept and forward kwargs via the socket
- Updated `llm_query()` and `llm_query_batched()` to include kwargs in HTTP payloads

4. Testing & Examples
- Added `tests/clients/_openai.py` with basic tests demonstrating kwargs forwarding (e.g. `reasoning_effort="high"`)
- Updated `MockLM` implementations in tests and examples to match the new `BaseLM` signature

Backward Compatibility
All changes are fully backward compatible:
- `**kwargs` parameters default to empty, so existing code works unchanged
- The `model` parameter formalizes an existing pattern already used by all client implementations

Example Usage
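A minimal, self-contained sketch of the kwargs flow described in Key Changes: a request carries an optional `kwargs` dict, and a handler splats it into the client completion call. The `MockLM` and `handle()` helpers below are illustrative stand-ins for the real client and `LMHandler`, not the actual RLM implementation:

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class LMRequest:
    """Sketch of the request dataclass with the new optional kwargs field."""
    prompt: str
    model: Optional[str] = None
    kwargs: dict[str, Any] = field(default_factory=dict)


class MockLM:
    """Stand-in for a client like OpenAIClient (illustrative, not the real API)."""

    def completion(self, prompt: str, model: Optional[str] = None, **kwargs: Any) -> dict:
        # A real client would forward kwargs to the provider API;
        # the mock echoes them back so forwarding can be verified.
        return {"prompt": prompt, "model": model, "kwargs": kwargs}


def handle(request: LMRequest, client: MockLM) -> dict:
    # LMHandler-style unpacking: kwargs travel inside the request
    # and are splatted into the client call.
    return client.completion(request.prompt, model=request.model, **request.kwargs)


req = LMRequest(
    "Summarize the report.",
    model="gpt-4o",
    kwargs={"max_tokens": 64, "reasoning_effort": "high"},
)
print(handle(req, MockLM()))
```

Because `kwargs` defaults to an empty dict, a request built without it behaves exactly as before, which is the backward-compatibility guarantee described below.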
Open questions