feat(agent): add MiniMax as first-class LLM provider with M2.7 default #1185
octo-patch wants to merge 2 commits into trycua:main from
Conversation
Add MiniMaxAdapter for litellm that routes `minimax/`-prefixed models to the MiniMax Cloud API (api.minimax.io/v1). Supports MiniMax-M2.5 and MiniMax-M2.5-highspeed models with a 204K context window.

- MiniMaxAdapter with temperature clamping, `MINIMAX_API_KEY` env var, streaming support, and OpenAI-compatible API routing
- Registered in litellm `custom_provider_map` alongside existing adapters
- 42 unit tests covering adapter init, model normalization, API key resolution, temperature clamping, parameter building, completion, streaming, and ComputerAgent integration
- 3 integration tests with live MiniMax API validation
- Updated example.py, pyproject.toml, and README.md
📝 Walkthrough

This PR adds MiniMax model support to the agent library through a new MiniMaxAdapter that integrates with litellm's custom provider system. It includes the adapter implementation with streaming and async support, comprehensive unit and integration tests, and documentation updates with example usage.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant ComputerAgent
    participant MiniMaxAdapter
    participant litellm
    participant MiniMaxAPI as MiniMax API
    Client->>ComputerAgent: completion(model="minimax/...", messages=[...])
    ComputerAgent->>MiniMaxAdapter: completion(model="minimax/...", messages=[...])
    MiniMaxAdapter->>MiniMaxAdapter: Normalize model name (strip minimax/ prefix)
    MiniMaxAdapter->>MiniMaxAdapter: Clamp temperature to [0, 1]
    MiniMaxAdapter->>MiniMaxAdapter: Build request params with headers and api_base
    MiniMaxAdapter->>litellm: completion(model="openai/...", messages=[...], api_base=..., api_key=...)
    litellm->>MiniMaxAPI: POST /v1/chat/completions
    MiniMaxAPI-->>litellm: Response with choices
    litellm-->>MiniMaxAdapter: ModelResponse
    MiniMaxAdapter-->>ComputerAgent: ModelResponse
    ComputerAgent-->>Client: ModelResponse
```
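The two per-request transformations shown in the diagram (model-name normalization and temperature clamping) can be sketched in isolation. This is an illustrative sketch, not the actual `MiniMaxAdapter` code; the standalone function names are hypothetical.

```python
def normalize_model(model: str) -> str:
    # Strip the "minimax/" provider prefix and re-route the call through
    # litellm's OpenAI-compatible path, as the diagram above shows.
    return "openai/" + model.removeprefix("minimax/")


def clamp_temperature(params: dict) -> dict:
    # Clamp temperature into [0, 1] before delegating to litellm.
    if "temperature" in params:
        params["temperature"] = min(1.0, max(0.0, params["temperature"]))
    return params


print(normalize_model("minimax/MiniMax-M2.5"))   # openai/MiniMax-M2.5
print(clamp_temperature({"temperature": 1.7}))   # {'temperature': 1.0}
```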
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
🧹 Nitpick comments (1)
libs/python/agent/tests/test_minimax_integration.py (1)
15-18: Keep the live MiniMax suite explicit opt-in.

Using only `MINIMAX_API_KEY` as the gate means any shell or CI job that already has the secret exported will hit the real API on a plain `pytest` run. That makes the default suite slower, network-flaky, and potentially billable.

🔧 Suggested gate

```diff
 pytestmark = pytest.mark.skipif(
-    not os.environ.get("MINIMAX_API_KEY"),
-    reason="MINIMAX_API_KEY not set",
+    not os.environ.get("MINIMAX_API_KEY")
+    or os.environ.get("RUN_LIVE_MINIMAX_TESTS") != "1",
+    reason="Set MINIMAX_API_KEY and RUN_LIVE_MINIMAX_TESTS=1 to run live MiniMax tests",
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/python/agent/tests/test_minimax_integration.py` around lines 15 - 18, Tests currently run against live MiniMax when only MINIMAX_API_KEY is present; change the pytest skip condition in the pytestmark block to require an explicit opt-in (e.g., require both MINIMAX_API_KEY and a new opt-in env var like RUN_MINIMAX or MINIMAX_INTEGRATION set to "1") so the suite only exercises the real API when both are present; update the skipif expression that defines pytestmark and the test module docstring or comment to explain the required opt-in env var.
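The suggested gate amounts to a two-condition predicate: both the secret and an explicit opt-in flag must be present. A minimal sketch (the `RUN_LIVE_MINIMAX_TESTS` variable name comes from the suggested diff):

```python
def should_run_live_tests(env: dict) -> bool:
    # Live tests run only when the API key exists AND the opt-in flag is "1".
    return bool(env.get("MINIMAX_API_KEY")) and env.get("RUN_LIVE_MINIMAX_TESTS") == "1"


print(should_run_live_tests({"MINIMAX_API_KEY": "sk-test"}))  # False
print(should_run_live_tests({"MINIMAX_API_KEY": "sk-test", "RUN_LIVE_MINIMAX_TESTS": "1"}))  # True
```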
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@libs/python/agent/agent/adapters/minimax_adapter.py`:
- Around line 50-66: _build_params currently always uses self.base_url so
per-call or agent-level api_base overrides don't propagate; update _build_params
to prefer an api_base passed in kwargs (e.g., resolved_api_base =
kwargs.get("api_base") or fallback to self.base_url) and set params["api_base"]
to that resolved value so ComputerAgent(api_base=...) and agent.run(...,
api_base=...) take effect; reference the _build_params method and ensure you
read from kwargs rather than hardcoding self.base_url.
In `@libs/python/agent/tests/test_minimax_adapter.py`:
- Around line 18-24: The tests rely on MINIMAX_API_BASE from the process
environment; to make them hermetic, update the default-path tests (e.g.,
test_default_base_url and the later block around lines 136-149) to either clear
the environment variable that exports MINIMAX_API_BASE before constructing
MiniMaxAdapter() or explicitly pass base_url=MINIMAX_API_BASE when constructing
MiniMaxAdapter; ensure you reference the MiniMaxAdapter constructor and the
MINIMAX_API_BASE constant when making the change so the adapter’s default-path
behavior is tested independent of process env.
- Around line 234-245: The test test_completion_calls_litellm should assert the
normalized model passed to the mocked completion via keyword args rather than
falling back to a truthy default; update the assertion to inspect
mock_completion.call_args[1]["model"] (or use call_kwargs =
mock_completion.call_args and assert call_kwargs[1]["model"] ==
"openai/MiniMax-M2.5") so it fails if MiniMaxAdapter.completion does not
normalize "minimax/MiniMax-M2.5" to "openai/MiniMax-M2.5"; keep the existing
mock setup and call to adapter.completion and only change the final assertion to
explicitly check the kwargs model value.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ae9bc494-4f64-4d90-85b1-0c43c3b41d2f
📒 Files selected for processing (8)
- libs/python/agent/README.md
- libs/python/agent/agent/adapters/__init__.py
- libs/python/agent/agent/adapters/minimax_adapter.py
- libs/python/agent/agent/agent.py
- libs/python/agent/example.py
- libs/python/agent/pyproject.toml
- libs/python/agent/tests/test_minimax_adapter.py
- libs/python/agent/tests/test_minimax_integration.py
```python
def _build_params(self, kwargs: dict, stream: bool = False) -> dict:
    """Build parameters for the inner litellm call."""
    model = self._normalize_model(kwargs.get("model", ""))
    api_key = self._resolve_api_key(kwargs)

    self._clamp_temperature(kwargs)

    extra_headers = {}
    if "extra_headers" in kwargs:
        extra_headers.update(kwargs.pop("extra_headers"))
    extra_headers["Authorization"] = f"Bearer {api_key}"

    params = {
        "model": f"openai/{model}",
        "messages": kwargs.get("messages", []),
        "api_base": self.base_url,
        "api_key": api_key,
```
Propagate api_base overrides into the MiniMax request params.
ComputerAgent exposes api_base as a public override and threads it through in libs/python/agent/agent/agent.py Lines 214-215 and 979-985, but _build_params() always sends self.base_url. That makes ComputerAgent(api_base=...) and agent.run(..., api_base=...) no-ops for MiniMax, so proxy/custom-endpoint setups still hit the default MiniMax base URL.
🔧 Proposed fix

```diff
 def _build_params(self, kwargs: dict, stream: bool = False) -> dict:
+    kwargs = dict(kwargs)
     model = self._normalize_model(kwargs.get("model", ""))
     api_key = self._resolve_api_key(kwargs)
+    api_base = kwargs.get("api_base") or self.base_url
     self._clamp_temperature(kwargs)
@@
     params = {
         "model": f"openai/{model}",
         "messages": kwargs.get("messages", []),
-        "api_base": self.base_url,
+        "api_base": api_base,
         "api_key": api_key,
         "extra_headers": extra_headers,
         "stream": stream,
     }
```
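The override resolution this fix proposes reduces to a one-line fallback. A sketch under the assumption that the adapter's default base URL is the `api.minimax.io/v1` endpoint named in the PR description:

```python
DEFAULT_BASE_URL = "https://api.minimax.io/v1"  # default per the PR description


def resolve_api_base(kwargs: dict, default: str = DEFAULT_BASE_URL) -> str:
    # Prefer a per-call override from kwargs; fall back to the configured default.
    return kwargs.get("api_base") or default


print(resolve_api_base({}))                                      # https://api.minimax.io/v1
print(resolve_api_base({"api_base": "https://proxy.local/v1"}))  # https://proxy.local/v1
```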
```python
def test_default_base_url(self):
    adapter = MiniMaxAdapter()
    assert adapter.base_url == MINIMAX_API_BASE


def test_custom_base_url(self):
    adapter = MiniMaxAdapter(base_url="https://custom.minimax.io/v1")
    assert adapter.base_url == "https://custom.minimax.io/v1"
```
Isolate the “default base URL” assertions from the process environment.
These tests construct MiniMaxAdapter() / MiniMaxAdapter(api_key=...) and then assert against MINIMAX_API_BASE. If MINIMAX_API_BASE is already exported in the test process, the adapter intentionally prefers that env var and these unit tests fail for the wrong reason. Clear the env in the default-path tests or pass base_url=MINIMAX_API_BASE explicitly.
Also applies to: 136-149
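One way to make the default-path assertions hermetic, as the comment suggests, is to scope the environment with `unittest.mock.patch.dict`; the variable is restored (or re-removed) when the block exits. A standard-library-only sketch:

```python
import os
from unittest.mock import patch

# Simulate a shell that already exports the override.
with patch.dict(os.environ, {"MINIMAX_API_BASE": "https://other.example/v1"}):
    assert os.environ["MINIMAX_API_BASE"] == "https://other.example/v1"

# Clearing the environment inside the block forces the default code path,
# so a constructor that reads the env var would fall back to its built-in URL.
with patch.dict(os.environ, clear=True):
    assert os.environ.get("MINIMAX_API_BASE") is None

print("env isolation ok")
```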
```python
@patch("agent.adapters.minimax_adapter.completion")
def test_completion_calls_litellm(self, mock_completion):
    mock_completion.return_value = MagicMock()
    adapter = MiniMaxAdapter(api_key="test-key")
    adapter.completion(
        model="minimax/MiniMax-M2.5",
        messages=[{"role": "user", "content": "Hi"}],
    )
    mock_completion.assert_called_once()
    call_kwargs = mock_completion.call_args
    assert call_kwargs[1]["model"] == "openai/MiniMax-M2.5" or call_kwargs[0][0] if call_kwargs[0] else True
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
python - <<'PY'
from unittest.mock import MagicMock
m = MagicMock()
m(model="openai/MiniMax-M2.5")
call_args = m.call_args
expr = (call_args[1]["model"] == "openai/MiniMax-M2.5" or call_args[0][0]) if call_args[0] else True
print("args:", call_args[0])
print("kwargs:", call_args[1])
print("current_assertion_result:", expr)
PY
```

Repository: trycua/cua
Length of output: 134

🏁 Script executed:

```shell
cd libs/python/agent/tests && ls -la test_minimax_adapter.py && wc -l test_minimax_adapter.py
```

Repository: trycua/cua
Length of output: 155

🏁 Script executed:

```shell
sed -n '234,245p' libs/python/agent/tests/test_minimax_adapter.py
```

Repository: trycua/cua
Length of output: 620

🏁 Script executed:

```shell
fd "minimax_adapter.py" libs/python/agent --type f | grep -v test
```

Repository: trycua/cua
Length of output: 105

🏁 Script executed:

```shell
cat -n libs/python/agent/agent/adapters/minimax_adapter.py
```

Repository: trycua/cua
Length of output: 5197
Tighten the sync completion assertion.

The current assertion collapses to True because adapter.completion() calls the mock with keyword arguments only, leaving call_args[0] empty. This masks potential regressions in model normalization.

Fix

```diff
 mock_completion.assert_called_once()
-call_kwargs = mock_completion.call_args
-assert call_kwargs[1]["model"] == "openai/MiniMax-M2.5" or call_kwargs[0][0] if call_kwargs[0] else True
+assert mock_completion.call_args.kwargs["model"] == "openai/MiniMax-M2.5"
```
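The failure mode described above is easy to reproduce with a bare MagicMock: a keyword-only call leaves the positional tuple `call_args[0]` empty, so the trailing `if call_args[0]` guard makes the whole expression evaluate to True regardless of the model value. Inspecting `call_args.kwargs` checks the value directly:

```python
from unittest.mock import MagicMock

mock_completion = MagicMock()
mock_completion(model="openai/MiniMax-M2.5", messages=[])

call_args = mock_completion.call_args
print(call_args.args)             # () — no positional args, so the old guard short-circuits
print(call_args.kwargs["model"])  # openai/MiniMax-M2.5

# The tightened assertion fails loudly if normalization regresses.
assert mock_completion.call_args.kwargs["model"] == "openai/MiniMax-M2.5"
print("assertion ok")
```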
Codecov Report
❌ Patch coverage is
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Update documentation to recommend M2.7 as default
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
- Add unit tests for M2.7 model routing and normalization
- Add integration tests for M2.7 and M2.7-highspeed
Summary
Add MiniMax as a first-class LLM provider in the CUA agent framework via a custom litellm adapter.
Changes

- `MiniMaxAdapter`: custom litellm handler with OpenAI-compatible routing
- Registered the `minimax/` prefix in the provider map

Why
MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities. Adding it as a first-class provider enables CUA users to leverage MiniMax models for computer-use agent workflows.
Testing