
feat(agent): add MiniMax as first-class LLM provider with M2.7 default#1185

Open
octo-patch wants to merge 2 commits into trycua:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 17, 2026

Summary

Add MiniMax as a first-class LLM provider in the CUA agent framework via a custom litellm adapter.

Changes

  • Add MiniMaxAdapter custom litellm handler with OpenAI-compatible routing
  • Register minimax/ prefix in the provider map
  • Support MiniMax-M2.7 (default), MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed models
  • Temperature clamping to the MiniMax-supported range [0.0, 1.0]
  • API key resolution via constructor, kwargs, or MINIMAX_API_KEY env var
  • Full streaming support (sync and async)
  • 42 unit tests + 5 integration tests
  • Update README, example.py with model options
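The API-key resolution bullet above can be sketched as follows. This is an illustrative stand-in, not the adapter's actual code; the helper name and the exact precedence order (constructor, then per-call kwargs, then environment) are assumptions — the PR only names the three sources.

```python
import os

def resolve_api_key(constructor_key=None, call_kwargs=None):
    """Illustrative precedence: explicit constructor key, then a per-call
    kwarg, then the MINIMAX_API_KEY environment variable."""
    call_kwargs = call_kwargs or {}
    key = (
        constructor_key
        or call_kwargs.get("api_key")
        or os.environ.get("MINIMAX_API_KEY")
    )
    if not key:
        raise ValueError("MiniMax API key not provided")
    return key
```

A later source in the chain is only consulted when every earlier one is absent, so a constructor key always wins over the environment.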

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities. Adding it as a first-class provider enables CUA users to leverage MiniMax models for computer-use agent workflows.

Testing

  • Unit tests cover adapter init, model normalization, API key resolution, temperature clamping, parameter building, completion routing, streaming, and ComputerAgent integration
  • Integration tests verify real API calls with M2.5 and M2.7 models
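A hermetic unit test for the temperature-clamping behavior listed above might look like this. The `clamp_temperature` function is a stand-in mirroring the described behavior, not the adapter's actual method:

```python
def clamp_temperature(params, low=0.0, high=1.0):
    """Clamp 'temperature' in place to the supported range
    (stand-in for the adapter's clamping helper)."""
    if params.get("temperature") is not None:
        params["temperature"] = max(low, min(high, params["temperature"]))
    return params

def test_clamps_above_range():
    assert clamp_temperature({"temperature": 1.7})["temperature"] == 1.0

def test_clamps_below_range():
    assert clamp_temperature({"temperature": -0.2})["temperature"] == 0.0

def test_leaves_in_range_untouched():
    assert clamp_temperature({"temperature": 0.3})["temperature"] == 0.3
```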

Add MiniMaxAdapter for litellm that routes minimax/ prefixed models
to the MiniMax Cloud API (api.minimax.io/v1). Supports MiniMax-M2.5
and MiniMax-M2.5-highspeed models with 204K context window.

- MiniMaxAdapter with temperature clamping, MINIMAX_API_KEY env var,
  streaming support, and OpenAI-compatible API routing
- Registered in litellm custom_provider_map alongside existing adapters
- 42 unit tests covering adapter init, model normalization, API key
  resolution, temperature clamping, parameter building, completion,
  streaming, and ComputerAgent integration
- 3 integration tests with live MiniMax API validation
- Updated example.py, pyproject.toml, and README.md
@vercel
Contributor

vercel bot commented Mar 17, 2026

Someone is attempting to deploy a commit to the Cua Team on Vercel.

A member of the Team first needs to authorize it.

@coderabbitai
Contributor

coderabbitai bot commented Mar 17, 2026

📝 Walkthrough

Walkthrough

This PR adds MiniMax model support to the agent library through a new MiniMaxAdapter that integrates with litellm's custom provider system. It includes adapter implementation with streaming and async support, comprehensive unit and integration tests, and documentation updates with example usage.

Changes

Cohort / File(s) Summary
Documentation & Examples
libs/python/agent/README.md, libs/python/agent/example.py
Added MiniMax setup documentation with model options table, context window notes, and commented example models (M2.5, highspeed variant).
Adapter Implementation
libs/python/agent/agent/adapters/minimax_adapter.py, libs/python/agent/agent/adapters/__init__.py
New MiniMaxAdapter class with model name normalization, API key resolution, temperature clamping [0,1], parameter building, and completion/acompletion/streaming/astreaming methods. Exported via init.py.
Agent Integration
libs/python/agent/agent/agent.py
Instantiates MiniMaxAdapter and registers it in litellm's custom_provider_map under "minimax" provider during ComputerAgent initialization.
Dependencies
libs/python/agent/pyproject.toml
Added minimax to optional-dependencies for package extras.
Unit Tests
libs/python/agent/tests/test_minimax_adapter.py
Comprehensive test coverage for adapter initialization, model normalization, API key/temperature resolution, parameter construction, completion/streaming behavior, and ComputerAgent integration.
Integration Tests
libs/python/agent/tests/test_minimax_integration.py
Live API tests for sync completion, async completion, and highspeed model variant, skipped unless MINIMAX_API_KEY environment variable is set.

Sequence Diagram

sequenceDiagram
    actor Client
    participant ComputerAgent
    participant MiniMaxAdapter
    participant litellm
    participant API as MiniMax API

    Client->>ComputerAgent: completion(model="minimax/...", messages=[...])
    ComputerAgent->>MiniMaxAdapter: completion(model="minimax/...", messages=[...])
    MiniMaxAdapter->>MiniMaxAdapter: Normalize model name (strip minimax/ prefix)
    MiniMaxAdapter->>MiniMaxAdapter: Clamp temperature to [0, 1]
    MiniMaxAdapter->>MiniMaxAdapter: Build request params with headers and api_base
    MiniMaxAdapter->>litellm: completion(model="openai/...", messages=[...], api_base=..., api_key=...)
    litellm->>API: POST /v1/chat/completions
    API-->>litellm: Response with choices
    litellm-->>MiniMaxAdapter: ModelResponse
    MiniMaxAdapter-->>ComputerAgent: ModelResponse
    ComputerAgent-->>Client: ModelResponse
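The normalization step in the sequence above can be sketched as a small pure function. This is a stand-in for the adapter's normalization, assuming (per the diagram) that the `minimax/` routing prefix is stripped and the model is re-prefixed with `openai/` so litellm takes its OpenAI-compatible code path:

```python
def normalize_model(model: str) -> str:
    """Strip the 'minimax/' routing prefix, then re-prefix with
    'openai/' for litellm's OpenAI-compatible path (illustrative)."""
    prefix = "minimax/"
    if model.startswith(prefix):
        model = model[len(prefix):]
    return f"openai/{model}"
```

A bare model name without the provider prefix passes through unchanged apart from the `openai/` routing prefix.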

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Hops of joy for MiniMax's embrace,
An adapter swift, with streaming grace,
Temperature clamped, parameters blessed,
Async and sync both pass the test,
Now larger contexts hop through our race! 🚀✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 29.82%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled.
Title Check | ✅ Passed | The title accurately describes the main change: adding MiniMax as a first-class LLM provider with the M2.7 default model.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
libs/python/agent/tests/test_minimax_integration.py (1)

15-18: Keep the live MiniMax suite explicit opt-in.

Using only MINIMAX_API_KEY as the gate means any shell or CI job that already has the secret exported will hit the real API on a plain pytest run. That makes the default suite slower, network-flaky, and potentially billable.

🔧 Suggested gate
 pytestmark = pytest.mark.skipif(
-    not os.environ.get("MINIMAX_API_KEY"),
-    reason="MINIMAX_API_KEY not set",
+    not os.environ.get("MINIMAX_API_KEY")
+    or os.environ.get("RUN_LIVE_MINIMAX_TESTS") != "1",
+    reason="Set MINIMAX_API_KEY and RUN_LIVE_MINIMAX_TESTS=1 to run live MiniMax tests",
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/python/agent/tests/test_minimax_integration.py` around lines 15 - 18,
Tests currently run against live MiniMax when only MINIMAX_API_KEY is present;
change the pytest skip condition in the pytestmark block to require an explicit
opt-in (e.g., require both MINIMAX_API_KEY and a new opt-in env var like
RUN_MINIMAX or MINIMAX_INTEGRATION set to "1") so the suite only exercises the
real API when both are present; update the skipif expression that defines
pytestmark and the test module docstring or comment to explain the required
opt-in env var.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ae9bc494-4f64-4d90-85b1-0c43c3b41d2f

📥 Commits

Reviewing files that changed from the base of the PR and between 5464de7 and 93a297e.

📒 Files selected for processing (8)
  • libs/python/agent/README.md
  • libs/python/agent/agent/adapters/__init__.py
  • libs/python/agent/agent/adapters/minimax_adapter.py
  • libs/python/agent/agent/agent.py
  • libs/python/agent/example.py
  • libs/python/agent/pyproject.toml
  • libs/python/agent/tests/test_minimax_adapter.py
  • libs/python/agent/tests/test_minimax_integration.py

Comment on lines +50 to +66
    def _build_params(self, kwargs: dict, stream: bool = False) -> dict:
        """Build parameters for the inner litellm call."""
        model = self._normalize_model(kwargs.get("model", ""))
        api_key = self._resolve_api_key(kwargs)

        self._clamp_temperature(kwargs)

        extra_headers = {}
        if "extra_headers" in kwargs:
            extra_headers.update(kwargs.pop("extra_headers"))
        extra_headers["Authorization"] = f"Bearer {api_key}"

        params = {
            "model": f"openai/{model}",
            "messages": kwargs.get("messages", []),
            "api_base": self.base_url,
            "api_key": api_key,

⚠️ Potential issue | 🟠 Major

Propagate api_base overrides into the MiniMax request params.

ComputerAgent exposes api_base as a public override and threads it through in libs/python/agent/agent/agent.py Lines 214-215 and 979-985, but _build_params() always sends self.base_url. That makes ComputerAgent(api_base=...) and agent.run(..., api_base=...) no-ops for MiniMax, so proxy/custom-endpoint setups still hit the default MiniMax base URL.

🔧 Proposed fix
     def _build_params(self, kwargs: dict, stream: bool = False) -> dict:
+        kwargs = dict(kwargs)
         model = self._normalize_model(kwargs.get("model", ""))
         api_key = self._resolve_api_key(kwargs)
+        api_base = kwargs.get("api_base") or self.base_url
 
         self._clamp_temperature(kwargs)
@@
         params = {
             "model": f"openai/{model}",
             "messages": kwargs.get("messages", []),
-            "api_base": self.base_url,
+            "api_base": api_base,
             "api_key": api_key,
             "extra_headers": extra_headers,
             "stream": stream,
         }
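The resolution pattern the fix proposes can be demonstrated in isolation. This is a sketch of the per-call-first fallback, with the default base URL taken from the PR description; names are illustrative:

```python
DEFAULT_MINIMAX_BASE = "https://api.minimax.io/v1"

def resolve_api_base(call_kwargs, adapter_base=DEFAULT_MINIMAX_BASE):
    """Prefer a per-call api_base override so agent-level settings
    propagate; fall back to the adapter's configured base URL."""
    return call_kwargs.get("api_base") or adapter_base
```

With this shape, `ComputerAgent(api_base=...)` style overrides reach the request, while callers that pass nothing still hit the configured default.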
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/python/agent/agent/adapters/minimax_adapter.py` around lines 50 - 66,
_build_params currently always uses self.base_url so per-call or agent-level
api_base overrides don't propagate; update _build_params to prefer an api_base
passed in kwargs (e.g., resolved_api_base = kwargs.get("api_base") or fallback
to self.base_url) and set params["api_base"] to that resolved value so
ComputerAgent(api_base=...) and agent.run(..., api_base=...) take effect;
reference the _build_params method and ensure you read from kwargs rather than
hardcoding self.base_url.

Comment on lines +18 to +24
    def test_default_base_url(self):
        adapter = MiniMaxAdapter()
        assert adapter.base_url == MINIMAX_API_BASE

    def test_custom_base_url(self):
        adapter = MiniMaxAdapter(base_url="https://custom.minimax.io/v1")
        assert adapter.base_url == "https://custom.minimax.io/v1"

⚠️ Potential issue | 🟡 Minor

Isolate the “default base URL” assertions from the process environment.

These tests construct MiniMaxAdapter() / MiniMaxAdapter(api_key=...) and then assert against MINIMAX_API_BASE. If MINIMAX_API_BASE is already exported in the test process, the adapter intentionally prefers that env var and these unit tests fail for the wrong reason. Clear the env in the default-path tests or pass base_url=MINIMAX_API_BASE explicitly.

Also applies to: 136-149
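One way to make the default-path tests hermetic is a small context manager that clears the variable for the duration of the assertion and restores it afterward (a sketch; pytest's `monkeypatch.delenv` would serve the same purpose inside a test function):

```python
import os
from contextlib import contextmanager

@contextmanager
def env_cleared(name):
    """Temporarily remove an environment variable so default-path
    assertions don't depend on the host environment."""
    saved = os.environ.pop(name, None)
    try:
        yield
    finally:
        if saved is not None:
            os.environ[name] = saved
```

Wrapping the adapter construction in `with env_cleared("MINIMAX_API_BASE"):` would then exercise the true default, regardless of what the test process exports.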

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/python/agent/tests/test_minimax_adapter.py` around lines 18 - 24, The
tests rely on MINIMAX_API_BASE from the process environment; to make them
hermetic, update the default-path tests (e.g., test_default_base_url and the
later block around lines 136-149) to either clear the environment variable that
exports MINIMAX_API_BASE before constructing MiniMaxAdapter() or explicitly pass
base_url=MINIMAX_API_BASE when constructing MiniMaxAdapter; ensure you reference
the MiniMaxAdapter constructor and the MINIMAX_API_BASE constant when making the
change so the adapter’s default-path behavior is tested independent of process
env.

Comment on lines +234 to +245
    @patch("agent.adapters.minimax_adapter.completion")
    def test_completion_calls_litellm(self, mock_completion):
        mock_completion.return_value = MagicMock()
        adapter = MiniMaxAdapter(api_key="test-key")
        adapter.completion(
            model="minimax/MiniMax-M2.5",
            messages=[{"role": "user", "content": "Hi"}],
        )
        mock_completion.assert_called_once()
        call_kwargs = mock_completion.call_args
        assert call_kwargs[1]["model"] == "openai/MiniMax-M2.5" or call_kwargs[0][0] if call_kwargs[0] else True

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
python - <<'PY'
from unittest.mock import MagicMock

m = MagicMock()
m(model="openai/MiniMax-M2.5")
call_args = m.call_args
expr = (call_args[1]["model"] == "openai/MiniMax-M2.5" or call_args[0][0]) if call_args[0] else True

print("args:", call_args[0])
print("kwargs:", call_args[1])
print("current_assertion_result:", expr)
PY

Repository: trycua/cua

Length of output: 134


🏁 Script executed:

cd libs/python/agent/tests && ls -la test_minimax_adapter.py && wc -l test_minimax_adapter.py

Repository: trycua/cua

Length of output: 155


🏁 Script executed:

sed -n '234,245p' libs/python/agent/tests/test_minimax_adapter.py

Repository: trycua/cua

Length of output: 620


🏁 Script executed:

fd "minimax_adapter.py" libs/python/agent --type f | grep -v test

Repository: trycua/cua

Length of output: 105


🏁 Script executed:

cat -n libs/python/agent/agent/adapters/minimax_adapter.py

Repository: trycua/cua

Length of output: 5197


Tighten the sync completion assertion.

The current assertion collapses to True because adapter.completion() calls the mock with keyword arguments only, leaving call_args[0] empty. This masks potential regressions in model normalization.

Fix
         mock_completion.assert_called_once()
-        call_kwargs = mock_completion.call_args
-        assert call_kwargs[1]["model"] == "openai/MiniMax-M2.5" or call_kwargs[0][0] if call_kwargs[0] else True
+        assert mock_completion.call_args.kwargs["model"] == "openai/MiniMax-M2.5"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libs/python/agent/tests/test_minimax_adapter.py` around lines 234 - 245, The
test test_completion_calls_litellm should assert the normalized model passed to
the mocked completion via keyword args rather than falling back to a truthy
default; update the assertion to inspect mock_completion.call_args[1]["model"]
(or use call_kwargs = mock_completion.call_args and assert
call_kwargs[1]["model"] == "openai/MiniMax-M2.5") so it fails if
MiniMaxAdapter.completion does not normalize "minimax/MiniMax-M2.5" to
"openai/MiniMax-M2.5"; keep the existing mock setup and call to
adapter.completion and only change the final assertion to explicitly check the
kwargs model value.
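The failure mode behind this finding is easy to reproduce with `unittest.mock` alone: a keyword-only call leaves `call_args.args` empty, so the original guard `if call_args[0]` is always falsy and the whole expression degrades to a bare `True`. Checking `call_args.kwargs` (available since Python 3.8) tests the intended value directly:

```python
from unittest.mock import MagicMock

mock = MagicMock()
mock(model="openai/MiniMax-M2.5")

# Keyword-only call: the positional tuple is empty, so any assertion
# gated on call_args[0] silently passes. Inspect kwargs instead.
assert mock.call_args.args == ()
assert mock.call_args.kwargs["model"] == "openai/MiniMax-M2.5"
```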

@sentry

sentry bot commented Mar 18, 2026

Codecov Report

❌ Patch coverage is 96.26168% with 12 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
...ibs/python/agent/tests/test_minimax_integration.py | 47.36% | 10 Missing ⚠️
...ibs/python/agent/agent/adapters/minimax_adapter.py | 96.87% | 2 Missing ⚠️


- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Update documentation to recommend M2.7 as default
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
- Add unit tests for M2.7 model routing and normalization
- Add integration tests for M2.7 and M2.7-highspeed
@octo-patch octo-patch changed the title feat(agent): add MiniMax as first-class LLM provider feat(agent): add MiniMax as first-class LLM provider with M2.7 default Mar 18, 2026
