1 change: 1 addition & 0 deletions docs/integrations/llms/index.md
@@ -6,4 +6,5 @@ This section provides tutorials on incorporating alternative large language models
:maxdepth: 1
:hidden:

minimax
```
106 changes: 106 additions & 0 deletions docs/integrations/llms/minimax.md
@@ -0,0 +1,106 @@
# MiniMax

[MiniMax](https://www.minimaxi.com/) provides large language models accessible through an OpenAI-compatible API. This guide shows how to integrate MiniMax models into your prompt flow.

## Prerequisites

- A MiniMax API key from [MiniMax Platform](https://platform.minimaxi.com/)

## Available Models

| Model | Context Window | Description |
|-------|---------------|-------------|
| `MiniMax-M2.5` | 204K tokens | General-purpose model |
| `MiniMax-M2.5-highspeed` | 204K tokens | Faster variant for lower latency |

## Setup Connection

MiniMax uses an OpenAI-compatible API, so you can use prompt flow's built-in `OpenAIConnection` with a custom `base_url`.

### Using CLI

```bash
pf connection create -f minimax.yml --set api_key=<your-minimax-api-key>
```

Where `minimax.yml` contains:

```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: minimax_connection
type: open_ai
api_key: "<user-input>"
base_url: "https://api.minimax.io/v1"
```

### Using Python SDK

```python
from promptflow.core import OpenAIModelConfiguration

config = OpenAIModelConfiguration(
model="MiniMax-M2.5",
base_url="https://api.minimax.io/v1",
api_key="your-minimax-api-key",
)
```

### Using Environment Variables

```bash
export MINIMAX_API_KEY="your-api-key"
```

```python
import os
from promptflow.core import OpenAIModelConfiguration

config = OpenAIModelConfiguration(
model="MiniMax-M2.5",
base_url="https://api.minimax.io/v1",
api_key=os.environ["MINIMAX_API_KEY"],
)
```

## Use in a Prompty File

```yaml
---
name: MiniMax Chat
model:
api: chat
configuration:
type: openai
model: MiniMax-M2.5
base_url: https://api.minimax.io/v1
parameters:
temperature: 0.7
max_tokens: 1024
---
```

## Use in a Flex Flow

```python
from promptflow.core import OpenAIModelConfiguration, Prompty
from promptflow.tracing import trace

config = OpenAIModelConfiguration(
model="MiniMax-M2.5",
base_url="https://api.minimax.io/v1",
api_key="your-api-key",
)

prompty = Prompty.load(source="chat.prompty", model={"configuration": config})
result = prompty(question="Hello!")
```

## Notes

- **Temperature**: MiniMax accepts temperature values in the range `[0.0, 1.0]`.
- **Context window**: Both MiniMax-M2.5 models support up to 204K tokens.
- **Streaming**: Supported via the standard OpenAI streaming interface.
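The temperature and context-window constraints above can be enforced client-side before a request is sent. A minimal sketch — the helper names and constants here are illustrative assumptions for this guide, not part of any MiniMax or prompt flow API:

```python
# Illustrative helpers for the notes above; names and constants are
# assumptions for this guide, not part of a MiniMax SDK.

MINIMAX_MAX_TEMPERATURE = 1.0
MINIMAX_CONTEXT_WINDOW = 204_000  # tokens, per the model table above


def clamp_temperature(temperature: float) -> float:
    """Clamp a requested temperature into MiniMax's accepted range [0.0, 1.0]."""
    return max(0.0, min(MINIMAX_MAX_TEMPERATURE, temperature))


def fits_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """Rough check that prompt plus completion stays within the 204K window."""
    return prompt_tokens + max_output_tokens <= MINIMAX_CONTEXT_WINDOW
```

Clamping rather than raising keeps a flow usable when a caller passes an OpenAI-style temperature above `1.0`.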

## Example

See the complete working example at [examples/flex-flows/chat-with-minimax](../../examples/flex-flows/chat-with-minimax/).
5 changes: 5 additions & 0 deletions examples/connections/minimax.yml
@@ -0,0 +1,5 @@
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: minimax_connection
type: open_ai
api_key: "<user-input>" # Your MiniMax API key from https://platform.minimaxi.com
base_url: "https://api.minimax.io/v1"
58 changes: 58 additions & 0 deletions examples/flex-flows/chat-with-minimax/README.md
@@ -0,0 +1,58 @@
# Chat with MiniMax

This example demonstrates how to use [MiniMax](https://www.minimaxi.com/) as an LLM provider in prompt flow. MiniMax provides an OpenAI-compatible API, so it integrates seamlessly with prompt flow's existing OpenAI connection type.

## Prerequisites

- Install prompt flow: `pip install promptflow promptflow-tools`
- A MiniMax API key from [MiniMax Platform](https://platform.minimaxi.com/)

## Available Models

| Model | Description |
|-------|-------------|
| `MiniMax-M2.5` | General-purpose model with 204K context window |
| `MiniMax-M2.5-highspeed` | Faster variant optimized for lower latency |

## Setup

### Option 1: Using environment variable

```bash
export MINIMAX_API_KEY="your-api-key-here"
```

### Option 2: Using a prompt flow connection

Create a MiniMax connection using the OpenAI connection type with a custom base URL:

```bash
pf connection create -f ../../connections/minimax.yml --set api_key=<your-api-key>
```

## Run the example

### Run directly

```bash
export MINIMAX_API_KEY="your-api-key-here"
python flow.py
```

### Run as a flow

```bash
pf flow test --flow . --inputs question="What is Prompt flow?"
```

### Run with batch data

```bash
pf run create --flow . --data data.jsonl --column-mapping question='${data.question}' --stream
```

## Notes

- MiniMax's API is OpenAI-compatible, so it works with prompt flow's `OpenAIConnection` by setting `base_url` to `https://api.minimax.io/v1`.
- Temperature values are accepted in the range `[0.0, 1.0]`.
- The `MiniMax-M2.5` model supports a 204K context window, suitable for long-document analysis.
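Because the API is OpenAI-compatible, the chat template in this example ultimately renders to a standard OpenAI-style `messages` list. A pure-Python sketch of that assembly (the function is a hypothetical illustration, not part of prompt flow):

```python
# Hypothetical illustration of how a question plus chat history map onto an
# OpenAI-style messages list; not a prompt flow API.

def build_messages(question: str, chat_history: list) -> list:
    """Assemble the messages list: system prompt, prior turns, then the question."""
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for item in chat_history:
        messages.append({"role": item["role"], "content": item["content"]})
    messages.append({"role": "user", "content": question})
    return messages
```

In the example itself this shaping is done by the Jinja template in `chat.prompty`; the sketch just makes the wire format explicit.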
31 changes: 31 additions & 0 deletions examples/flex-flows/chat-with-minimax/chat.prompty
@@ -0,0 +1,31 @@
---
name: MiniMax Chat
model:
api: chat
configuration:
type: openai
model: MiniMax-M2.5
base_url: https://api.minimax.io/v1
parameters:
temperature: 0.7
max_tokens: 1024
inputs:
question:
type: string
chat_history:
type: list
sample:
question: "What is Prompt flow?"
chat_history: []
---

system:
You are a helpful assistant.

{% for item in chat_history %}
{{item.role}}:
{{item.content}}
{% endfor %}

user:
{{question}}
3 changes: 3 additions & 0 deletions examples/flex-flows/chat-with-minimax/data.jsonl
@@ -0,0 +1,3 @@
{"question": "What is Prompt flow?"}
{"question": "How do I create a flow?"}
{"question": "What are the benefits of using Prompt flow?"}
12 changes: 12 additions & 0 deletions examples/flex-flows/chat-with-minimax/flow.flex.yaml
@@ -0,0 +1,12 @@
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
entry: flow:ChatFlow
sample:
inputs:
question: What is Prompt flow?
init:
model_config:
connection: minimax_connection
model: MiniMax-M2.5
max_total_token: 4096
environment:
python_requirements_txt: requirements.txt
98 changes: 98 additions & 0 deletions examples/flex-flows/chat-with-minimax/flow.py
@@ -0,0 +1,98 @@
import os
from pathlib import Path

from promptflow.tracing import trace
from promptflow.core import OpenAIModelConfiguration, Prompty

BASE_DIR = Path(__file__).absolute().parent

# MiniMax API base URL (OpenAI-compatible)
MINIMAX_BASE_URL = "https://api.minimax.io/v1"

# Available MiniMax models
MINIMAX_MODELS = {
"MiniMax-M2.5": "General-purpose model with 204K context window",
"MiniMax-M2.5-highspeed": "Faster variant optimized for lower latency",
}


def _clamp_temperature(temperature: float) -> float:
"""Clamp temperature to MiniMax's accepted range [0.0, 1.0]."""
return max(0.0, min(1.0, temperature))


class ChatFlow:
"""A chat flow powered by MiniMax's LLM via the OpenAI-compatible API.

MiniMax provides an OpenAI-compatible API endpoint, so it works seamlessly
with prompt flow's OpenAI connection type by setting the base_url.
"""

def __init__(
self,
model_config: OpenAIModelConfiguration,
max_total_token: int = 4096,
):
self.model_config = model_config
self.max_total_token = max_total_token

@trace
def __call__(
self,
question: str = "What is Prompt flow?",
chat_history: list = None,
) -> str:
"""Flow entry function."""
prompty = Prompty.load(
source=BASE_DIR / "chat.prompty",
model={"configuration": self.model_config},
)

chat_history = chat_history or []
while len(chat_history) > 0:
token_count = prompty.estimate_token_count(
question=question, chat_history=chat_history
)
if token_count > self.max_total_token:
chat_history = chat_history[1:]
else:
break

output = prompty(question=question, chat_history=chat_history)
return output


def get_minimax_config(
model: str = "MiniMax-M2.5",
api_key: str = None,
) -> OpenAIModelConfiguration:
"""Create an OpenAIModelConfiguration pre-configured for MiniMax.

Args:
model: MiniMax model name. Options: MiniMax-M2.5, MiniMax-M2.5-highspeed.
api_key: MiniMax API key. Falls back to MINIMAX_API_KEY env var.

Returns:
OpenAIModelConfiguration configured for MiniMax.
"""
api_key = api_key or os.environ.get("MINIMAX_API_KEY")
if not api_key:
raise ValueError(
"MiniMax API key is required. Set MINIMAX_API_KEY environment variable "
"or pass api_key parameter."
)
return OpenAIModelConfiguration(
model=model,
base_url=MINIMAX_BASE_URL,
api_key=api_key,
)


if __name__ == "__main__":
from promptflow.tracing import start_trace

start_trace()
config = get_minimax_config(model="MiniMax-M2.5")
flow = ChatFlow(config)
result = flow("What is Prompt flow?", [])
print(result)
4 changes: 4 additions & 0 deletions examples/flex-flows/chat-with-minimax/requirements.txt
@@ -0,0 +1,4 @@
promptflow
promptflow-tools
openai>=1.0.0
python-dotenv
Empty file added tests/minimax/__init__.py