
feat: add MiniMax prompt enhancer node#466

Open
Octopus (octo-patch) wants to merge 1 commit into Lightricks:master from octo-patch:feature/add-minimax-prompt-enhancer

Conversation

@octo-patch

Summary

  • Add MiniMaxPromptEnhancer ComfyUI node that calls the MiniMax chat completions API to cinematically enhance LTX-Video generation prompts
  • Supports both T2V (text-to-video) and I2V (image-to-video) cinematic enhancement modes, reusing the existing system prompts from prompt_enhancer_utils.py
  • Supports MiniMax-M2.7 (default) and MiniMax-M2.7-highspeed models
  • Automatically strips <think>…</think> reasoning tokens that MiniMax-M2.7 may emit
  • An API-based alternative to the local LLM prompt enhancer — no local GPU / VRAM required
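The reasoning-token stripping mentioned above can be sketched as a small helper. This is an illustrative implementation, not the PR's actual code; the function name `strip_think_tags` is assumed:

```python
import re

def strip_think_tags(text: str) -> str:
    """Remove <think>...</think> reasoning blocks that MiniMax-M2.7 may
    emit before its answer, then trim surrounding whitespace.
    Illustrative helper; name and exact behavior are assumptions."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

The `re.DOTALL` flag matters because the reasoning block typically spans multiple lines.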

Why

The existing LTXVPromptEnhancerLoader + LTXVPromptEnhancer nodes require downloading and running Llama 3.2 locally (several GB, significant VRAM). This new node lets users get high-quality cinematic prompt enhancement via the MiniMax API without any local model loading, similar in spirit to how GemmaAPITextEncode provides an API alternative to local Gemma.

Usage

  1. Set your MINIMAX_API_KEY (get one at https://platform.minimax.io/)
  2. Add the 🅛🅣🅧 MiniMax Prompt Enhancer node to your ComfyUI workflow
  3. Connect its enhanced_prompt output to a CLIP text encode node
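Per the commit message, the key can come either from the node's `api_key` input or from the `MINIMAX_API_KEY` environment variable. A minimal sketch of that resolution order (the helper name `resolve_api_key` is hypothetical, not from the PR):

```python
import os

def resolve_api_key(api_key: str = "") -> str:
    """Prefer an explicit api_key input; otherwise fall back to the
    MINIMAX_API_KEY environment variable. Illustrative helper only."""
    key = api_key.strip() or os.environ.get("MINIMAX_API_KEY", "")
    if not key:
        raise ValueError("Provide api_key or set MINIMAX_API_KEY")
    return key
```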

API Reference

  • https://platform.minimax.io/docs/api-reference/text-openai-api

Test plan

  • 20 unit tests pass (python -m pytest tests/)
  • Integration test against MiniMax API verified locally
  • Node appears in ComfyUI under api node/text/Lightricks category
  • Registered in both NODE_CLASS_MAPPINGS and nodes_registry.py

- Add MiniMaxPromptEnhancer ComfyUI node that calls MiniMax chat
  completions API to cinematically enhance video generation prompts
- Add MINIMAX_API_KEY environment variable support via api_key input
- Support both T2V and I2V cinematic prompt enhancement modes
- Support MiniMax-M2.7 and MiniMax-M2.7-highspeed models
- Strip <think>…</think> reasoning tokens from MiniMax-M2.7 responses
- Add unit tests (20 tests, all passing)
- Add pyproject.toml with pytest configuration

API reference:
- https://platform.minimax.io/docs/api-reference/text-openai-api
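Since the linked reference describes an OpenAI-compatible chat completions API, the request the node sends presumably looks like a standard chat payload with the T2V or I2V system prompt plus the user's raw prompt. A sketch of building that payload, assuming OpenAI-style field names (the endpoint URL and helper name are assumptions; consult the reference above for the real values):

```python
# Hypothetical endpoint; check the MiniMax API reference for the real URL.
MINIMAX_CHAT_URL = "https://api.minimax.io/v1/text/chatcompletion_v2"

def build_enhance_request(user_prompt: str, system_prompt: str,
                          model: str = "MiniMax-M2.7") -> dict:
    """Build an OpenAI-style chat completions payload that asks the model
    to rewrite a generation prompt cinematically. Field names follow the
    OpenAI-compatible format the PR's API reference describes."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
```

The response's assistant message would then be passed through the `<think>` stripping step before being returned as `enhanced_prompt`.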
