Got the message [multimodal] (one or multiple times) when I am using #7219
Replies: 2 comments 1 reply
-
Thanks, I changed from docker to podman.
-
Multimodal messages can be confusing! At RevolutionAI (https://revolutionai.io) we use LocalAI vision.

Understanding the message: "[multimodal]" means LocalAI detected image content.

Proper multimodal request:

```json
{
  "model": "llava",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "text", "text": "What is in this image?"},
      {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}}
    ]
  }]
}
```

Model config:

```yaml
name: llava
backend: llama-cpp
parameters:
  model: llava-v1.5.gguf
mmproj: llava-mmproj.gguf
```

Debug: What model are you using?
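As a rough sketch of how one might assemble such a request programmatically (the helper name `build_multimodal_payload` is illustrative, not part of LocalAI; the payload follows the OpenAI-compatible chat/completions schema that LocalAI implements):

```python
import base64
import json

def build_multimodal_payload(image_bytes: bytes, question: str, model: str = "llava") -> dict:
    """Build an OpenAI-style chat/completions payload with one text part
    and one base64 data-URL image part. Helper name and 'llava' default
    are illustrative assumptions, not a LocalAI API."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

# Placeholder bytes stand in for a real JPEG file's contents.
payload = build_multimodal_payload(b"\xff\xd8\xff\xe0...", "What is in this image?")
print(json.dumps(payload, indent=2))
```

The resulting JSON could then be POSTed to a running instance, e.g. `curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d @payload.json` (assuming LocalAI's default port 8080).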
-
Hi,
I am new to LocalAI.
I run LocalAI on a Minisforum MS-01 (for testing only) with the Intel docker image.
Here is my docker compose.
But when I run gemma-3 models (for example Gemma-3-4b-it-qat), I sometimes get a full answer, sometimes a partial answer ending with "[multimodal]", and often the answer consists of nothing but one or more "[multimodal]" entries.
Here is the model configuration.
I can't find any information about this message ("[multimodal]"). Does anyone have tips or tricks on this for me?
One note at the end: MCP is configured, but not used in this test.