How to combine model and mmproj? #19497
Hi, I've been playing with Qwen3VL-8B-Instruct-Q8_0.gguf in a Win11 + conda + CUDA environment. Could anyone please explain how to combine the mmproj GGUF with the model? This is what I tried:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen3VL-8B-Instruct-Q8_0.gguf",
    mmproj_path="./mmproj-Qwen3VL-8B-Instruct-Q8_0.gguf",
    n_ctx=1000,
    n_gpu_layers=-1,
    verbose=True,
)
```

Thank you in advance.
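For reference, a minimal sketch of how multimodal models are usually wired up in llama-cpp-python: the `Llama()` constructor has no `mmproj_path` keyword; the projector file instead goes to a chat handler via `clip_model_path`. The handler class name below (`Llava15ChatHandler`) is an assumption — newer llama-cpp-python releases may ship a dedicated handler for Qwen-VL-style models, so check the version you have installed. Paths and `n_ctx` are placeholders.

```python
def multimodal_llama_kwargs(model_path: str, mmproj_path: str) -> dict:
    """Build constructor arguments for a multimodal Llama instance.

    Sketch only: the mmproj file is loaded by the chat handler,
    not by Llama() itself.
    """
    try:
        # Assumption: handler class name; Qwen-VL may need a different one.
        from llama_cpp.llama_chat_format import Llava15ChatHandler
        handler = Llava15ChatHandler(clip_model_path=mmproj_path)
    except ImportError:
        handler = None  # llama-cpp-python not installed; shape still illustrative
    return {
        "model_path": model_path,
        "chat_handler": handler,
        "n_ctx": 4096,        # image embeddings consume context; 1000 is tight
        "n_gpu_layers": -1,   # note the plural: n_gpu_layers, not n_gpu_layer
    }

# Usage: llm = Llama(**multimodal_llama_kwargs("./model.gguf", "./mmproj.gguf"))
```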
Replies: 4 comments 3 replies
Hi, I'm using llama-server and a PowerShell script.
Received.
Hi there, the code works fine now.
Received.
You might want to ping the maintainer of the Python wrapper or ask on r/LocalLLaMA.
Alternatively, you could run the llama server and interact with it through the API in your script.
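The server route the replies describe looks roughly like this: start llama-server with both files (`llama-server -m Qwen3VL-8B-Instruct-Q8_0.gguf --mmproj mmproj-Qwen3VL-8B-Instruct-Q8_0.gguf`), then post to its OpenAI-compatible chat endpoint. Below is a hedged sketch of building such a request from Python; the port and the data-URL image format are assumptions, and the actual HTTP call is shown only as a comment.

```python
import base64

def vision_chat_payload(prompt: str, image_bytes: bytes) -> dict:
    """Build a /v1/chat/completions request body with an inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                # Image is passed inline as a data URL (PNG assumed here).
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

# Usage (assumes llama-server listening on localhost:8080):
# import json, urllib.request
# body = json.dumps(vision_chat_payload("Describe this image.", png_bytes)).encode()
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body, headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```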