Which version of LM Studio?
Version 0.4.4+1 (0.4.4+1)
Which operating system?
macOS 26.2 (25C56)
Sorry if I submitted the issue to the wrong place!
Moved from lmstudio-ai/lmstudio-bug-tracker#1490
What is the bug?
When loading an mxfp8-quantized model that has a multi_modal_projector, the load fails if the shards don't contain the multi_modal_projector.*.biases keys, even though multimodal_projector_bias: false is specified in config.json.
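For context, here is a minimal sketch of the failure mode (my own repro code, not the engine's): mlx's Module.load_weights raises this exact ValueError when strict loading (the default) can't find a model parameter among the checkpoint keys.

```python
# Minimal sketch (assumed repro): strict load_weights fails on any model
# parameter with no matching checkpoint key, which is what happens with
# the absent multi_modal_projector.*.biases entries in the log below.
import mlx.core as mx
import mlx.nn as nn

layer = nn.Linear(4, 4)  # has 'weight' and 'bias' parameters
try:
    # Deliberately omit 'bias', mirroring shards that lack .biases keys.
    layer.load_weights([("weight", mx.zeros((4, 4)))], strict=True)
except ValueError as e:
    print(e)  # e.g. "Missing 1 parameters: bias."
```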
Logs
For the mxfp8 model:
```
2026-02-07 23:56:40 [DEBUG]
[model_kit][INFO]: Loading model from /Users/foxtr0t/.lmstudio/models/Foxtr0t/Ministral-3-14B-Reasoning-2512-mlx-mxfp8...
2026-02-07 23:56:45 [DEBUG]
[fix_mistral_pre_tokenizer][INFO]: Detected mistral model. Checking if tokenizer needs fixing...
2026-02-07 23:56:45 [DEBUG]
[fix_mistral_pre_tokenizer][INFO]: Tokenizer is of type <class 'transformers.tokenization_utils_tokenizers.TokenizersBackend'>. Skipping fix.
2026-02-07 23:56:46 [DEBUG]
ValueError: Missing 3 parameters:
multi_modal_projector.linear_1.biases,
multi_modal_projector.linear_2.biases,
multi_modal_projector.patch_merger.merging_layer.biases.
At:
/Users/foxtr0t/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@13/lib/python3.11/site-packages/mlx/nn/layers/base.py(191): load_weights
/Users/foxtr0t/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@13/lib/python3.11/site-packages/mlx_engine/model_kit/vision_add_ons/load_utils.py(182): prepare_components
/Users/foxtr0t/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@13/lib/python3.11/site-packages/mlx_engine/model_kit/vision_add_ons/load_utils.py(251): load_vision_addon
/Users/foxtr0t/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@13/lib/python3.11/site-packages/mlx_engine/model_kit/vision_add_ons/mistral3.py(45): __init__
/Users/foxtr0t/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@13/lib/python3.11/site-packages/mlx_engine/model_kit/model_kit.py(114): _full_model_init
/Users/foxtr0t/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@13/lib/python3.11/site-packages/mlx_engine/model_kit/model_kit.py(129): __init__
/Users/foxtr0t/.lmstudio/extensions/backends/vendor/_amphibian/app-mlx-generate-mac14-arm64@13/lib/python3.11/site-packages/mlx_engine/generate.py(119): load_model
2026-02-07 23:56:46 [DEBUG]
lmstudio-llama-cpp: failed to load model. Error: Error when loading model: ValueError: Missing 3 parameters:
multi_modal_projector.linear_1.biases,
multi_modal_projector.linear_2.biases,
multi_modal_projector.patch_merger.merging_layer.biases.
```
To Reproduce
Steps to reproduce the behavior:
- Convert the model using mlx-vlm:
```python
from mlx_vlm import convert

hf_path = "mistralai/Ministral-3-14B-Reasoning-2512"
upload_repo = "Foxtr0t/Ministral-3-14B-Reasoning-2512-mlx-mxfp8"
mlx_path = upload_repo.split('/')[-1]

convert(hf_path, mlx_path, quantize=True, q_group_size=32, q_bits=8, q_mode="mxfp8")
```
- Copy the converted model to the LM Studio model directory (~/.lmstudio/models/Foxtr0t/Ministral-3-14B-Reasoning-2512-mlx-mxfp8).
- Load the model.
- LM Studio complains about missing parameters, but those parameters shouldn't exist in the first place unless the mode is affine or multimodal_projector_bias: true. (A verification sketch follows this list.)
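To double-check, here is a small sketch (assuming the converted directory from step 1 is in the current working directory) that lists the projector keys actually written to the shards:

```python
# Sketch: list multi_modal_projector.* keys in the converted shards to
# confirm that mxfp8 conversion writes no .biases entries for them.
import mlx.core as mx
from pathlib import Path

model_dir = Path("Ministral-3-14B-Reasoning-2512-mlx-mxfp8")  # from step 1
weights = {}
for shard in sorted(model_dir.glob("*.safetensors")):
    weights.update(mx.load(str(shard)))

for key in sorted(k for k in weights if k.startswith("multi_modal_projector")):
    print(key)  # expect .weight/.scales entries but no .biases for mxfp8
```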
The Points?
affine works fine because the default quantization mode is affine.
Other quantization modes might break just like mxfp8 if the multi_modal_projector is quantized, since only affine mode stores per-group .biases alongside .scales.
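A hedged sketch of the kind of guard the loader could use (a hypothetical helper, not the actual change in the linked PR): only require quantization .biases keys when the mode is affine.

```python
# Hypothetical guard (a sketch, not the actual patch): decide from
# config.json whether quantization .biases keys should be required.
def expects_quant_biases(config: dict) -> bool:
    quant = config.get("quantization") or {}
    # Affine quantization stores per-group scales *and* biases; non-affine
    # modes such as mxfp8 store only scales, so .biases keys are absent.
    return quant.get("mode", "affine") == "affine"
```

The loader could then drop the .biases entries from the expected parameter set (or relax strict loading for just those keys) when this returns False.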
Proposed fix: #271