Working Apple Silicon / macOS workaround for ComfyUI FP8 MPS crash after updates
I’m sharing a working local workaround for Apple Silicon users running ComfyUI on macOS with the MPS backend and hitting this error:
`TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.`
In my case, the issue returned after a ComfyUI update because the files I had previously patched were overwritten again.
What fixed it for me was patching two separate locations:
1. ComfyUI core:
`/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/float.py`
2. comfy_kitchen inside the local ComfyUI virtual environment:
`~/ComfyUI/.venv/lib/python3.12/site-packages/comfy_kitchen/backends/eager/quantization.py`
Why both patches were necessary in my setup:
- one failure path happened in ComfyUI core, inside `stochastic_rounding(...)`
- another failure path happened inside `comfy_kitchen`, in `dequantize_per_tensor_fp8(...)`

The logic of the workaround is simple:

- if the tensor is on MPS
- move the FP8-related operation to CPU
- perform the conversion or dequantization there
- move the resulting tensor back to the original device if needed
After patching both files, the official LTX 2.3 workflow worked again correctly on my Apple Silicon Mac with MPS enabled.
More importantly, in my case this was not limited to a single workflow. After applying both patches, all FP8 models I personally tested in ComfyUI were able to launch and run without triggering the Float8_e4m3fn MPS backend crash. I cannot guarantee identical results for every environment, but on my machine this consistently resolved the FP8-on-MPS failure path.
Because updates can overwrite the modified files again, I wrapped the workaround into a small Python patcher script so I can re-apply it quickly after future ComfyUI updates.
Important notes:

- this is not an official upstream fix
- this is a local workaround
- it assumes the ComfyUI Desktop app path on macOS
- it also assumes a local ComfyUI venv layout similar to ~/ComfyUI/.venv/...
- users should review and adjust paths if their installation layout is different

Sanitized patcher script:
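As an illustration only (this is a hypothetical sketch, not the author's actual script; the marker string, the backup step, and the search/replace strings are assumptions that must be adapted to your ComfyUI version and install paths):

```python
#!/usr/bin/env python3
"""Hypothetical patcher sketch: locate the two target files, skip any
that are already patched (idempotency marker), back them up, then apply
a textual search/replace. The real old/new code snippets depend on the
installed ComfyUI version and are deliberately not filled in here."""
from pathlib import Path
import shutil

MARKER = "# MPS-FP8-WORKAROUND"  # assumed marker used for idempotency

# Assumed install layout from the post; adjust if yours differs.
TARGETS = [
    Path("/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/float.py"),
    Path.home() / "ComfyUI/.venv/lib/python3.12/site-packages/comfy_kitchen/backends/eager/quantization.py",
]

def apply_patch(path: Path, old: str, new: str) -> bool:
    """Apply one search/replace to `path`, once, with a .bak backup."""
    if not path.exists():
        print(f"skip (not found): {path}")
        return False
    text = path.read_text()
    if MARKER in text:
        print(f"already patched: {path}")
        return False
    if old not in text:
        print(f"pattern not found (version mismatch?): {path}")
        return False
    shutil.copy2(path, path.with_suffix(path.suffix + ".bak"))
    path.write_text(text.replace(old, f"{new}  {MARKER}", 1))
    print(f"patched: {path}")
    return True
```

The marker is what makes re-runs safe: once a file contains it the patcher leaves that file alone, so the script can simply be re-run after every ComfyUI update.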
How to use:

1. Save the script as something like `patch_comfy_mps_fp8.py`
2. Close ComfyUI
3. Run the script with Python
4. Re-open ComfyUI
5. Re-run your FP8 workflow
If anyone else on Apple Silicon tests this successfully on additional FP8 models or workflows, it would be useful to compare results.