Description
I know it may be early to suggest this, but there's some impressive tech that lets massive MoE models run on machines with far less RAM. It combines the very fast NVMe disk on a Mac with Metal/MLX, exploiting the way MoE models work (only a few experts are active per token). Here's an example of a 397B model that runs reasonably fast (5-7 tok/sec) on an M3 laptop; I just confirmed it works on my 64GB M3 laptop.
It would be super amazing if you could support this. It would let us experiment with massive models on machines with much less RAM, to see whether they give better functional results even if the tok/sec isn't as fast as models that fit entirely in GPU memory.
https://github.com/danveloper/flash-moe
based on the paper
https://github.com/danveloper/flash-moe/blob/main/paper/flash_moe.pdf
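To illustrate the core idea (this is my own rough sketch, not flash-moe's actual code or API): keep the expert weights in a file on fast NVMe and memory-map them, so for each token only the few experts the router selects ever get paged into RAM. A toy version with NumPy:

```python
# Hypothetical sketch of NVMe-backed MoE expert streaming.
# Assumption: experts live in an mmap'd weights file; only the slices the
# router selects are actually read from disk.
import os
import tempfile

import numpy as np

N_EXPERTS, D_IN, D_OUT, TOP_K = 8, 4, 4, 2

# Write dummy expert weights to disk (stand-in for a model checkpoint).
rng = np.random.default_rng(0)
all_w = rng.standard_normal((N_EXPERTS, D_IN, D_OUT)).astype(np.float32)
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(all_w.tobytes())
tmp.close()

# Memory-map the file: no expert is resident until its pages are touched.
experts = np.memmap(tmp.name, dtype=np.float32, mode="r",
                    shape=(N_EXPERTS, D_IN, D_OUT))

def moe_forward(x, router_logits):
    # Pick the top-k experts; only those weight slices get paged in.
    top = np.argsort(router_logits)[-TOP_K:]
    gates = np.exp(router_logits[top])
    gates /= gates.sum()
    # Gated sum over the selected experts' outputs.
    return sum(g * (x @ np.asarray(experts[e])) for g, e in zip(gates, top))

x = rng.standard_normal(D_IN).astype(np.float32)
y = moe_forward(x, rng.standard_normal(N_EXPERTS))
print(y.shape)  # one output vector of size D_OUT
os.unlink(tmp.name)
```

The win is that peak resident memory scales with the number of *active* experts per token (TOP_K) rather than the total expert count, which is why a 397B MoE can run on 64GB when the NVMe is fast enough to keep up with expert paging.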