Transformer language models on Apple Silicon, built with MLX.
## Installation

```shell
pip install lmxlab
```

Requires Python 3.12+ and Apple Silicon (M1 or later). MLX also runs on Intel and Linux, but CPU-only.
## Quick start

```python
import mlx.core as mx

from lmxlab.models.llama import llama_config
from lmxlab.models.base import LanguageModel

# Build a small LLaMA-style config and instantiate the shared model class
config = llama_config(vocab_size=32000, d_model=512, n_heads=8, n_kv_heads=4, n_layers=6)
model = LanguageModel(config)
mx.eval(model.parameters())  # force MLX's lazy arrays to materialize

tokens = mx.array([[1, 234, 567]])
logits, caches = model(tokens)
```

Architecture variants (GPT, LLaMA, DeepSeek, Gemma, Qwen, Mixtral, etc.) are config factories: the same `LanguageModel` class with different settings.
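The config-factory idea above can be sketched in plain Python. This is a hypothetical illustration, not lmxlab's actual code; the field names and defaults are assumptions for demonstration.

```python
from dataclasses import dataclass

# Hypothetical sketch of the config-factory pattern: one config type,
# many per-architecture factories that bake in defaults.
@dataclass(frozen=True)
class ModelConfig:
    vocab_size: int
    d_model: int
    n_heads: int
    n_kv_heads: int
    n_layers: int

def llama_like_config(**overrides) -> ModelConfig:
    # Each architecture factory fixes its own defaults; callers
    # override only what differs.
    defaults = dict(vocab_size=32000, d_model=512, n_heads=8,
                    n_kv_heads=4, n_layers=6)
    defaults.update(overrides)
    return ModelConfig(**defaults)

tiny = llama_like_config(n_layers=2)
print(tiny.n_layers, tiny.d_model)
```

A single model class consuming many factories keeps architecture differences in data rather than in subclasses.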
## CLI

```shell
lmxlab list                     # Show available architectures
lmxlab info llama --tiny        # Config details
lmxlab count deepseek --detail  # Parameter breakdown
```

Full API docs at michaelellis003.github.io/lmxlab.
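A parameter breakdown like `lmxlab count` produces can be estimated by hand. The sketch below is a back-of-envelope count for the tiny config from the quick start, under assumptions that are mine, not lmxlab's: a SwiGLU MLP with `d_ff = 4 * d_model`, RMSNorm weight vectors, grouped-query attention, and embeddings tied with the LM head.

```python
def approx_llama_params(vocab_size, d_model, n_heads, n_kv_heads,
                        n_layers, d_ff=None):
    """Rough parameter count for a LLaMA-style model (assumed layout)."""
    d_ff = d_ff or 4 * d_model
    head_dim = d_model // n_heads
    kv_dim = n_kv_heads * head_dim
    # Attention: Q and O are d_model x d_model; K and V shrink to kv_dim
    # under grouped-query attention.
    attn = 2 * d_model * d_model + 2 * d_model * kv_dim
    # SwiGLU MLP: gate, up, and down projections.
    mlp = 3 * d_model * d_ff
    # Two RMSNorm weight vectors per block.
    norms = 2 * d_model
    blocks = n_layers * (attn + mlp + norms)
    # Token embedding (tied with the output head) plus the final norm.
    return blocks + vocab_size * d_model + d_model

print(approx_llama_params(32000, 512, 8, 4, 6))  # roughly 40M parameters
```

The real breakdown depends on choices this sketch guesses at (MLP width, tied vs. untied embeddings), so expect `lmxlab count` to differ.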
## Development

```shell
git clone https://github.com/michaelellis003/lmxlab.git
cd lmxlab
uv sync --extra dev
uv run pre-commit install
uv run pytest
```

## License

MIT