Web interface for monitoring and analyzing rLLM training runs in real time. Think of it as a wandb dedicated to rLLM, with powerful features such as episode/trajectory search, an observability AI agent, and more.
There are two ways to access rLLM UI:
- Cloud — Use our hosted service at ui.rllm-project.com (see below).
- Self-hosted — Run locally from the repository (see below).
- Run `rllm login`
- Sign up at ui.rllm-project.com
- Copy your API key (shown once at registration) and paste it into the terminal (or save it as `RLLM_API_KEY` in `.env`)
That's it. No database setup or other configuration is needed.
Note
The observability AI agent can be enabled by adding your ANTHROPIC_API_KEY in the Settings page in the UI — no extra configuration needed.
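To illustrate the key lookup described above, here is a minimal, hypothetical helper that resolves `RLLM_API_KEY` from the process environment with a `.env` fallback. The function name and parsing logic are assumptions for illustration, not rLLM's actual code:

```python
def resolve_rllm_api_key(environ, dotenv_lines):
    """Return RLLM_API_KEY from an environment mapping, falling back to
    .env-style lines. Hypothetical helper: rLLM's own lookup may differ."""
    if environ.get("RLLM_API_KEY"):
        return environ["RLLM_API_KEY"]
    for line in dotenv_lines:
        line = line.strip()
        if line.startswith("RLLM_API_KEY="):
            # Accept both quoted and bare values after the '='
            return line.split("=", 1)[1].strip().strip('"').strip("'")
    return None
```

In a real script you would pass `os.environ` and the lines of your `.env` file.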
```shell
git clone https://github.com/rllm-org/rllm-ui.git
cd rllm-ui
cp .env.example .env
# Edit .env — add ANTHROPIC_API_KEY to enable the AI agent (optional)
docker-compose up
```

Open http://localhost:3000. Data is stored in a Docker volume and persists across restarts.
If your agent traces are in BigQuery, use the BigQuery overlay to mount your GCP credentials:
```shell
# Authenticate with GCP (one-time)
gcloud auth application-default login

# Start with BigQuery support
docker-compose -f docker-compose.yml -f docker-compose.bigquery.yml up
```

Then configure your BQ project, dataset, and table in the UI — see the Agent Observability Guide for detailed steps.
Alternatively, for service account key auth, see the comments in docker-compose.bigquery.yml.
```shell
# Install dependencies
cd api && pip install -r requirements.txt
cd ../frontend && npm install

# Run (two terminals)
cd api && uvicorn main:app --reload --port 8000
cd frontend && npm run dev
```

Open http://localhost:5173.
- SQLite (default) — No setup required. Created automatically on first run.
- PostgreSQL — Adds full-text search with stemming and relevance ranking. Set `DATABASE_URL` in `.env`:

```shell
DATABASE_URL="postgresql://user:pass@localhost:5432/rllm_ui"
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `ANTHROPIC_API_KEY` | No | — | Enables the built-in observability AI agent |
| `DATABASE_URL` | No | SQLite | PostgreSQL connection string |
| `RLLM_UI_URL` | No | http://localhost:3000 | URL of your rllm-ui server (for training scripts) |
BigQuery project, dataset, and table are configured in the UI (Settings page or on first use). See the Agent Observability Guide.
Regardless of which service you use (cloud or self-hosted), add `ui` to your trainer's logger list in your rLLM training script:

```shell
trainer.logger="['console','wandb','ui']"
```

If you use our cloud service and the rLLM CLI, you can launch training and eval runs like this:

```shell
rllm train [dataset name]
rllm eval [dataset name]
```

If you are logged in, traces will automatically stream to the UI.
rLLM connects to the UI via the `UILogger` backend, registered as `"ui"` in the `Tracking` class (`rllm/utils/tracking.py`).
On init, the logger:
- Creates a training session via `POST /api/sessions`
- Starts a background heartbeat thread (for crash detection)
- Wraps `stdout`/`stderr` with `TeeStream` to capture training logs
During training, the logger sends data over HTTP.
So the overall flow is: the training script's `UILogger` creates a session, heartbeats in the background, and streams captured logs and training data to the rllm-ui server over HTTP, which the UI displays in real time.
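The init steps above can be sketched as a toy Python class. Everything here is illustrative: the class name, the heartbeat endpoint, and the `send_fn` hook are assumptions standing in for the real HTTP layer, not rLLM's actual implementation.

```python
import io
import sys
import threading
import time

class TeeStream:
    """Duplicate writes to the original stream and a capture buffer
    (mirrors the stdout/stderr wrapping described above)."""
    def __init__(self, original, capture):
        self.original = original
        self.capture = capture

    def write(self, text):
        self.original.write(text)
        self.capture.write(text)
        return len(text)

    def flush(self):
        self.original.flush()
        self.capture.flush()

class UILoggerSketch:
    """Toy version of the init flow: create a session, start a heartbeat
    thread, and tee stdout. send_fn stands in for the HTTP client."""
    def __init__(self, send_fn, heartbeat_interval=1.0):
        self.send_fn = send_fn
        self.session = send_fn("POST /api/sessions")       # 1. create session
        self._stop = threading.Event()                     # 2. heartbeat thread
        self._hb = threading.Thread(
            target=self._heartbeat, args=(heartbeat_interval,), daemon=True)
        self._hb.start()
        self.capture = io.StringIO()                       # 3. tee stdout
        self._orig_stdout = sys.stdout
        sys.stdout = TeeStream(self._orig_stdout, self.capture)

    def _heartbeat(self, interval):
        # Fires until close() sets the stop event; endpoint is hypothetical
        while not self._stop.wait(interval):
            self.send_fn("POST /api/heartbeat")

    def close(self):
        self._stop.set()
        self._hb.join()
        sys.stdout = self._orig_stdout

calls = []
logger = UILoggerSketch(lambda ep: calls.append(ep) or {"ok": True},
                        heartbeat_interval=0.05)
print("step 1: loss=0.5")   # reaches both the console and the capture buffer
time.sleep(0.12)            # let a couple of heartbeats fire
logger.close()
```

Recording every `send_fn` call in a list makes the sequencing easy to inspect: the session POST always comes first, followed by periodic heartbeats until shutdown.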

