A Tamagotchi-like terminal game with a local LLM "brain" powered by Ollama.
- Real-time pet simulation - Stats decay over time, evolution through life stages, and care-dependent outcomes
- Beautiful Terminal UI - Built with Charm's Bubble Tea framework with colorful stat bars and ASCII art
- LLM-Powered Personality - Your pet speaks with a unique personality using a local Ollama model
- Tool Calling - The LLM can propose events and set moods through a safe, bounded tool system
- Persistent Save - Your pet persists across sessions with automatic saving
- Deterministic Gameplay - Game rules are deterministic; the LLM only adds flavor
- Go 1.21+ - Install Go
- Ollama - Install Ollama
```bash
# Clone the repository
git clone https://github.com/danielmerja/TamaLLM.git
cd TamaLLM

# Pull the recommended model
ollama pull qwen3:1.7b

# Build and run
go run ./cmd/tamallm
```

If you don't have Ollama installed, or want to play without AI features:

```bash
go run ./cmd/tamallm --no-llm
```

| Key | Action |
|---|---|
| `Enter` / `Space` | Select / Open menu |
| `↑`/`k`, `↓`/`j` | Navigate menus |
| `Esc` / `Backspace` | Go back |
| `m` | Open action menu |
| `a` | Toggle LLM auto mode |
| `?` | Toggle help |
| `d` | Toggle debug mode |
| `q` | Quit (auto-saves) |
Press `d` to toggle debug mode, which displays:
- LLM Enabled/Pending/Auto status - Shows if LLM is active and processing
- TTS status - Shows TTS availability and any errors (e.g., "Available (via python3, voice: M2)" or "Unavailable: supertonic not found")
- Last LLM Input - The action that triggered the LLM request
- Last LLM Output - The pet's message from the LLM
- Pet Said - The current pet message being displayed
- Age and Stage - Pet's current age and life stage
- Care Score - Running average of care quality
- Suggested Action - (Auto mode only) The next recommended action
Your pet evolves through 5 stages based on age and care quality:
- Egg 🥚 - Hatches after ~30 seconds
- Baby 🐣 - Grows to child after ~2 minutes
- Child 🐥 - Becomes a teenager after ~5 minutes
- Teen 🐤 - Evolves to adult after ~10 minutes
- Adult 🐔 - Final form depends on care quality!
The type of adult your pet becomes depends on how well you cared for them:
- Healthy Adult - Good stats, few sickness events, balanced discipline
- Chubby Adult - Overfed with too many snacks
- Naughty Adult - Low discipline from lack of training
- Negligence Adult - Poor overall care quality
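One way the four outcomes above could be chosen from tracked care data — the field names, thresholds, and check order here are hypothetical, not TamaLLM's actual logic:

```go
package main

import "fmt"

// CareRecord holds the care signals the README says drive the final
// form. All fields and ranges are illustrative assumptions.
type CareRecord struct {
	CareScore  float64 // running average of care quality, 0-1
	SnackCount int     // snacks and treats fed over the pet's life
	Discipline int     // 0-100
}

// adultForm picks the final form, checking the worst outcomes first.
func adultForm(c CareRecord) string {
	switch {
	case c.CareScore < 0.3:
		return "Negligence Adult"
	case c.SnackCount > 20:
		return "Chubby Adult"
	case c.Discipline < 30:
		return "Naughty Adult"
	default:
		return "Healthy Adult"
	}
}

func main() {
	fmt.Println(adultForm(CareRecord{CareScore: 0.8, SnackCount: 5, Discipline: 60}))
}
```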
Keep these stats healthy (above 30) to keep your pet happy:
- Hunger 🍽️ - Feed meals and snacks
- Happiness 😊 - Play games and give praise
- Energy ⚡ - Let your pet sleep when tired
- Hygiene 🧼 - Clean regularly to prevent sickness
- Health ❤️ - Give medicine when sick
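The decay-and-clamp loop behind these stats can be sketched as follows. The per-tick decay rates are made up for illustration; only the 0-100 range and the 30 warning threshold come from the README.

```go
package main

import "fmt"

// Stats holds the five care stats on a 0-100 scale.
type Stats struct {
	Hunger, Happiness, Energy, Hygiene, Health int
}

// clamp keeps a stat inside the 0-100 range.
func clamp(v int) int {
	if v < 0 {
		return 0
	}
	if v > 100 {
		return 100
	}
	return v
}

// Tick applies one simulation tick of decay (illustrative rates),
// then clamps each stat.
func (s *Stats) Tick() {
	s.Hunger = clamp(s.Hunger - 2)
	s.Happiness = clamp(s.Happiness - 1)
	s.Energy = clamp(s.Energy - 1)
	s.Hygiene = clamp(s.Hygiene - 1)
}

// NeedsAttention reports whether any stat fell below the 30 threshold.
func (s Stats) NeedsAttention() bool {
	return s.Hunger < 30 || s.Happiness < 30 || s.Energy < 30 ||
		s.Hygiene < 30 || s.Health < 30
}

func main() {
	s := Stats{Hunger: 31, Happiness: 80, Energy: 80, Hygiene: 80, Health: 80}
	s.Tick()
	fmt.Println(s.Hunger, s.NeedsAttention()) // 29 true
}
```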
| Action | Effect |
|---|---|
| Feed Meal | +30 Hunger, +5 Happiness, +1 Weight |
| Feed Snack | +10 Hunger, +15 Happiness, chance +1 Weight |
| Give Treat | +20 Happiness, +5 Hunger, high chance +1 Weight |
| Play | +25 Happiness, -20 Energy, -10 Hunger |
| Exercise | +15 Happiness, -25 Energy, -15 Hunger, +5 Health, +3 Discipline |
| Explore | +10 Happiness, -10 Energy, random encounters |
| Train | +8 Discipline, -15 Energy |
| Clean | +40 Hygiene, +5 Happiness |
| Sleep | Restores Energy over time |
| Medicine | Cures sickness, +20 Health |
| Praise | +10 Happiness, -2 Discipline |
| Scold | +10 Discipline, -15 Happiness |
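In code, the table above amounts to a set of per-action stat deltas applied with clamping. A minimal sketch, assuming a `Delta` struct and action keys that are not TamaLLM's actual identifiers:

```go
package main

import "fmt"

// Delta describes an action's stat effects; the values mirror a few
// rows of the table above (random effects like weight gain omitted).
type Delta struct {
	Hunger, Happiness, Energy, Health, Discipline, Hygiene int
}

var actions = map[string]Delta{
	"feed_meal": {Hunger: +30, Happiness: +5},
	"play":      {Hunger: -10, Happiness: +25, Energy: -20},
	"exercise":  {Hunger: -15, Happiness: +15, Energy: -25, Health: +5, Discipline: +3},
	"clean":     {Happiness: +5, Hygiene: +40},
	"scold":     {Happiness: -15, Discipline: +10},
}

// apply adds a delta to a stat, clamped to the game's 0-100 range.
func apply(stat, delta int) int {
	v := stat + delta
	if v < 0 {
		v = 0
	}
	if v > 100 {
		v = 100
	}
	return v
}

func main() {
	hunger := 50
	hunger = apply(hunger, actions["feed_meal"].Hunger)
	fmt.Println(hunger) // 80
}
```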
Press `a` or start with `--auto` to enable LLM-driven automatic care:
- The LLM will automatically perform actions when needed
- It uses the `get_suggested_action` tool to determine the best action
- Great for letting your pet care for itself while you watch!
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
| `OLLAMA_MODEL` | `qwen3:1.7b` | Model to use |
| `OLLAMA_TIMEOUT` | `10s` | Request timeout |
| `OLLAMA_THINK` | `off` | Thinking mode (`off`/`low`/`medium`/`high`) |
```
go run ./cmd/tamallm [flags]

Flags:
  --new              Start a new game (ignore existing save)
  --model NAME       Override Ollama model
  --host URL         Override Ollama host URL
  --tick-ms N        Simulation tick interval (default: 1000)
  --no-llm           Run without LLM (use canned messages)
  --auto             Start with LLM auto mode enabled
  --tts              Enable Text-to-Speech (requires Supertonic-2)
  --tts-voice STYLE  TTS voice style (M1-M5, F1-F5; default: F3)
```

TamaLLM supports text-to-speech using the Supertonic-2 model from Supertone. This allows your pet to "speak" its messages out loud!
1. Install Python 3.8+ and pip.

2. Install the Supertonic package:

   ```bash
   pip install supertonic sounddevice soundfile
   ```

   Using uv (recommended):

   ```bash
   uv venv
   uv pip install supertonic sounddevice soundfile
   ```

3. Run with TTS enabled:

   ```bash
   go run ./cmd/tamallm --tts
   ```

TamaLLM automatically detects and uses `uv run` when packages are installed via `uv pip install` in a `uv venv` environment.
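That detection could look roughly like the following; this is a simplified guess at the approach (the real check, which inspects the uv environment, is likely more involved), and the function name is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
)

// pickPythonRunner prefers `uv run python` when uv is on PATH,
// otherwise falls back to plain python3.
func pickPythonRunner() (name string, args []string) {
	if _, err := exec.LookPath("uv"); err == nil {
		return "uv", []string{"run", "python"}
	}
	return "python3", nil
}

func main() {
	name, args := pickPythonRunner()
	fmt.Println(name, args)
}
```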
| Style | Description |
|---|---|
| M1-M5 | Male voices with varying tones |
| F1-F5 | Female voices with varying tones |
Example with a custom voice:

```bash
go run ./cmd/tamallm --tts --tts-voice M2
```

| Model | Size | Notes |
|---|---|---|
| `qwen3:1.7b` | ~1.5GB | Default, fast, good quality |
| `qwen3:4b` | ~2.5GB | Better responses, still fast |
| `qwen3:8b` | ~5.2GB | High quality, requires more RAM |
```
TamaLLM/
├── cmd/tamallm/          # Main entry point
│   └── main.go
├── internal/
│   ├── game/             # Pure game engine
│   │   ├── state.go      # Pet state struct
│   │   ├── engine.go     # Game logic
│   │   └── engine_test.go
│   ├── tui/              # Terminal UI
│   │   ├── model.go      # Bubble Tea model
│   │   └── views.go      # View rendering
│   ├── llm/              # Ollama integration
│   │   ├── client.go     # API client
│   │   └── client_test.go
│   ├── tts/              # Text-to-Speech (Supertonic-2)
│   │   ├── tts.go        # TTS client
│   │   └── tts_test.go
│   ├── storage/          # Save/load
│   │   ├── storage.go
│   │   └── storage_test.go
│   └── util/             # Helpers
│       ├── util.go
│       └── util_test.go
├── go.mod
├── go.sum
└── README.md
```
```bash
# Run all tests
go test ./...

# Run with coverage
go test -cover ./...

# Run specific package tests
go test ./internal/game/
```

```bash
# Build binary
go build -o tamallm ./cmd/tamallm

# Build with optimizations
go build -ldflags="-s -w" -o tamallm ./cmd/tamallm
```

The TUI follows Bubble Tea best practices:
- Never blocks `Update()` with network I/O
- Uses `tea.Cmd` for async work (ticks, LLM calls, saves)
- Uses `tea.Tick` for simulation ticks
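The non-blocking pattern can be shown without the Bubble Tea dependency by modeling `tea.Msg`/`tea.Cmd` as local types. This is a sketch of the pattern, not TamaLLM's actual `tui` code:

```go
package main

import "fmt"

// Msg and Cmd mirror the shapes of Bubble Tea's tea.Msg and tea.Cmd.
type Msg interface{}
type Cmd func() Msg

// llmReplyMsg carries a finished LLM response back into Update.
type llmReplyMsg struct{ text string }

// fetchLLMReply returns a Cmd instead of doing I/O directly: the
// runtime runs the Cmd on a goroutine and feeds the resulting Msg
// back into Update, so the update loop never blocks on the network.
func fetchLLMReply(prompt string) Cmd {
	return func() Msg {
		// Network I/O would happen here, off the update loop.
		return llmReplyMsg{text: "reply to: " + prompt}
	}
}

func main() {
	cmd := fetchLLMReply("hello")
	// The framework, not Update, executes the Cmd:
	msg := cmd()
	fmt.Println(msg.(llmReplyMsg).text)
}
```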
The LLM integration is:
- Safe - Tool calls are validated and bounded
- Optional - Game works fine without LLM
- Action-driven - LLM can request actions through validated tools
- Interface-driven - Easy to mock for testing
When tool calling is enabled, the LLM can use these tools:
| Tool | Description | Permissions |
|---|---|---|
| `rand_int(min, max)` | Generate random number | Read-only |
| `propose_event(type, severity, desc)` | Propose game event | Validated by engine |
| `set_mood(mood, emoji, intensity)` | Set UI mood display | UI-only |
| `summarize_state()` | Get state summary | Read-only |
| `request_action(action)` | Request to perform an action | Validated and executed |
| `get_valid_actions()` | Get list of currently valid actions | Read-only |
| `get_suggested_action()` | Get suggested action based on needs | Read-only |
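The "validated" permission amounts to checking every `request_action` call against the engine's current valid-action list before anything executes. A minimal sketch with hypothetical names (not TamaLLM's actual validation code):

```go
package main

import "fmt"

// validateAction rejects a requested action unless the engine
// currently lists it as valid, so the LLM stays bounded.
func validateAction(requested string, valid []string) error {
	for _, a := range valid {
		if a == requested {
			return nil
		}
	}
	return fmt.Errorf("tool call rejected: %q is not currently valid", requested)
}

func main() {
	valid := []string{"feed_meal", "clean", "sleep"}
	fmt.Println(validateAction("clean", valid)) // <nil>
	fmt.Println(validateAction("fly", valid))
}
```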
Saves are stored in XDG-compliant locations:
- Linux: `~/.local/share/tamallm/tamallm_save.json`
- macOS: `~/Library/Application Support/tamallm/tamallm_save.json`
Save format is versioned JSON for forward compatibility.
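A versioned save typically means a top-level version field checked on load. The schema below is an illustrative guess, not TamaLLM's actual save format:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// currentVersion is the newest save schema this build understands
// (value is illustrative).
const currentVersion = 1

// SaveFile keeps the pet payload as raw JSON so unknown fields from
// other versions survive a load/save round trip.
type SaveFile struct {
	Version int             `json:"version"`
	Pet     json.RawMessage `json:"pet"`
}

// loadSave parses a save and refuses files written by a newer
// version rather than silently misreading them.
func loadSave(data []byte) (*SaveFile, error) {
	var s SaveFile
	if err := json.Unmarshal(data, &s); err != nil {
		return nil, err
	}
	if s.Version > currentVersion {
		return nil, fmt.Errorf("save version %d is newer than supported version %d", s.Version, currentVersion)
	}
	return &s, nil
}

func main() {
	s, err := loadSave([]byte(`{"version":1,"pet":{"name":"Tama"}}`))
	fmt.Println(s.Version, err)
}
```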
MIT License - see LICENSE file for details.
- Built with Bubble Tea by Charm
- LLM runtime by Ollama
- Inspired by the classic Tamagotchi virtual pets