# Scribe

Scribe is a next-generation component that writes Home Assistant states and events to a TimescaleDB database.

## Why Scribe?
Scribe is built differently. Unlike other integrations that rely on synchronous drivers or the default recorder, Scribe uses asyncpg, a high-performance asynchronous PostgreSQL driver. This allows it to handle massive amounts of data without blocking Home Assistant's event loop. It's designed for stability, speed, and efficiency.
- Features
- Installation
- Configuration
- Migration
- Statistics Sensors
- Services
- Dashboard / View
- Troubleshooting
- Technical Data
- License
## Features

- 🚀 Async-First Architecture: Built on `asyncpg` for non-blocking, high-throughput writes.
- 📦 TimescaleDB Native: Automatically manages Hypertables and Compression Policies.
- 📊 Granular Statistics: Optional sensors for monitoring chunk counts, compression ratios, and I/O performance.
- 🔒 Secure: Full SSL/TLS support.
- 📈 States & Events: Records all state changes and events to the `states` and `events` tables.
- 👥 User Context: Automatically syncs Home Assistant users to the database for rich context.
- 🧩 Entity Metadata: Automatically syncs the entity registry (names, platforms, etc.) to the `entities` table.
- 🏠 Area & Device Context: Automatically syncs areas and devices to the `areas` and `devices` tables.
- 🔌 Integration Info: Automatically syncs integration config entries to the `integrations` table.
- 🎯 Smart Filtering: Include/exclude by domain, entity, entity glob, or attribute.
- ✅ 100% Test Coverage: Robust and reliable.
## Installation

### HACS (Recommended)

1. Add this repository as a custom repository in HACS.
2. Search for "Scribe" and install.
3. Restart Home Assistant.

### Manual

1. Copy the `custom_components/scribe` folder to your Home Assistant `custom_components` directory.
2. Restart Home Assistant.
You need a running TimescaleDB instance. I recommend PostgreSQL 17 or 18.
If you are running Home Assistant OS, I recommend using the TimescaleDB Add-on.
```bash
# High Availability (Recommended)
docker run -d --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg18

# Standard
docker run -d --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb:pg18
```

Create the database and user:

```sql
CREATE DATABASE scribe;
CREATE USER scribe WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE scribe TO scribe;
\c scribe
CREATE EXTENSION IF NOT EXISTS timescaledb;
GRANT ALL ON SCHEMA public TO scribe;
```

## Configuration

Minimal configuration in `configuration.yaml`:

```yaml
scribe:
  db_url: postgresql://scribe:password@192.168.1.10:5432/scribe
```
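Before relying on Scribe's connection, you can verify the database from `psql` using the same credentials as in `db_url` (values taken from the setup example above); a version number in the result confirms the TimescaleDB extension is installed:

```sql
-- Run inside: psql "postgresql://scribe:password@192.168.1.10:5432/scribe"
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
```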
<details>
<summary>Show Full YAML Configuration</summary>

```yaml
scribe:
  db_url: postgresql://scribe:password@192.168.1.10:5432/scribe
  db_ssl: false
  chunk_time_interval: "7 days"
  compress_after: "7 days"
  record_states: true
  record_events: false
  batch_size: 500
  flush_interval: 5
  max_queue_size: 10000
  buffer_on_failure: true
  enable_stats_io: false
  enable_stats_chunk: false
  enable_stats_size: false
  stats_chunk_interval: 60
  stats_size_interval: 60
  include_domains: []
  include_entity_globs: []
  exclude_domains: []
  exclude_entities: []
  exclude_entity_globs: []
  exclude_attributes: []
  # Optional: Disable specific metadata tables (default: true)
  enable_table_areas: true
  enable_table_devices: true
  enable_table_entities: true
  enable_table_integrations: true
  enable_table_users: true
```

</details>
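For context on what `chunk_time_interval` and `compress_after` control, the corresponding TimescaleDB operations look roughly like the following. This is an illustrative sketch, not necessarily the exact statements Scribe issues:

```sql
-- Partition the states table into 7-day chunks
SELECT create_hypertable('states', 'time', chunk_time_interval => INTERVAL '7 days');

-- Enable compression and compress chunks older than 7 days
ALTER TABLE states SET (timescaledb.compress);
SELECT add_compression_policy('states', INTERVAL '7 days');
```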
<details>
<summary>Show Parameter Reference</summary>

| Parameter | Description |
|---|---|
| `db_url` | **Required.** The connection string for your TimescaleDB database. |
| `db_ssl` | Enable SSL/TLS for the database connection. |
| `chunk_time_interval` | The duration of each data chunk in TimescaleDB. |
| `compress_after` | How old data must be before it is compressed. |
| `record_states` | Whether to record state changes. |
| `record_events` | Whether to record events. |
| `batch_size` | Number of items to buffer before writing to the database. |
| `flush_interval` | Maximum time (in seconds) to wait before flushing the buffer. |
| `max_queue_size` | Maximum number of items to hold in memory before dropping new ones. |
| `buffer_on_failure` | If true, keeps data in memory while the DB is unreachable (up to `max_queue_size`). |
| `enable_stats_io` | Enable real-time writer performance sensors (no DB queries). |
| `enable_stats_chunk` | Enable chunk count statistics sensors (queries the DB). |
| `enable_stats_size` | Enable storage size statistics sensors (queries the DB). |
| `stats_chunk_interval` | Interval (in minutes) between chunk statistics updates. |
| `stats_size_interval` | Interval (in minutes) between size statistics updates. |
| `include_domains` | List of domains to include. |
| `include_entities` | List of specific entities to include. |
| `include_entity_globs` | List of entity patterns to include (e.g. `sensor.weather_*`). |
| `exclude_domains` | List of domains to exclude. |
| `exclude_entities` | List of specific entities to exclude. |
| `exclude_entity_globs` | List of entity patterns to exclude (e.g. `switch.kitchen_*`). |
| `exclude_attributes` | List of attributes to exclude from the attributes column. |
| `enable_table_areas` | Enable creation and sync of the `areas` table. |
| `enable_table_devices` | Enable creation and sync of the `devices` table. |
| `enable_table_entities` | Enable creation and sync of the `entities` table. |
| `enable_table_integrations` | Enable creation and sync of the `integrations` table. |
| `enable_table_users` | Enable creation and sync of the `users` table. |

</details>
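The glob options use shell-style patterns. As a mental model, include/exclude matching might be evaluated like the sketch below; the precedence shown (excludes win, empty include lists mean "include everything") is an assumption for illustration, not Scribe's documented behavior:

```python
from fnmatch import fnmatch

def should_record(entity_id: str,
                  include_domains=(), include_entities=(), include_entity_globs=(),
                  exclude_domains=(), exclude_entities=(), exclude_entity_globs=()) -> bool:
    """Illustrative filter: excludes win over includes;
    empty include lists mean 'include everything'."""
    domain = entity_id.split(".", 1)[0]
    # Excludes are checked first and always win
    if entity_id in exclude_entities or domain in exclude_domains:
        return False
    if any(fnmatch(entity_id, g) for g in exclude_entity_globs):
        return False
    # With no include rules, everything not excluded is recorded
    has_includes = bool(include_domains or include_entities or include_entity_globs)
    if not has_includes:
        return True
    return (domain in include_domains
            or entity_id in include_entities
            or any(fnmatch(entity_id, g) for g in include_entity_globs))

# Example: record weather sensors, drop kitchen switches
print(should_record("sensor.weather_temp", include_entity_globs=["sensor.weather_*"]))   # True
print(should_record("switch.kitchen_light", exclude_entity_globs=["switch.kitchen_*"]))  # False
```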
## Migration

Scribe provides helper scripts to backfill data from various sources.
<details>
<summary>Show InfluxDB Migration Guide</summary>

1. Navigate to the `migration` directory:

   ```bash
   cd migration
   ```

2. Install dependencies:

   ```bash
   pip install influxdb-client psycopg2-binary python-dotenv
   ```

3. Configure the migration:

   ```bash
   cp .env.example .env
   nano .env  # Fill in [InfluxDB Configuration], [Scribe Configuration], and [Migration Settings]
   ```

4. Run the migration:

   ```bash
   python3 influx2scribe.py
   ```

</details>
<details>
<summary>Show LTSS Migration Guide</summary>

1. Navigate to the `migration` directory:

   ```bash
   cd migration
   ```

2. Install dependencies:

   ```bash
   pip install psycopg2-binary python-dotenv
   ```

3. Configure the migration:

   ```bash
   cp .env.example .env
   nano .env  # Fill in [LTSS Configuration], [Scribe Configuration], and [Migration Settings]
   ```

4. Run the migration:

   ```bash
   python3 ltss2scribe.py
   ```

</details>
## Statistics Sensors

Enable sensors by setting their flags in your configuration.

<details>
<summary>Show IO Sensors</summary>

Real-time metrics from the writer (no DB queries).

</details>

<details>
<summary>Show Chunk Sensors</summary>

Chunk counts (updated every `stats_chunk_interval` minutes).

</details>

<details>
<summary>Show Size Sensors</summary>

Storage usage in bytes (updated every `stats_size_interval` minutes).

</details>
## Services

### scribe.flush

Force an immediate flush of buffered data to the database.

```yaml
service: scribe.flush
```

### scribe.query

Execute a read-only SQL query against the TimescaleDB database.

Parameters:

- `sql` (Required): The SQL query to execute. Must be a `SELECT` statement.

Returns: A list of rows, where each row is a dictionary of column names and values.

Example:

```yaml
service: scribe.query
data:
  sql: "SELECT * FROM states ORDER BY time DESC LIMIT 5"
response_variable: query_result
```

## Troubleshooting

**Data is not being written?**

- Check Home Assistant logs for errors
- Verify the database connection with `psql -U scribe -h host -d scribe`
- Enable `enable_stats_io: true` to monitor buffer and writes
- Check `sensor.scribe_buffer_size`; if it is growing, there is a DB issue

**High memory usage?**

- Reduce `max_queue_size`
- Reduce `flush_interval` for faster writes
- Check `sensor.scribe_buffer_size`
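As a mental model for the buffer behavior behind `batch_size`, `flush_interval`, and `max_queue_size` (and why `sensor.scribe_buffer_size` grows when the database is slow), here is a minimal synchronous sketch; Scribe's actual writer is asynchronous and differs in detail:

```python
import time

class WriteBuffer:
    """Simplified write buffer: flushes when batch_size is reached or
    flush_interval seconds have passed; drops new items past max_queue_size."""

    def __init__(self, write_fn, batch_size=500, flush_interval=5, max_queue_size=10000):
        self.write_fn = write_fn
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.max_queue_size = max_queue_size
        self.queue = []
        self.last_flush = time.monotonic()

    def add(self, item):
        if len(self.queue) >= self.max_queue_size:
            return False  # queue full: the new item is dropped
        self.queue.append(item)
        if (len(self.queue) >= self.batch_size
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()
        return True

    def flush(self):
        if self.queue:
            self.write_fn(self.queue)  # one batched DB write instead of many small ones
            self.queue = []
        self.last_flush = time.monotonic()

written = []
buf = WriteBuffer(written.extend, batch_size=3, flush_interval=60, max_queue_size=5)
for i in range(4):
    buf.add(i)
print(written)    # first three items flushed as one batch
print(buf.queue)  # the fourth item is still buffered
```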
You can configure SSL certificates for the database connection. This is useful if your database requires client certificate authentication or if you need to verify the server's certificate.
- SSL Root Certificate: Path to the CA certificate (e.g., `certs/root.crt`).
- SSL Certificate File: Path to the client certificate (e.g., `certs/client.crt`).
- SSL Key File: Path to the client key (e.g., `certs/client.key`).
**Note on Paths:** You can use absolute paths or paths relative to your Home Assistant configuration directory. For example, if you create a `certs` folder in your config directory, you can simply use `certs/client.crt`.
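A configuration with client-certificate authentication might look like the fragment below. Note that the `ssl_*` option names here are illustrative placeholders, not confirmed option names; check the integration's documentation for the exact keys:

```yaml
scribe:
  db_url: postgresql://scribe:password@192.168.1.10:5432/scribe
  db_ssl: true
  # Placeholder option names; paths may be absolute or relative to the config directory
  ssl_root_cert: certs/root.crt
  ssl_cert: certs/client.crt
  ssl_key: certs/client.key
```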
**Statistics sensors not updating?**

- Ensure the coordinator flags are enabled (`enable_stats_chunk`, `enable_stats_size`)
- Check that the update intervals are not too long
- View Home Assistant logs for coordinator errors
Please open an issue on GitHub with your logs and configuration. I would be happy to help!
## Dashboard / View

A pre-configured Lovelace view containing all useful Scribe sensors (Database Statistics, Compression Ratios, I/O Performance) is available in this repository.
To add it to your Home Assistant dashboard:
1. Open your dashboard and click "Edit Dashboard" (pencil icon).
2. Click the + button to add a new View.
3. Select YAML Mode (or "Edit in YAML").
4. Copy the content of `lovelace_scribe_view.yaml` and paste it into the editor.
## Technical Data

For detailed technical information about the architecture, database schema, and data flow, please refer to the Technical Documentation.
## License

MIT License. See the LICENSE file for details.