Raspberry Pi time server project with Chrony, PPS, Teensy telemetry collection, dashboard, logging, daily reporting, and backup automation.
- Raspberry Pi 5
- Teensy
- SparkFun GNSS ZED-F9T
- Swift Piksi
- FE-5680A rubidium standard
- TimeHAT
- Cisco IE-2000-4TS-B
- Chrony is currently operating in a stable PPS-only configuration
- PPS is selected as the active Chrony reference source
- Raspberry Pi is serving as a stratum 1 time server when healthy
- ZED-F9T is the primary GNSS/timing truth source
- ZED PPS feeds both Raspberry Pi and Teensy
- Teensy is the timing measurement and telemetry engine
- Piksi is retained only as a PPS comparison reference
- Dashboard ZED/GNSS data is restored through `gpsd-direct.service` and `zed-monitor.service`
- Teensy telemetry collector service is active
- Teensy Dashboard 2 service runs on port 8082
- Teensy logger service is active
- Cron-based aggregation, plotting, pruning, backup, and email reporting are in place
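The cron-based automation above can be sketched as a crontab fragment. Every script name and schedule below is a placeholder for illustration, not the actual installed crontab on the Pi:

```
# Hypothetical crontab sketch; paths and times are placeholders.
*/5 * * * *  /home/pi/timing/aggregate.sh     # roll raw telemetry into summaries
10 * * * *   /home/pi/timing/make_plots.sh    # regenerate dashboard plots
30 3 * * *   /home/pi/timing/prune_db.sh      # prune old rows from SQLite
45 3 * * *   /home/pi/timing/backup.sh        # copy DB/config to the backup target
0 6 * * *    /home/pi/timing/daily_report.sh  # email the daily report
```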
CURRENT STABLE BASELINE AFTER RECOVERY

This is the current known-good operational state and should be treated as the rollback-safe baseline unless I explicitly choose to resume Chrony coarse-time engineering.
Chrony / NTP state

- chrony is currently operating in a PPS-only configuration for stable production use
- chrony is selecting PPS as the reference source
- the Raspberry Pi is back at stratum 1 when healthy
- chrony may briefly show "unsynchronised" or a network source immediately after restart, then reacquires PPS after a short settling period
- the current stable chrony intent is operational reliability, not coarse-time experimentation
Important chrony note

The original paired configuration using a coarse GNSS source plus PPS was broken because the coarse-time feed into chrony was not being populated in a usable way. For now, the stable operational choice is PPS-only chrony. Do not casually reintroduce `lock NMEA` / SHM / SOCK refclock edits unless specifically working on the coarse-time project.
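For reference, a PPS-only chrony configuration can look roughly like the fragment below. This is a minimal sketch, not the live config: the PPS device path and the pool line are assumptions, and a network source is included because a raw PPS refclock still needs a coarse source to number the seconds (which matches the observed brief network-source selection after restart):

```
# Hypothetical PPS-only chrony.conf fragment; /dev/pps0 and the pool are assumptions.
refclock PPS /dev/pps0 refid PPS

# Coarse seconds numbering for the raw PPS signal; chrony may briefly
# follow this after a restart before locking back onto PPS.
pool pool.ntp.org iburst
```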
Current PPS-only chrony is an operational baseline, not the final architecture; FE and TimeHAT code should be written as additive subsystems that preserve a smooth path to later integrated timing modes.
The current safe rollback baseline is:
- Chrony PPS-only stable
- Dashboard GNSS/ZED feed restored
- ZED coarse-time integration for Chrony is still under development
At present, the production-stable timing path is PPS-only in Chrony.
The attempted paired PPS + ZED coarse-time Chrony configuration is not yet the current baseline and should be treated as an engineering workstream, not the default runtime configuration.
- ZED-F9T PPS -> Raspberry Pi PPS input -> Chrony -> NTP service
- ZED-F9T PPS -> Teensy
- Piksi PPS -> Teensy comparison input
- ZED USB -> Raspberry Pi
- zed-monitor -> `/home/pi/timing/zed_status.json`
- `send_zed_to_teensy.py` -> UDP bridge -> Teensy
- Teensy -> UDP telemetry -> collector
- Collector -> SQLite database
- Dashboard -> reads SQLite-backed API data
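The Teensy -> UDP -> collector -> SQLite leg of the path can be sketched as below. The port, database path, packet format (JSON), and field names (`ts_utc`, `pps_ns`) are all assumptions for illustration, not the real Teensy packet layout:

```python
import json
import socket
import sqlite3

UDP_PORT = 9999           # hypothetical collector port
DB_PATH = "telemetry.db"  # hypothetical database path

SCHEMA = """
CREATE TABLE IF NOT EXISTS telemetry (
    ts_utc TEXT,    -- timestamp carried in the packet (assumed field)
    pps_ns INTEGER  -- PPS offset in nanoseconds (assumed field)
)
"""

def store_packet(conn, raw):
    """Parse one UDP datagram (JSON assumed) and insert it into SQLite."""
    msg = json.loads(raw.decode("utf-8"))
    conn.execute(
        "INSERT INTO telemetry (ts_utc, pps_ns) VALUES (?, ?)",
        (msg["ts_utc"], msg["pps_ns"]),
    )
    conn.commit()

def run_collector():
    """Bind the UDP socket and store every incoming telemetry datagram."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(SCHEMA)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", UDP_PORT))
    while True:
        raw, _addr = sock.recvfrom(2048)
        store_packet(conn, raw)
```

Keeping the parse/insert step in its own function (`store_packet`) lets it be tested without a live socket.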
- ZED is the single GNSS truth source
- Teensy is the timing measurement source
- Piksi is only a PPS comparison reference
- Dashboard should prefer DB-driven values
- Avoid mixed-source GNSS telemetry
- Do not reintroduce direct Piksi GNSS telemetry into the dashboard
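A DB-driven dashboard read can be as simple as the sketch below; the table and column names mirror the hypothetical collector schema above and are assumptions, not the live database layout:

```python
import sqlite3

def latest_telemetry(db_path):
    """Return the newest telemetry row as a dict, or None if the table is empty.

    Table/column names ("telemetry", ts_utc, pps_ns) are assumed for this sketch.
    """
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows become name-addressable
    row = conn.execute(
        "SELECT ts_utc, pps_ns FROM telemetry ORDER BY rowid DESC LIMIT 1"
    ).fetchone()
    conn.close()
    return dict(row) if row else None
```

Serving values from the database like this (rather than polling devices directly) is what keeps the dashboard single-source and avoids mixed-source GNSS telemetry.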
- `snapshot/` = baseline capture of the currently working system
- `snapshot/systemd/` = active service files
- `snapshot/chrony/` = Chrony configuration
- `snapshot/scripts/` = key support scripts
- `snapshot/timing/` = timing/reporting source code
- `snapshot/teensy_appliance/` = telemetry collector/dashboard source
- `snapshot/teensy_dash2/` = active dashboard source
- `chrony.service`
- `zed-monitor.service`
- `gpsd-direct.service`
- `zed-splitter.service`
- `zed-to-teensy.service`
- `teensy-collector.service`
- `teensy-dash2.service`
- `teensy_logger.service`
- `ser2net.service`
- This repository stores source/configuration snapshots, not runtime databases, logs, plots, or virtual environments.
- Always confirm live runtime paths before editing files.
- There may be differences between live runtime files and repo copies.
- Preferred workflow is:
- confirm live file path from systemd/runtime usage
- edit live file if an immediate runtime change is needed
- copy the live file back into the repo path
- regenerate the snapshot
- commit and push
`~/time-server/rebuild/dump_state.sh > ~/time-server/system_config/STATE_SNAPSHOT.txt`