Interactive timelines showing how compute, energy, coordination, memory, and adoption compound into civilizational acceleration.
The root inventory is plots_manifest.json. Homepage cards, README links, build ordering, dashboard lanes, and validation all read from that manifest.
- AI Compute Timeline
- Adoption Timeline
- Energetic Scaling
- Civilization Scaling
- Energy Leverage
- Model Sizes
- AI Benchmark Progress
- Cost to Train
- Unified Dashboard
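Because everything reads from the manifest, downstream tooling can derive the inventory from it directly. A minimal sketch of that pattern, with an assumed schema (the `plots`, `slug`, `title`, and `status` field names here are illustrative, not the real contract in `plots_manifest.json`):

```python
import json

# Hypothetical manifest shape -- check plots_manifest.json for the real schema.
sample = json.loads("""
{
  "plots": [
    {"slug": "ai-compute-timeline", "title": "AI Compute Timeline", "status": "published"},
    {"slug": "new-plot", "title": "New Plot", "status": "draft"}
  ]
}
""")

def published_slugs(manifest):
    """Return slugs of entries that should surface on the homepage and dashboard."""
    return [p["slug"] for p in manifest["plots"] if p.get("status") == "published"]

print(published_slugs(sample))  # draft entries are filtered out
```

The point of the single manifest is exactly this: one filter like `published_slugs` keeps the homepage, README, build order, and dashboard lanes in sync.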
Training compute from early electronic computing to frontier AI, with proxies and speculative projections labeled separately.
Hero stat: 10^27+ FLOPs. Data confidence: mixed.
- Interactive: AI Compute Timeline
- Static: PNG | SVG
- Data: ai-compute-timeline/data/ai_milestones.csv
- Metadata: ai-compute-timeline/data/meta.json
- Details: ai-compute-timeline/
Time-to-scale proxies across computing, connectivity, mobile, cloud, and AI paradigms.
Hero stat: 60x faster. Data confidence: mixed.
- Interactive: Adoption Timeline
- Static: PNG | SVG
- Data: adoption-timeline/data/tech_adoption.csv
- Metadata: adoption-timeline/data/meta.json
- Details: adoption-timeline/
Biology, hardware efficiency, AI training compute, and foraging energetics compared with clean source datasets.
Hero stat: 10^6x+ efficiency. Data confidence: mixed.
- Interactive: Energetic Scaling
- Static: PNG | SVG
- Data: energetic-scaling/data/scaling_data.csv
- Metadata: energetic-scaling/data/meta.json
- Details: energetic-scaling/
Five civilizational lanes: energy, coordination, memory, replication, and latency over log-time.
Hero stat: 5 lanes. Data confidence: mixed.
- Interactive: Civilization Scaling
- Static: PNG | SVG
- Data: civilization-scaling/data/civilization_metrics.csv
- Metadata: civilization-scaling/data/meta.json
- Details: civilization-scaling/
Per-person energy command relative to the metabolic baseline, with period anchors labeled explicitly.
Hero stat: 17x body energy. Data confidence: high.
- Interactive: Energy Leverage
- Static: PNG | SVG
- Data: energy-leverage-per-person/data/energy_leverage_datapoints.csv
- Metadata: energy-leverage-per-person/data/meta.json
- Details: energy-leverage-per-person/
Language model parameter counts over time, separating disclosed counts from estimates and unreleased projections.
Hero stat: 1.5B -> 5T params. Data confidence: speculative.
- Interactive: Model Sizes
- Static: PNG | SVG
- Data: model-sizes/data/llm_model_sizes.csv
- Metadata: model-sizes/data/meta.json
- Details: model-sizes/
Benchmark progress against human baselines across knowledge, coding, software engineering, and reasoning tasks.
Hero stat: 4 benchmark lanes. Data confidence: mixed.
- Interactive: AI Benchmark Progress
- Static: PNG | SVG
- Data: ai-benchmark-progress/data/benchmark_data.csv
- Metadata: ai-benchmark-progress/data/meta.json
- Details: ai-benchmark-progress/
Training cost, FLOPs, and capability over time, showing the efficiency paradox at the frontier.
Hero stat: $/FLOP collapse. Data confidence: mixed.
- Interactive: Cost to Train
- Static: PNG | SVG
- Data: cost-to-train/data/training_costs.csv
- Metadata: cost-to-train/data/meta.json
- Details: cost-to-train/
A synchronized overview of the atlas inventory using the same manifest as the homepage, README, build, and validator.
Hero stat: 9 atlas entries. Data confidence: mixed.
- Interactive: Unified Dashboard
- Data: plots_manifest.json
- Metadata: plots_manifest.json
- Details: dashboard/
- `ai-compute-timeline/data/ai_milestones.csv` uses normalized fields: `year`, `event`, `category`, `value_numeric`, `value_low`, `value_high`, `value_unit`, `estimate_status`, `source_id`, `confidence`, `display_label`, `notes`.
- `adoption-timeline/data/tech_adoption.csv` includes `adoption_metric_type`, `comparability_level`, `source_id`, `confidence`, and notes so unlike adoption proxies are not treated as perfectly comparable.
- Energetic Scaling keeps comparison-level data in `scaling_data.csv` and splits clean source contracts into `biology_neural_scaling.csv`, `hardware_efficiency.csv`, `ai_training_flops.csv`, and `foraging_lht.csv`.
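A schema like this can be enforced mechanically. The repo's real checks live in `scripts/validate_all.py`; the standalone sketch below only compares CSV column names against the normalized field list given above:

```python
import csv
import io

# Normalized fields for ai_milestones.csv, as listed above.
REQUIRED_FIELDS = [
    "year", "event", "category", "value_numeric", "value_low", "value_high",
    "value_unit", "estimate_status", "source_id", "confidence",
    "display_label", "notes",
]

def missing_fields(csv_text):
    """Return required columns absent from the CSV header row."""
    header = next(csv.reader(io.StringIO(csv_text)))
    return [f for f in REQUIRED_FIELDS if f not in header]

# Toy header that forgot the uncertainty bounds.
sample = ("year,event,category,value_numeric,value_unit,"
          "estimate_status,source_id,confidence,display_label,notes\n")
print(missing_fields(sample))
```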
Each plot should follow this structure:
```
<plot-name>/
├── index.html
├── data/
│   ├── <slug>.csv
│   └── meta.json
├── output/
│   ├── *_interactive.html
│   ├── *_highres.png
│   └── *.svg
├── src/
│   ├── *.py
│   └── *_plotly.py
└── README.md
```
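The layout can be checked per plot directory. A standalone sketch of such a check (a stand-in for whatever `scripts/validate_all.py` actually verifies, not its real logic):

```python
from pathlib import Path

# Top-level entries every plot directory should contain, per the tree above.
REQUIRED = ["index.html", "data", "output", "src", "README.md"]

def structure_gaps(plot_dir):
    """Return required entries missing from a plot directory."""
    root = Path(plot_dir)
    return [name for name in REQUIRED if not (root / name).exists()]
```

Running `structure_gaps("ai-compute-timeline")` from the repo root would list anything the plot is missing; an empty list means the layout matches.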
```
python -m pip install -r requirements.txt
python build_all.py
python scripts/generate_homepage.py
python scripts/generate_readme_links.py
python scripts/validate_all.py
python scripts/check_links.py
python scripts/check_accessibility_static.py
```
- Create the standard plot directory structure.
- Add data, metadata, generator scripts, output paths, and README.
- Add the entry to `plots_manifest.json` with `status: "draft"` until outputs and sources pass validation.
- Run the build, generators, validators, link checker, and accessibility checker.
- Change `status` to `"published"` only when the plot should appear on the homepage and dashboard.
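A draft entry might look like the sketch below. Only `status: "draft"` is prescribed by the workflow above; every other field name is an assumption, so mirror an existing published entry in `plots_manifest.json` rather than this example:

```python
import json

# Hypothetical draft entry -- field names other than "status" are
# illustrative assumptions, not the manifest's actual schema.
draft_entry = {
    "slug": "my-new-plot",
    "title": "My New Plot",
    "status": "draft",  # flip to "published" only after validation passes
    "data": "my-new-plot/data/my_new_plot.csv",
    "metadata": "my-new-plot/data/meta.json",
}
print(json.dumps(draft_entry, indent=2))
```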
GitHub Pages deploys should run the same validation commands in CI before publishing. A failed build, broken relative link, missing alt text, stale output, or manifest mismatch should block deployment.
MIT