
docs: add AGENTS.md files with AI assistant guidelines#4507

Closed
bm1549 wants to merge 9 commits into main from
brian.marks/add-coding-guidelines-claude-md

Conversation

@bm1549
Contributor

@bm1549 bm1549 commented Mar 6, 2026

What does this PR do?

Adds AI assistant context files at three levels of the repository, with a CLAUDE.md
symlink at the root for Claude Code compatibility. Also adds a Code quality section
to CONTRIBUTING.md with the same rules for human contributors.

| File | Lines | Covers |
| --- | --- | --- |
| AGENTS.md | 134 | Repo-wide: commands, naming, constants, env vars, testing, PR template, definition of done |
| contrib/AGENTS.md | 93 | Library init, WithX options, API design, new package checklist, per-package testing |
| ddtrace/AGENTS.md | 56 | Lock discipline (checklocks), api.txt public surface, checklocks.sh usage |

Package-specific conventions live in the subdirectory closest to where they apply.
Agents load the root file always, then the nearest subdirectory file when working in
that area — so only relevant rules are in context for each task.

Motivation

AI coding assistants (Claude Code, GitHub Copilot, Codex, etc.) read AGENTS.md at the
start of every session. Without it, agents repeatedly make the same mistakes that show
up in PR review. Surfacing these rules upfront reduces reviewer burden.

The rules were derived from three sources:

  1. 6 months of PR review history — 383 merged PRs and ~580 review comments were
    analyzed to identify the most common reviewer requests. Top themes: lock discipline,
    naming consistency (noop not nop, fake vs mock), magic number extraction,
    test helper file suffixes, compile-time interface assertions, import grouping.

  2. Codebase inspection — deep read of existing patterns across contrib/,
    ddtrace/, and internal/ to ensure every rule reflects what the code actually does
    (option pattern, env var routing, contrib module structure, copyright headers).

  3. AGENTS.md best-practice research — guidance from Anthropic, OpenAI, and
    community sources on what makes an effective agents file: commands first, under 200
    lines per file, no content inferable from reading the code, progressive disclosure
    via subdirectory files. Leans on findings in Evaluating AGENTS.md.

AGENTS.md is the cross-tool open standard (Linux Foundation). CLAUDE.md is kept as
a symlink for backward compatibility with Claude Code.

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage. — N/A (documentation only)
  • System-Tests covering this feature have been added and enabled with the va.b.c-dev version tag. — N/A (documentation only)
  • There is a benchmark for any new code, or changes to existing code. — N/A (documentation only)
  • If this interacts with the agent in a new way, a system test has been added. — N/A (documentation only)
  • New code is free of linting errors. You can check this by running make lint locally. — N/A (documentation only)
  • New code doesn't break existing tests. You can check this by running make test locally. — N/A (documentation only)
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • All generated files are up to date. You can check this by running make generate locally. — N/A (documentation only)
  • Non-trivial go.mod changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild. Make sure all nested modules are up to date by running make fix-modules locally. — N/A (no go.mod changes)

Unsure? Have a question? Request a review!

Adds CLAUDE.md capturing instrumentation library design principles
surfaced during PR review (#4470):

- Never block in New*/Wrap*/Setup* — fetch external data asynchronously
- WithX options are user-facing API only; use unexported setters for
  internal state
- Order public function params from broadest to narrowest scope
- Keep implementation details out of public function signatures
- Avoid repeated type assertions; assign to a local variable
- Extract helpers for near-identical code blocks
- Note input-scale assumptions in normalization algorithms
- Always run make format before committing

Also adds a brief "Instrumentation library design principles" section
to CONTRIBUTING.md with a pointer to CLAUDE.md for human contributors.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
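The WithX-options rule above can be sketched in Go. All names here (config, WithServiceName, setClusterID) are illustrative, not the actual dd-trace-go API:

```go
package main

import "fmt"

// config holds settings. clusterID is internal state, so it
// deliberately has no public WithClusterID option.
type config struct {
	serviceName string
	clusterID   string
}

// Option is the user-facing functional option type.
type Option func(*config)

// WithServiceName is a user-facing option (the WithX pattern).
func WithServiceName(name string) Option {
	return func(c *config) { c.serviceName = name }
}

// setClusterID is an unexported setter for internal state,
// reachable only from code inside the package.
func (c *config) setClusterID(id string) { c.clusterID = id }

func newConfig(opts ...Option) *config {
	c := &config{serviceName: "default"}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := newConfig(WithServiceName("checkout"))
	c.setClusterID("cluster-1") // internal code path, not user API
	fmt.Println(c.serviceName, c.clusterID)
}
```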
@bm1549 bm1549 added the "AI Generated" label (Largely based on code generated by an AI or LLM. This label is the same across all dd-trace-* repos) Mar 6, 2026
@bm1549 bm1549 requested a review from genesor March 6, 2026 17:15
@datadog-datadog-prod-us1-2

datadog-datadog-prod-us1-2 bot commented Mar 6, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 59.13% (-0.12%)

🔗 Commit SHA: 8cc6580

@codecov

codecov bot commented Mar 6, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 59.97%. Comparing base (89047df) to head (8cc6580).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files

see 262 files with indirect coverage changes


bm1549 and others added 2 commits March 6, 2026 12:23
Remove rules that are too weak to prevent recurrence at generation time:
- Remove "run make format" (AI can't run tools; already in CONTRIBUTING.md)
- Remove "avoid repeated type assertions" (too subtle; caught at review not generation)
- Remove "input-scale assumptions" (was a reviewer question, not a design rule)

Strengthen remaining rules:
- Reframe parameter naming rule as "public functions must encapsulate their
  preconditions" — addresses the design cause, not just the naming symptom
- Reframe helper extraction as "inject dependencies instead of duplicating bodies"
  — more actionable: accept the constructed dependency, don't duplicate the body

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
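The "inject dependencies instead of duplicating bodies" reframing can be sketched as follows; the reporter types are hypothetical, chosen only to show the shape of the refactor:

```go
package main

import "fmt"

// reporter abstracts the part that differed between two
// near-identical function bodies.
type reporter interface {
	report(msg string) string
}

type stdoutReporter struct{}

func (stdoutReporter) report(msg string) string { return "stdout: " + msg }

type prefixReporter struct{ prefix string }

func (p prefixReporter) report(msg string) string { return p.prefix + msg }

// process is written once. Callers construct and inject the
// reporter rather than each duplicating the loop body inline.
func process(r reporter, items []string) []string {
	out := make([]string, 0, len(items))
	for _, it := range items {
		out = append(out, r.report(it))
	}
	return out
}

func main() {
	fmt.Println(process(stdoutReporter{}, []string{"a"}))
	fmt.Println(process(prefixReporter{prefix: "p:"}, []string{"a"}))
}
```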
CONTRIBUTING.md is the canonical source for all contributor guidance.
Move the full rules (with code examples) there so they apply to human
contributors and AI assistants alike.

CLAUDE.md is reduced to an operational pointer: repo layout, a summary
link to the CONTRIBUTING.md Code quality section, and a pre-push reminder.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
@pr-commenter

pr-commenter bot commented Mar 6, 2026

Benchmarks

Benchmark execution time: 2026-03-09 16:18:46

Comparing candidate commit 8cc6580 in PR branch brian.marks/add-coding-guidelines-claude-md with baseline commit 89047df in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 156 metrics, 8 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
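The significance rule described above reduces to a small predicate: the CI must lie entirely outside the threshold band around zero. This is an editorial sketch, not the benchmarking platform's actual code:

```go
package main

import "fmt"

// isSignificant reports whether a confidence interval over the
// relative difference of means (in percent) lies entirely outside
// ±threshold, mirroring the SIGNIFICANT_IMPACT_THRESHOLD rule.
func isSignificant(lower, upper, threshold float64) bool {
	return lower > threshold || upper < -threshold
}

func main() {
	// CI (-0.6%, +1.2%) straddles 0: not significant.
	fmt.Println(isSignificant(-0.6, 1.2, 1.0))
	// CI (1.3%, 3.1%) entirely above a 1% threshold: significant.
	fmt.Println(isSignificant(1.3, 3.1, 1.0))
}
```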

@bm1549 bm1549 marked this pull request as ready for review March 6, 2026 18:32
@bm1549 bm1549 requested a review from a team as a code owner March 6, 2026 18:32
- Fix Close() guideline: async work should be canceled on Close(), not
  awaited — users should not be blocked by background enrichment
- Fix unexported setter example: SetClusterID → setClusterID

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
@bm1549 bm1549 requested a review from mtoffl01 March 6, 2026 19:09
@darccio
Member

darccio commented Mar 6, 2026

Could we name it AGENTS.md? We have an item from the APM DCS AI Guild to introduce AGENTS.md, and this could be a step forward.

Also, we might consider learning more about using Claude rules for specific knowledge: https://code.claude.com/docs/en/memory#organize-rules-with-claude%2Frules%2F

Per reviewer suggestion, use AGENTS.md as the canonical filename
(aligns with the APM DCS AI Guild initiative to standardize on AGENTS.md).
CLAUDE.md is kept as a symlink for backward compatibility with Claude Code.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
@eliottness
Contributor

I would hope we can craft this ourselves instead of having an agent do it. This is a good start, but it is missing a lot of things IMO

@bm1549
Contributor Author

bm1549 commented Mar 6, 2026

Done — renamed to AGENTS.md and added a CLAUDE.md symlink for Claude Code backward compatibility. Thanks for the suggestion!

@bm1549
Contributor Author

bm1549 commented Mar 6, 2026

I would hope we can craft this ourselves instead of having an agent do it. This is a good start, but it is missing a lot of things IMO

Hand-crafting is certainly an option, but best practices are evolving so quickly that having the agent generate its own instructions (which humans then confirm) is often a solid workflow. This is especially true if you're using Claude and notice it going off the rails: you can prompt it with "update AGENTS.md so this mistake never happens again" and it will add a clean, clear rule there for you.

Member

@kakkoyun kakkoyun left a comment


Thanks for initializing this. DCS AI guild was planning to start this initiative.

I believe we should take a layered approach here. We should have dedicated agents files for contrib, tracer, profiler and instrumentation. They might have different concerns.

This would also solve the ownership problem of these files.

What do you think?

@mtoffl01
Contributor

mtoffl01 commented Mar 9, 2026

@kakkoyun I agree. The content here came from feedback I previously gave on a PR for contrib files, so we can start the “layered” approach by having this PR introduce the contrib section. The one thing I wasn’t sure about is whether this guidance should apply to both contrib and instrumentation, or if it only applies to contrib.

@eliottness I also agree there are other ways we could get to a more ideal state, but I still think this is a very good starting point and sets us up well for future iterations and additions.

Replaces the minimal stub with a full AGENTS.md grounded in three
research inputs:
- Analysis of 383 merged PRs and ~580 review comments (past 6 months),
  surfacing the top recurring reviewer requests
- Deep inspection of existing codebase patterns (option pattern,
  env var routing, contrib structure, testing conventions)
- AGENTS.md best-practice research from Anthropic, OpenAI, and
  community sources (commands-first, under 200 lines, no duplication)

Key additions:
- Full commands block (make test/unit vs make test, per-package go test,
  go.work update step, configinverter workflow)
- Lock discipline (Locked suffix, +checklocks annotations, no closures
  in locked regions)
- Naming rules (noop vs nop, fake vs mock, reflect.Pointer)
- Compile-time interface assertion requirement
- Constants rule (no magic numbers)
- Contrib package checklist (module path /v2 in declaration, not
  directory; orchestrion.yml; example_test.go; copyright headers)
- Testing conventions (_test.go suffix on helpers, no bogus tests,
  no test-only fields on prod structs)
- PR template instructions (follow .github/pull_request_template.md,
  ask user if context is missing)
- Definition of done checklist

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
@bm1549 bm1549 changed the title from "docs: add CLAUDE.md with coding guidelines for AI assistants" to "docs: add AGENTS.md and CONTRIBUTING.md coding guidelines for AI assistants" Mar 9, 2026
… files

Move contrib- and ddtrace-specific conventions out of the root AGENTS.md
into dedicated files agents can load when working in those areas:

- contrib/AGENTS.md: new package checklist, library init (New*/Wrap*
  must not block), WithX option pattern, API parameter ordering,
  per-package test commands, instrumentation/env usage
- ddtrace/AGENTS.md: lock discipline (checklocks, Locked suffix, no
  closures in locked regions), api.txt public surface tracking,
  checklocks.sh invocation

Root AGENTS.md retains repo-wide rules: naming, constants, env var
routing, performance, compile-time assertions, generated files,
imports, testing conventions, PR template, definition of done.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
@bm1549 bm1549 requested review from a team as code owners March 9, 2026 14:59
@bm1549 bm1549 changed the title from "docs: add AGENTS.md and CONTRIBUTING.md coding guidelines for AI assistants" to "docs: add AGENTS.md files with AI assistant guidelines" Mar 9, 2026
@bm1549
Contributor Author

bm1549 commented Mar 9, 2026

Thanks for initializing this. DCS AI guild was planning to start this initiative.

I believe we should take a layered approach here. We should have dedicated agents files for contrib, tracer, profiler and instrumentation. They might have different concerns.

This would also solve the ownership problem of these files.

What do you think?

@kakkoyun just split out the AGENTS.md files and generalized it a bit more using context from our repo docs, code conventions, recent PRs/comments, and current research from Anthropic/OpenAI

It is worth mentioning that current research does not show material improvements from separate AGENTS.md files when the code, instructions, and conventions are similar between packages, but it definitely shouldn't hurt

@github-actions github-actions bot added the "apm:ecosystem" label (contrib/* related feature requests or bugs) Mar 9, 2026
@bm1549
Contributor Author

bm1549 commented Mar 9, 2026

/rerun

bm1549 and others added 2 commits March 9, 2026 11:57
Co-authored-by: Mikayla Toffler <[email protected]>
Co-authored-by: Mikayla Toffler <[email protected]>
Member

@felixge felixge left a comment


AGENTS.md best-practice research — guidance from Anthropic, OpenAI, and
community sources on what makes an effective agents file: commands first, under 200
lines per file, no content inferable from reading the code, progressive disclosure
via subdirectory files. Leans on findings in Evaluating AGENTS.md.

[Screenshot: "Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?"]

The study linked concluded that LLM-generated context files typically have a negative impact. So I'd argue that we should only consider human-written context files for now, which is not what is being proposed here, right?

I'm also worried that introducing AGENTS.md files into our repos will end up being append-only dumps for every issue people encounter, causing agent performance to significantly degrade over time. Who gets to decide what should be added? And more importantly, what gets removed? How do we evaluate these files?

I'd prefer to not introduce these files or to keep them minimal and human-written.

People who want more elaborate default instructions for their agents are free to maintain their own local context files.

@darccio
Member

darccio commented Mar 9, 2026

Following up on what @felixge suggests, could an AGENTS.md that points to human-written docs on different matters be a compromise?

I imagine some more detailed files like CONTRIBUTING.md but on specific things: style, patterns, pitfalls, etc. In this way, we can collect the current consensus and detect divergences and quirks among the contributors. LLMs can consume that but it's not focused on them.

@kakkoyun
Member

Skimming the paper linked in this PR (https://arxiv.org/pdf/2602.11988): the headline is basically “LLM-generated AGENTS.md tends to make agents slightly worse (-3% success on avg) and more expensive (20%+ cost),” and even dev-written ones only help when they’re minimal (+4% on avg).

+1 to the compromise idea of “AGENTS.md points to human docs.”

Given the paper's conclusion that extra requirements can make tasks harder (and increase cost), let's keep AGENTS.md minimal (as already suggested):

  • commands + repo-specific merge blockers / workflows
  • links to CONTRIBUTING.md for style/patterns/examples

On top of that we could rather add skills (e.g. like writing a contrib) and commands, similar to https://github.com/DataDog/dd-trace-go/blob/main/.claude/commands/checklocks.md for well-defined tasks.

@bm1549 bm1549 closed this Mar 10, 2026
@bm1549
Contributor Author

bm1549 commented Mar 10, 2026

Seems like this is very hotly contested, so I'll close it for now
