When the trace-agent is restarted, it initially reports a sampling rate of 100%, dramatically increasing the number of traces sampled. A rate can jump suddenly from 0.1% to 100% and back to 0.1% once the trace-agent eventually computes the new sampling rate.
In particular, we observed that when the agent restarts, the payload buffering that waits for new container tags breaches its memory limit, and we send spans without container tags.
This PR caps sampling-rate increases at x2 every 1s, so a x10 increase completes in 3-4s:
- 1% -> 100% takes 7s
- 0.1% -> 100% takes 10s
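As a sketch of the capping behavior described above (the function and constant names here are illustrative, not the actual dd-trace-go implementation):

```go
package main

import "fmt"

// maxRateIncreaseFactor caps how much the applied sampling rate may grow
// per 1s adjustment interval. Illustrative name; the real implementation
// may differ.
const maxRateIncreaseFactor = 2.0

// capRate returns the rate to apply this interval: decreases take effect
// immediately, increases are limited to 2x the previously applied rate.
func capRate(prev, target float64) float64 {
	if target <= prev*maxRateIncreaseFactor {
		return target
	}
	return prev * maxRateIncreaseFactor
}

func main() {
	// Simulate the agent suddenly reporting 100% while the tracer was
	// applying 0.1%: the rate doubles each second instead of jumping.
	rate, target := 0.001, 1.0
	for tick := 1; rate < target; tick++ {
		rate = capRate(rate, target)
		fmt.Printf("t=%2ds rate=%.3f\n", tick, rate)
	}
}
```

Doubling from 0.1% reaches 100% in 10 steps (ceil(log2(1000)) = 10), matching the 10s figure above; from 1% it takes 7 steps.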
Below is a before/after screenshot of the dd-trace-go implementation, with go_span_new using the PR tracer and go_spam_old using the latest release of dd-trace-go; both applications generate 500 traces/s. Notice how the new code no longer bursts in throughput.
Motivation
Reviewer's Checklist
- Changed code has unit tests for its functionality, at or near 100% coverage.
- System-tests covering this feature have been added and enabled with the `va.b.c-dev` version tag.
- There is a benchmark for any new code, or for changes to existing code.
- If this interacts with the agent in a new way, a system test has been added.
- New code is free of linting errors. You can check this by running `make lint` locally.
- New code doesn't break existing tests. You can check this by running `make test` locally.
- An appropriate team label has been added so this PR gets put in the right place for the release notes.
- All generated files are up to date. You can check this by running `make generate` locally.
- Non-trivial `go.mod` changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild. Make sure all nested modules are up to date by running `make fix-modules` locally.
Comparing candidate commit aeb2deb in PR branch raphael/sampling_bump with baseline commit 7a849eb in branch main.
Found 0 performance improvements and 1 performance regression. Performance is the same for 154 metrics; 9 metrics are unstable.
Explanation
This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:
🟩 = significantly better candidate vs. baseline
🟥 = significantly worse candidate vs. baseline
We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.
If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.
Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.
More details about the CI and significant changes
You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.
CIs of the difference of means are often centered around 0%, because often changes are not that big:
---------------------------------(------|---^--------)-------------------------------->
                               -0.6%   0% 0.3%     +1.2%
                                 |          |        |
     lower bound of the CI ------'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'
As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).
For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:
----------------------------------------|---------|---(---------^---------)---------->
                                        0%       1% 1.3%      2.2%      3.1%
                                                  |   |         |         |
     significant impact threshold ----------------'   |         |         |
                lower bound of CI --------------------'         |         |
   sample mean (center of the CI) ------------------------------'         |
                upper bound of CI ----------------------------------------'
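The significance rule above can be made concrete with a small numeric sketch. This is a minimal Go illustration under stated assumptions (a normal approximation with a 1.96 z-value for a ~95% interval, and a hypothetical 1% threshold); the benchmarking platform's actual statistics may differ:

```go
package main

import (
	"fmt"
	"math"
)

// stats returns the sample mean and the standard error of the mean.
func stats(xs []float64) (mean, se float64) {
	for _, x := range xs {
		mean += x
	}
	mean /= float64(len(xs))
	var ss float64
	for _, x := range xs {
		ss += (x - mean) * (x - mean)
	}
	// Sample variance (Bessel's correction), then standard error of the mean.
	return mean, math.Sqrt(ss / float64(len(xs)-1) / float64(len(xs)))
}

// relDiffCI returns an approximate 95% CI for the relative difference of
// means (candidate vs. baseline), with the baseline mean as the reference.
func relDiffCI(baseline, candidate []float64) (lo, hi float64) {
	mb, seb := stats(baseline)
	mc, sec := stats(candidate)
	diff := (mc - mb) / mb
	// First-order propagation of both means' uncertainty; 1.96 ~ 95% CI.
	se := math.Sqrt(seb*seb+sec*sec) / mb
	return diff - 1.96*se, diff + 1.96*se
}

func main() {
	baseline := []float64{10.0, 10.1, 9.9, 10.0, 10.2}   // e.g. execution times
	candidate := []float64{10.3, 10.4, 10.2, 10.3, 10.5} // candidate is ~3% slower
	lo, hi := relDiffCI(baseline, candidate)

	const threshold = 0.01 // a hypothetical 1% significant-impact threshold
	// Significant only if the whole CI lies outside [-threshold, +threshold].
	significant := lo > threshold || hi < -threshold
	fmt.Printf("CI = [%+.2f%%, %+.2f%%], significant regression: %v\n", lo*100, hi*100, significant)
}
```

With these sample values the whole CI lies above the 1% threshold, so the candidate would be flagged as a significant regression.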
scenario:BenchmarkSetTagMetric-25
🟥 execution_time [+1.629ns; +2.307ns] or [+2.742%; +3.884%]
raphaelgavache changed the title from "feat sampling: Cap default sampling rate increases" to "feat: cap default sampling rate increases" on Mar 2, 2026.
❌ Patch coverage is 91.66667% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 59.99%. Comparing base (7a849eb) to head (f0e85e7). ⚠️ Report is 10 commits behind head on main.
Matching system-test: DataDog/system-tests#6412