fix(ddtrace/tracer): avoid race on TestTracerPrioritySampler #4490

Open
darccio wants to merge 3 commits into main from dario.castane/ktlo/flaky-testtracer-prioritysampler

Conversation


@darccio darccio commented Mar 2, 2026

What does this PR do?

Adds a retry loop to TestTracerPrioritySampler that polls until the priority sampler has been updated, since the test emulates the agent's rate_by_service response and the update is applied asynchronously.

Motivation

Avoid flakiness like https://github.com/DataDog/dd-trace-go/actions/runs/22556129490/job/65333714361#step:8:302

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage.
  • New code is free of linting errors. You can check this by running make lint locally.
  • New code doesn't break existing tests. You can check this by running make test locally.
  • Add an appropriate team label so this PR gets put in the right place for the release notes.

Unsure? Have a question? Request a review!

@darccio darccio requested a review from a team as a code owner March 2, 2026 21:03

codecov bot commented Mar 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 55.91%. Comparing base (7a849eb) to head (319d75b).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files

see 196 files with indirect coverage changes



pr-commenter bot commented Mar 2, 2026

Benchmarks

Benchmark execution time: 2026-03-10 10:34:05

Comparing candidate commit f65d0f8 in PR branch dario.castane/ktlo/flaky-testtracer-prioritysampler with baseline commit 5793394 in branch main.

Found 1 performance improvements and 0 performance regressions! Performance is the same for 155 metrics, 8 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
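The decision rule illustrated above (the CI must lie entirely outside the threshold) reduces to a one-line interval check. A sketch, with all values in percent and `significant` as a hypothetical helper name:

```go
package main

import "fmt"

// significant reports whether the confidence interval [lo, hi] over the
// relative difference of means lies entirely outside ±threshold, i.e.
// the candidate is considered significantly better or worse.
func significant(lo, hi, threshold float64) bool {
	return lo > threshold || hi < -threshold
}

func main() {
	// The "significantly worse" example: CI [1.3%, 3.1%], threshold 1%.
	fmt.Println(significant(1.3, 3.1, 1.0))
	// The centered example: CI [-0.6%, 1.2%] straddles 0%, so not significant.
	fmt.Println(significant(-0.6, 1.2, 1.0))
}
```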

scenario:BenchmarkSetTagMetric-25

  • 🟩 execution_time [-2.076ns; -1.438ns] or [-3.386%; -2.346%]

// response asynchronously, so we must poll rather than use a fixed sleep.
timeout := time.After(time.Second * timeMultiplicator)
for {
	tr.prioritySampling.mu.RLock()
A reviewer (Member) commented:

nit: Maybe we should consider adding a prioritySampling.getDefaultRate() function where we guard with mutex.
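The suggested getter would look roughly like this (the struct and field names are assumptions inferred from the snippet above, not the exact dd-trace-go internals):

```go
package main

import (
	"fmt"
	"sync"
)

// prioritySampler mirrors the shape implied by tr.prioritySampling.mu:
// a default sampling rate guarded by a read-write mutex.
type prioritySampler struct {
	mu          sync.RWMutex
	defaultRate float64
}

// getDefaultRate takes the read lock internally, so callers (including
// test code) never touch the mutex directly.
func (ps *prioritySampler) getDefaultRate() float64 {
	ps.mu.RLock()
	defer ps.mu.RUnlock()
	return ps.defaultRate
}

func main() {
	ps := &prioritySampler{defaultRate: 0.5}
	fmt.Println(ps.getDefaultRate())
}
```

Encapsulating the lock in a getter keeps the test from reaching into the sampler's synchronization details, which is what made the original polling code racy to write by hand.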

darccio (Member Author) replied:

Done in f65d0f8

@darccio darccio requested a review from a team as a code owner March 10, 2026 10:15

datadog-prod-us1-4 bot commented Mar 10, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 59.22%

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: f65d0f8
