
fix(datastreams): make TestProcessor/test_service_name_override deterministic#4517

Open
bm1549 wants to merge 2 commits into main from worktree-agent-a9c8a52f

Conversation

@bm1549
Contributor

@bm1549 bm1549 commented Mar 9, 2026

What does this PR do?

Fixes a timing-sensitive flake in TestProcessor/test_service_name_override
(internal/datastreams/processor_test.go) by eliminating two bugs in the subtest.

Motivation

The test was flaking in CI: internal/datastreams TestProcessor/test_service_name_override
failed with bucket Start timestamps off by exactly 10 seconds (one bucket width).

Two bugs were causing this:

  1. Wall-clock dependency — the test captured tp = time.Now().Truncate(bucketDuration) then
    asserted that flushed StatsBucket.Start values equalled tp. When CI runners ran slowly,
    the real clock crossed a bucket boundary mid-test, shifting all Start values by one bucket.

  2. Wrong reference in assertions — all four Start assertion sites referenced tp1 (a
    variable from the outer TestProcessor scope) instead of the local tp. This made
    assertions incorrect whenever the two timestamps happened to fall in different buckets.

The fix uses the existing timeSource field on Processor to freeze the processor's clock
to a fixed synthetic time, and corrects the assertion references.

No production code was changed.

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage. — N/A (test-only fix)
  • System-Tests covering this feature have been added and enabled with the va.b.c-dev version tag. — N/A
  • There is a benchmark for any new code, or changes to existing code. — N/A
  • If this interacts with the agent in a new way, a system test has been added. — N/A
  • New code is free of linting errors. You can check this by running make lint locally. ✓
  • New code doesn't break existing tests. You can check this by running make test locally. ✓ (full internal/datastreams suite passes, test verified across 5 runs)
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • All generated files are up to date. You can check this by running make generate locally. — N/A
  • Non-trivial go.mod changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild. — N/A

Unsure? Have a question? Request a review!

…ministic

The subtest had two bugs causing a timing-sensitive flake:
1. It captured `tp = time.Now().Truncate(bucketDuration)` from the wall clock,
   making bucket boundaries dependent on real wall-clock time.
2. It asserted against `tp1` (a variable from the outer test scope) instead of
   the local `tp`, causing incorrect bucket start values if the two timestamps
   happened to land in different buckets.

Fix: replace `time.Now()` with a fixed synthetic time injected via the existing
`timeSource` field, and correct the assertions to reference the local `tp`.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@bm1549 bm1549 added the AI Generated Largely based on code generated by an AI or LLM. This label is the same across all dd-trace-* repos label Mar 9, 2026
@codecov

codecov bot commented Mar 9, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 59.96%. Comparing base (cf61946) to head (0c51370).

Additional details and impacted files

see 433 files with indirect coverage changes


@datadog-datadog-prod-us1-2

datadog-datadog-prod-us1-2 bot commented Mar 9, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 100.00%
Overall Coverage: 59.15% (+3.41%)

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: 781b03c

@pr-commenter

pr-commenter bot commented Mar 9, 2026

Benchmarks

Benchmark execution time: 2026-03-09 16:50:36

Comparing candidate commit 781b03c in PR branch worktree-agent-a9c8a52f with baseline commit cf61946 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 156 metrics, 8 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'

@bm1549 bm1549 marked this pull request as ready for review March 9, 2026 17:03
@bm1549 bm1549 requested a review from a team as a code owner March 9, 2026 17:03