fix: recompute FullyApplied condition inside retry loop to prevent st…#7203

Closed
Ady0333 wants to merge 1 commit into karmada-io:master from Ady0333:fix/stale-fully-applied-condition

Conversation

@Ady0333 (Contributor) commented Feb 15, 2026

Fix stale FullyApplied condition during concurrent rescheduling

What type of PR is this?

/kind bug


What this PR does / why we need it:

This PR fixes a status inconsistency where the FullyApplied condition could be incorrectly set to True during concurrent rescheduling.

In AggregateResourceBindingWorkStatus and AggregateClusterResourceBindingWorkStatus, the
generateFullyAppliedCondition() result was previously computed outside the mutation
closure passed to UpdateStatus() within the retry.RetryOnConflict() loop.

Since UpdateStatus() internally re-fetches the latest binding before applying the mutation,
a concurrent scheduler update to spec.clusters could cause the condition to be evaluated
using a stale cluster list, while the status update was applied to a newer binding.

This could result in FullyApplied=True being written even though newly scheduled clusters
had not finished applying.

The fix moves the condition generation inside the mutation closure so it always evaluates
against the freshly re-fetched binding.Spec, ensuring correctness under concurrent updates.


Special notes for your reviewer:

  • The change is minimal and localized to status update logic
  • No behavior change outside concurrent rescheduling scenarios
  • Existing tests pass; no unrelated files are touched

Release-note:
NONE

fix: recompute FullyApplied condition inside retry loop to prevent stale status on concurrent rescheduling

Signed-off-by: Ady0333 <[email protected]>
Copilot AI review requested due to automatic review settings February 15, 2026 19:30
@karmada-bot added the kind/bug label (Categorizes issue or PR as related to a bug.) on Feb 15, 2026
@karmada-bot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign chaunceyjiang for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist (bot) commented:

Summary of Changes

Hello @Ady0333, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a bug where the FullyApplied status condition could become inconsistent during concurrent updates to resource bindings. By relocating the condition generation logic to occur within the retry loop, the system now guarantees that the FullyApplied status is always based on the most current state of the binding's specification, thereby resolving potential race conditions and ensuring accurate status reporting.

Highlights

  • FullyApplied Condition Calculation: The calculation of the FullyApplied condition has been moved inside the retry.RetryOnConflict block for both AggregateResourceBindingWorkStatus and AggregateClusterResourceBindingWorkStatus functions.
  • Concurrent Rescheduling Fix: This change prevents a stale FullyApplied condition from being incorrectly set to True during concurrent rescheduling, by ensuring the condition is always evaluated against the latest binding specification.
Changelog
  • pkg/util/helper/workstatus.go
    • Moved the generateFullyAppliedCondition call into the UpdateStatus closure within the retry.RetryOnConflict block for both AggregateResourceBindingWorkStatus and AggregateClusterResourceBindingWorkStatus.
    • Updated comments to clarify that the fullyAppliedCondition is recomputed inside the retry loop to reflect concurrent spec.clusters changes.
Activity
  • No specific review comments or activities have been recorded for this pull request yet.

@karmada-bot added the size/S label (Denotes a PR that changes 10-29 lines, ignoring generated files.) on Feb 15, 2026
@Ady0333 (Author) commented Feb 15, 2026

@XiShanYongYe-Chang, the PR is ready; please review it once you have time.

Happy to make any changes if needed!

Copilot AI left a comment:


Pull request overview

This PR fixes a race condition in status updates where the FullyApplied condition could be incorrectly set to True during concurrent rescheduling operations. The issue occurred because the condition was computed using a stale cluster list before UpdateStatus() internally re-fetched the latest binding spec.

Changes:

  • Move generateFullyAppliedCondition() call inside the mutation closure in AggregateResourceBindingWorkStatus to evaluate against freshly re-fetched binding.Spec
  • Apply identical fix to AggregateClusterResourceBindingWorkStatus for consistency
  • Update comments to clarify why the condition is recomputed inside the closure


@gemini-code-assist (bot) left a comment:


Code Review

This pull request correctly fixes a race condition where the FullyApplied condition on resource bindings could become stale due to concurrent updates. Moving the condition generation inside the retry.RetryOnConflict loop is the right approach to solve this. I've included suggestions to further improve the robustness of the status update logic by also re-calculating the aggregated statuses within the same retry loop, which would prevent other temporary status inconsistencies.

@codecov-commenter commented:


Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 42.03%. Comparing base (050a7d9) to head (a77cb6f).
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #7203      +/-   ##
==========================================
- Coverage   42.03%   42.03%   -0.01%     
==========================================
  Files         874      874              
  Lines       53454    53454              
==========================================
- Hits        22471    22468       -3     
- Misses      29294    29296       +2     
- Partials     1689     1690       +1     
Flag Coverage Δ
unittests 42.03% <100.00%> (-0.01%) ⬇️


@XiShanYongYe-Chang (Member) left a comment:


Hi @Ady0333,
Since identical items in the work queue are merged, and the same binding is never processed concurrently, this problem should not occur in this logic.

@Ady0333 (Author) commented Feb 27, 2026

Hello @XiShanYongYe-Chang
Thanks for the clarification.

I see your point: if the informer cache is already updated when the controller processes the event, then the controller should not reach this path with a missing resource. In that case, the NotFound scenario may indeed not occur in practice.

Please let me know if you think this PR should be closed or if there is another situation where handling NotFound would still be useful.

@XiShanYongYe-Chang (Member) commented:

I think we can close it.

@Ady0333 closed this Feb 27, 2026

Labels

kind/bug Categorizes issue or PR as related to a bug. size/S Denotes a PR that changes 10-29 lines, ignoring generated files.

5 participants