feature(rbg): support gang scheduling in v1alpha2 #195

Open
Syspretor wants to merge 3 commits into sgl-project:main from Syspretor:feature/support-gang-schedule-in-v1alpha2
Conversation

@Syspretor
Collaborator

Ⅰ. Motivation

Ⅱ. Modifications

Ⅲ. Does this pull request fix one issue?

fixes #XXXX

Ⅳ. List the added test cases (unit test/integration test), if any; if no tests are needed, please explain why.

Ⅴ. Describe how to verify it

Ⅵ. Special notes for reviewers

Checklist

  • Format your code with make fmt.
  • Add unit tests or integration tests.
  • Update the documentation related to the change.

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant feature by adding support for gang scheduling to RoleBasedGroups (RBGs) in the v1alpha2 API. It provides a flexible framework to integrate with different gang scheduling solutions like Kubernetes scheduler-plugins and Volcano. The changes involve refactoring the core logic for managing PodGroups, introducing new configuration options via annotations, and enhancing the reconciliation process to ensure proper gang scheduling behavior and consistency across various workload types. This allows RBGs to better manage co-scheduled workloads, improving resource utilization and job completion rates for distributed applications.

Highlights

  • Gang Scheduling Support: Introduced comprehensive support for gang scheduling within RoleBasedGroups (RBGs) in v1alpha2, allowing users to specify a preferred scheduler plugin (Kubernetes scheduler-plugins or Volcano) via a new command-line flag.
  • Configurable Scheduler Plugins: Implemented an extensible PodGroupManager interface, with concrete implementations for Kubernetes scheduler-plugins and Volcano. This allows the RBGS controller to interact with different gang scheduling mechanisms.
  • New Annotations for Configuration: Added several new annotations to RBGs for enabling gang scheduling (rbg.workloads.x-k8s.io/group-gang-scheduling) and configuring scheduler-specific parameters, such as schedule-timeout for scheduler-plugins and priority-class/queue for Volcano.
  • Enhanced RoleInstance Reconciliation: Modified the RoleInstance reconciliation logic to enforce gang scheduling constraints. This includes preventing new pod creation if orphan pods exist and ensuring atomic recreation of all pods in an instance if an in-place update fails, maintaining the minimum member requirement for the PodGroup.
  • Refactored PodGroup Management: The previous PodGroupScheduler has been refactored into a more modular PodGroupManager interface, which is now injected into various reconcilers (RoleBasedGroupReconciler, DeploymentReconciler, LeaderWorkerSetReconciler, PodReconciler, RoleInstanceSetReconciler, StatefulSetReconciler) to handle gang scheduling label/annotation injection consistently.
  • Helm Chart and CLI Updates: Updated the Helm chart and the main RBGS controller command-line interface to expose the --scheduler-name flag, enabling operators to easily configure the desired gang scheduler.
  • E2E Test Coverage: Added new end-to-end tests to validate the functionality of gang scheduling with both Kubernetes scheduler-plugins and Volcano.
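The highlights above describe an extensible PodGroupManager interface with a factory that selects between the two supported backends. The sketch below illustrates that shape; the method set, constant values, and type names are guesses based on this summary, not the actual code in pkg/scheduler.

```go
package main

import (
	"errors"
	"fmt"
)

// SchedulerPluginType selects the gang-scheduling backend. The two values
// mirror the plugins named in this PR; the exact constants in the
// repository may differ.
type SchedulerPluginType string

const (
	KubeSchedulerPlugin SchedulerPluginType = "scheduler-plugins"
	VolcanoPlugin       SchedulerPluginType = "volcano"
)

// PodGroupManager abstracts PodGroup lifecycle handling and pod-template
// label/annotation injection for a specific scheduler. Only a minimal
// illustrative method is shown here.
type PodGroupManager interface {
	Name() string
}

type kubePodGroupManager struct{}

func (m *kubePodGroupManager) Name() string { return "scheduler-plugins" }

type volcanoPodGroupManager struct{}

func (m *volcanoPodGroupManager) Name() string { return "volcano" }

// NewPodGroupManager is a factory returning the implementation that matches
// the configured plugin type, as described for pkg/scheduler/podgroup_manager.go.
func NewPodGroupManager(t SchedulerPluginType) (PodGroupManager, error) {
	switch t {
	case KubeSchedulerPlugin:
		return &kubePodGroupManager{}, nil
	case VolcanoPlugin:
		return &volcanoPodGroupManager{}, nil
	default:
		return nil, errors.New("unknown scheduler plugin: " + string(t))
	}
}

func main() {
	m, err := NewPodGroupManager(KubeSchedulerPlugin)
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Name()) // prints "scheduler-plugins"
}
```

In this design, the `--scheduler-name` flag would map directly to the SchedulerPluginType passed into the factory, so the controller never branches on the plugin type outside of construction.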


Changelog
  • cmd/rbgs/main.go
    • Imported the new sigs.k8s.io/rbgs/pkg/scheduler package.
    • Added a new command-line flag --scheduler-name to specify the gang scheduling plugin.
    • Updated the NewRoleBasedGroupReconciler call to pass the selected scheduler plugin type.
  • deploy/helm/rbgs/templates/manager.yaml
    • Added the --scheduler-name argument to the manager container in the deployment template, defaulting to scheduler-plugins.
  • deploy/helm/rbgs/values.yaml
    • Introduced a new schedulerName field to configure the gang scheduling plugin, defaulting to scheduler-plugins.
  • internal/controller/workloads/event.go
    • Added a new event constant FailedReconcilePodGroup for recording errors during PodGroup reconciliation.
  • internal/controller/workloads/rolebasedgroup_controller.go
    • Added a podGroupManager field to the RoleBasedGroupReconciler struct.
    • Modified NewRoleBasedGroupReconciler to accept a scheduler.SchedulerPluginType and initialize the podGroupManager.
    • Inserted a new reconciliation step (reconcilePodGroup) for managing the PodGroup resource.
    • Implemented the reconcilePodGroup method to delegate PodGroup creation, update, or deletion to the podGroupManager.
    • Added logic to inject the podGroupManager into workload-specific reconcilers if they implement the PodGroupManagerSetter interface.
  • pkg/constants/annotation.go
    • Defined new annotations for enabling gang scheduling (GangSchedulingAnnotationKey).
    • Added annotations for configuring scheduler-plugins timeout (GangSchedulingScheduleTimeoutSecondsKey).
    • Introduced annotations for configuring Volcano priority class and queue (GangSchedulingVolcanoPriorityClassKey, GangSchedulingVolcanoQueueKey).
    • Added RoleInstanceGangSchedulingAnnotationKey for propagating gang scheduling status to RoleInstances.
  • pkg/reconciler/deploy_reconciler.go
    • Added a podGroupManager field to DeploymentReconciler.
    • Implemented the SetPodGroupManager method to allow injection of the PodGroupManager.
    • Injected the podGroupManager into the PodReconciler during deployment construction.
  • pkg/reconciler/lws_reconciler.go
    • Added a podGroupManager field to LeaderWorkerSetReconciler.
    • Implemented the SetPodGroupManager method.
    • Injected the podGroupManager into PodReconciler instances for leader and worker templates.
  • pkg/reconciler/pod_reconciler.go
    • Added a podGroupManager field to PodReconciler.
    • Implemented the SetPodGroupManager method.
    • Added logic to inject gang-scheduling labels/annotations into pod templates using the podGroupManager.
  • pkg/reconciler/roleinstance/sync/instance_scale.go
    • Imported the constants package.
    • Added checks for gang scheduling enablement to modify pod creation and update behavior.
    • Introduced isGangSchedulingEnabled helper function to determine if gang scheduling constraints are active for a RoleInstance.
  • pkg/reconciler/roleinstanceset/statefulmode/stateful_instance_set_utils.go
    • Propagated the RoleInstanceGangSchedulingAnnotationKey from RoleInstanceSet to RoleInstance.
  • pkg/reconciler/roleinstanceset/statelessmode/core/implement.go
    • Propagated the RoleInstanceGangSchedulingAnnotationKey from RoleInstanceSet to RoleInstance.
  • pkg/reconciler/roleinstanceset_reconciler.go
    • Added a podGroupManager field to RoleInstanceSetReconciler.
    • Implemented the SetPodGroupManager method.
    • Derived and propagated the RoleInstanceGangSchedulingAnnotationKey to RoleInstanceSet annotations.
    • Injected the podGroupManager into PodReconciler instances during role instance template construction.
  • pkg/reconciler/sts_reconciler.go
    • Added a podGroupManager field to StatefulSetReconciler.
    • Implemented the SetPodGroupManager method.
    • Injected the podGroupManager into the PodReconciler during StatefulSet construction.
  • pkg/reconciler/workload_reconciler.go
    • Imported the scheduler package.
    • Defined the PodGroupManagerSetter interface for workload reconcilers that can receive a PodGroupManager.
  • pkg/scheduler/k8s-scheduler-plugin/manager.go
    • Added a new file implementing the PodGroupManager interface for Kubernetes scheduler-plugins.
    • Provided methods for reconciling PodGroup CRs and injecting pod-group.scheduling.sigs.k8s.io/name labels into pod templates.
  • pkg/scheduler/podgroup_manager.go
    • Refactored the previous PodGroupScheduler into a new PodGroupManager interface.
    • Defined SchedulerPluginType enum for scheduler-plugins and volcano.
    • Implemented NewPodGroupManager as a factory function to return the appropriate PodGroupManager implementation based on the plugin type.
    • Updated constants to reference the new plugin-specific constants.
  • pkg/scheduler/podgroup_manager_test.go
    • Updated test cases to use the new PodGroupManager interface and its implementations.
    • Modified test RBG creation to use annotations for gang scheduling configuration instead of direct fields.
    • Adjusted assertions to reflect the new annotation-based configuration.
  • pkg/scheduler/volcano/manager.go
    • Added a new file implementing the PodGroupManager interface for Volcano.
    • Provided methods for reconciling Volcano PodGroup CRs and injecting scheduling.k8s.io/group-name annotations into pod templates.
  • test/e2e/framework/rbg_v2_expect.go
    • Added ExpectWorkloadV2PodTemplateAnnotationContains helper for checking pod template annotations in e2e tests.
  • test/e2e/framework/workloads/workload_v2_expect.go
    • Added ExpectPodTemplateAnnotationContainsV2 to the WorkloadV2EqualChecker interface.
    • Implemented ExpectPodTemplateAnnotationContainsV2 for RoleInstanceSetCheckerV2, LeaderWorkerSetCheckerV2, and DeploymentCheckerV2.
  • test/e2e/testcase/v1alpha2/rbg.go
    • Added new e2e test cases for rbg with kube gang scheduling to verify label injection.
    • Added new e2e test cases for rbg with volcano gang scheduling to verify annotation injection.
  • test/envtest/testutil/setup.go
    • Imported the sigs.k8s.io/rbgs/pkg/scheduler package.
    • Updated SetupRBGController to pass scheduler.KubeSchedulerPlugin when initializing RoleBasedGroupReconciler.
  • test/wrappers/v1alpha2/rbg_wrapper.go
    • Added WithGangScheduling helper to enable Kubernetes scheduler-plugins gang scheduling via annotation.
    • Added WithVolcanoGangScheduling helper to enable Volcano gang scheduling via annotations.
Activity
  • This pull request introduces a new feature; there is no prior review activity to summarize.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces support for gang scheduling in v1alpha2 by adding a PodGroupManager abstraction for different scheduler plugins like scheduler-plugins and volcano. The changes are well-structured, adding a new --scheduler-name flag and propagating the configuration through the controllers. The implementation correctly handles the PodGroup lifecycle and injects necessary labels/annotations. The changes also extend to RoleInstance controllers to enforce gang scheduling semantics. Overall, this is a solid implementation, but I've found one issue where PodGroup updates for scheduler-plugins do not consider changes to ScheduleTimeoutSeconds, and I've provided a suggestion to fix it.

Comment on lines +144 to +161
    if podGroup.Spec.MinMember != int32(rbg.GetGroupSize()) {
        updateErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            if fetchErr := m.client.Get(
                ctx, types.NamespacedName{Name: rbg.Name, Namespace: rbg.Namespace}, podGroup,
            ); fetchErr != nil {
                return fetchErr
            }
            if !utils.CheckOwnerReference(podGroup.OwnerReferences, gvk) {
                podGroup.OwnerReferences = append(podGroup.OwnerReferences, *metav1.NewControllerRef(rbg, gvk))
            }
            podGroup.Spec.MinMember = int32(rbg.GetGroupSize())
            return m.client.Update(ctx, podGroup)
        })
        if updateErr != nil {
            logger.Error(updateErr, "update pod group error")
            return updateErr
        }
    }

Severity: high

The PodGroup is only updated if MinMember changes. If a user updates the group-gang-scheduling-timeout annotation, the ScheduleTimeoutSeconds field in the PodGroup custom resource will not be updated. The condition should also check for changes in ScheduleTimeoutSeconds.

Suggested change

Replace:

    if podGroup.Spec.MinMember != int32(rbg.GetGroupSize()) {
        updateErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            if fetchErr := m.client.Get(
                ctx, types.NamespacedName{Name: rbg.Name, Namespace: rbg.Namespace}, podGroup,
            ); fetchErr != nil {
                return fetchErr
            }
            if !utils.CheckOwnerReference(podGroup.OwnerReferences, gvk) {
                podGroup.OwnerReferences = append(podGroup.OwnerReferences, *metav1.NewControllerRef(rbg, gvk))
            }
            podGroup.Spec.MinMember = int32(rbg.GetGroupSize())
            return m.client.Update(ctx, podGroup)
        })
        if updateErr != nil {
            logger.Error(updateErr, "update pod group error")
            return updateErr
        }
    }

with:

    timeout := getScheduleTimeoutSeconds(rbg)
    if podGroup.Spec.MinMember != int32(rbg.GetGroupSize()) || podGroup.Spec.ScheduleTimeoutSeconds == nil || *podGroup.Spec.ScheduleTimeoutSeconds != *timeout {
        updateErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            if fetchErr := m.client.Get(
                ctx, types.NamespacedName{Name: rbg.Name, Namespace: rbg.Namespace}, podGroup,
            ); fetchErr != nil {
                return fetchErr
            }
            if !utils.CheckOwnerReference(podGroup.OwnerReferences, gvk) {
                podGroup.OwnerReferences = append(podGroup.OwnerReferences, *metav1.NewControllerRef(rbg, gvk))
            }
            podGroup.Spec.MinMember = int32(rbg.GetGroupSize())
            podGroup.Spec.ScheduleTimeoutSeconds = timeout
            return m.client.Update(ctx, podGroup)
        })
        if updateErr != nil {
            logger.Error(updateErr, "update pod group error")
            return updateErr
        }
    }

    return nil
}

if podGroup.Spec.MinMember != int32(rbg.GetGroupSize()) {
Collaborator


If the user changes the rbg.workloads.x-k8s.io/group-gang-scheduling-timeout annotation, the existing PodGroup will never be updated to reflect the new timeout value.

Is this by design?

@cheyang
Collaborator

cheyang commented Mar 13, 2026

Please add comment/docs clarification around the gang-scheduling annotation semantics.
