documentation(kserve): add upgrade notes for v0.16.x to v0.17.0 migration #3420
abdullahpathan22 wants to merge 1 commit into kubeflow:master
Conversation
Pull request overview
Adds a new top-level upgrade guide documenting manual steps and known issues when migrating Kubeflow Manifests’ KServe deployment from v0.16.x to v0.17.0.
Changes:
- Introduces `UPGRADING.md` with KServe v0.16.x → v0.17.0 upgrade notes, including a required pre-delete of a ClusterRoleBinding.
- Documents the upstream v0.17.0 tag issue (missing install files) and how this repository sources manifests.
- Notes a webhook certificate SAN fix and provides example apply commands for full vs. component-only upgrades.
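The full vs. component-only apply commands referenced above might look like the following sketch. The kustomize paths are assumptions about this repository's layout, not verified against it:

```shell
# Sketch only: the paths below are assumptions; adjust them to the actual
# layout of kubeflow/manifests in your checkout.

# Full platform upgrade (re-applies everything, including KServe):
kustomize build example | kubectl apply --server-side -f -

# Component-only upgrade (KServe alone):
kustomize build apps/kserve/kserve | kubectl apply --server-side -f -
```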
**Why this is required:**

In KServe v0.17.0, the `llmisvc` role bindings were restructured. The old `ClusterRoleBinding` from v0.16.x conflicts with the new ones defined in the v0.17.0 manifests. If you do not delete it manually, the `kubectl apply` (or `kustomize build | kubectl apply`) will fail due to naming conflicts.
The text says kubectl apply will fail "due to naming conflicts" if the old ClusterRoleBinding is not deleted. Kubernetes allows updates to existing resources with the same name; the typical failure mode here is an immutable-field change on the existing ClusterRoleBinding (the `roleRef` field cannot be updated in place). Please reword this to describe the actual error condition users will hit so the guidance is technically accurate.
```diff
- In KServe v0.17.0, the `llmisvc` role bindings were restructured. The old `ClusterRoleBinding` from v0.16.x conflicts with the new ones defined in the v0.17.0 manifests. If you do not delete it manually, the `kubectl apply` (or `kustomize build | kubectl apply`) will fail due to naming conflicts.
+ In KServe v0.17.0, the `llmisvc` role bindings were restructured. The old `ClusterRoleBinding` from v0.16.x is incompatible with the new ones defined in the v0.17.0 manifests. If you do not delete it manually, the `kubectl apply` (or `kustomize build | kubectl apply`) will fail because the `roleRef` field of an existing `ClusterRoleBinding` is immutable and cannot be updated in place.
```
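The pre-delete under discussion could be sketched as follows, using the `llmisvc-manager-rolebinding` name from the PR summary (the kustomize path is an assumption about this repository's layout):

```shell
# Remove the v0.16.x binding first; --ignore-not-found keeps this idempotent
# on clusters where it was already deleted.
kubectl delete clusterrolebinding llmisvc-manager-rolebinding --ignore-not-found

# Then apply the v0.17.0 manifests. Skipping the delete makes the apply fail,
# since the existing binding's roleRef cannot be changed in place.
kustomize build apps/kserve/kserve | kubectl apply --server-side -f -
```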
Force-pushed from 857e8a6 to 4447fae
Hello, and thank you for the PR. Can you make it minimalistic and fitting in style with the existing Upgrading and Extending section? https://github.com/kubeflow/manifests?tab=readme-ov-file#upgrading-and-extending is the place to add a minimalistic version. Please do not add a separate file, and sign your commits.
Force-pushed from 4447fae to babbd34
Thanks for the feedback! Working on it.
Force-pushed from 5a790cb to f46d319
@juliusvonkohout Please check now, I think I have made the necessary changes.
You have to sign your commits or the DCO test fails.
Signed-off-by: abdullahpathan22 <abdullahpathan22@users.noreply.github.com>
Force-pushed from f46d319 to de40af0
@juliusvonkohout I have done the needful.
Thank you
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: juliusvonkohout

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
/hold
Summary
This PR adds upgrade notes for users migrating from KServe v0.16.x to v0.17.0 within Kubeflow manifests.
This is a companion to #3406 which upgrades the KServe manifests to v0.17.0.
What this documents
- Manual deletion of the `llmisvc-manager-rolebinding` ClusterRoleBinding before applying the new manifests (breaking change).
- The missing `./install/v0.17.0/` directory in the `v0.17.0` kserve release (kserve/kserve#5255) and how it was handled in this repo.
- The webhook certificate SAN fix for `kubeflow` namespace users.

Related

- `./install/v0.17.0/` directory present in the `v0.17.0` kserve release (kserve/kserve#5255)

Contributor Checklist