
Fix logic for multi-node CNI warning.#22779

Open
toddliebenschutz-jones wants to merge 1 commit into kubernetes:master from toddliebenschutz-jones:multinode-networking-warning

Conversation

@toddliebenschutz-jones

Fixes #19665

This PR fixes the logic for the warning "Cluster was created without any CNI, adding a node to it might cause broken networking.", which should be shown only when adding a second node to a cluster that was created as a single-node cluster without a CNI.

The warning was added in #9875 in response to this comment thread: #9875 (comment). The proposed solution was:

We set and store a cluster config something like CreatedWithMultiNode bool true whenever user provides --node > 1. And we use this value to decide the CNI between KindNet and Disabled. And when a cluster is created with only control plane node and a node is added later, we show a warning that multi node networking might be broken (if it is not reconfigured already).

The warning should be shown when (1) the cluster was created with only a control-plane node, and (2) CNI is not already configured. The corresponding change erroneously combined these two conditions with an "or" instead of an "and", so the warning also appeared for single-node clusters that already had a CNI configured (as the "Before" output below shows).
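The difference can be sketched as follows. This is an illustrative reduction of the fix, not minikube's actual code; the function and parameter names are hypothetical:

```go
package main

import "fmt"

// shouldWarnBrokenNetworking is a hypothetical sketch of the corrected
// check. The warning should fire only when BOTH conditions hold: the
// cluster was created with a single (control-plane) node AND no CNI is
// configured.
//
// The buggy version combined the conditions with OR:
//
//	createdSingleNode || !cniConfigured
//
// which also warned for single-node clusters whose CNI was already
// configured, as in the "Before" output above.
func shouldWarnBrokenNetworking(createdSingleNode, cniConfigured bool) bool {
	return createdSingleNode && !cniConfigured
}

func main() {
	fmt.Println(shouldWarnBrokenNetworking(true, false))  // true: single-node, no CNI
	fmt.Println(shouldWarnBrokenNetworking(true, true))   // false: CNI already configured
	fmt.Println(shouldWarnBrokenNetworking(false, false)) // false: created multi-node
}
```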

Output diff

Before:

➜  minikube out/minikube start   
😄  minikube v1.38.1 on Darwin 26.3.1 (arm64)
✨  Automatically selected the docker driver
❗  Starting v1.39.0, minikube will default to "containerd" container runtime. See #21973 for more info.
📌  Using Docker Desktop driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.50-1772266598-22719 ...
🔥  Creating docker container (CPUs=2, Memory=8100MB) ...
🐳  Preparing Kubernetes v1.35.1 on Docker 29.2.1 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

➜  minikube out/minikube node add
😄  Adding node m02 to cluster minikube as [worker]
❗  Cluster was created without any CNI, adding a node to it might cause broken networking.
👍  Starting "minikube-m02" worker node in "minikube" cluster
🚜  Pulling base image v0.0.50-1772266598-22719 ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.35.1 on Docker 29.2.1 ...
🔎  Verifying Kubernetes components...
🏄  Successfully added m02 to minikube!

After:

➜  minikube out/minikube start   
😄  minikube v1.38.1 on Darwin 26.3.1 (arm64)
✨  Automatically selected the docker driver
❗  Starting v1.39.0, minikube will default to "containerd" container runtime. See #21973 for more info.
📌  Using Docker Desktop driver with root privileges
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.50-1772266598-22719 ...
🔥  Creating docker container (CPUs=2, Memory=8100MB) ...
🐳  Preparing Kubernetes v1.35.1 on Docker 29.2.1 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

➜  minikube out/minikube node add
😄  Adding node m02 to cluster minikube as [worker]
👍  Starting "minikube-m02" worker node in "minikube" cluster
🚜  Pulling base image v0.0.50-1772266598-22719 ...
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.35.1 on Docker 29.2.1 ...
🔎  Verifying Kubernetes components...
🏄  Successfully added m02 to minikube!

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: toddliebenschutz-jones
Once this PR has been reviewed and has the lgtm label, please assign medyagh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 24, 2026
@k8s-ci-robot
Contributor

Hi @toddliebenschutz-jones. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work.

Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Mar 24, 2026
@minikube-bot
Collaborator

Can one of the admins verify this patch?



Development

Successfully merging this pull request may close these issues.

Starting cluster before adding another node separately, results in errors due to missing CNI
