Replies: 1 comment
-
I'm not sure how to properly handle it in cloud-init, but for all of the reasons you stated, we enable FIPS during the image build through our Packer scripts. This happens long before cloud-init ever runs, so it avoids all of the race-condition problems you're describing. We don't use the AWS-provided (for example) Linux images because things like the default disk layout are wrong for us, and that is very difficult to correct after an instance is built. As to your question, we also build our own RHEL/Rocky images so that we can ensure FIPS is enabled as early as possible.
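To make the build-time approach concrete, here is a minimal sketch of what it could look like in a Packer template. The source name `amazon-ebs.rhel9` is hypothetical; `fips-mode-setup` is the standard tool on RHEL 8+ and Rocky, and it requires a reboot before FIPS is actually active, so the final image should be created after that reboot completes.

```hcl
build {
  # Hypothetical source; substitute your own RHEL/Rocky base image.
  sources = ["source.amazon-ebs.rhel9"]

  # Enable FIPS mode in the image itself, long before cloud-init
  # ever runs on an instance booted from it.
  provisioner "shell" {
    inline = [
      "sudo fips-mode-setup --enable",
    ]
    # fips-mode-setup only takes effect after a reboot.
    expect_disconnect = true
  }

  provisioner "shell" {
    # Verify FIPS is active post-reboot before the image is sealed.
    inline = [
      "fips-mode-setup --check",
    ]
  }
}
```

Because the image is built with FIPS already active, every key generated on first boot (SSH host keys included) is created under a FIPS-enforcing kernel and OpenSSL, sidestepping the regenerate-after-reboot problem entirely.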
I see that cc_ubuntu_pro.py enables FIPS mode for Ubuntu Pro, as per the doc here. This is run on first boot, since the frequency is PER_INSTANCE. For CentOS/Fedora/RHEL, we would want to implement something similar to enable FIPS mode for cloud instances. However, I have a question: how do you ensure that cc_ubuntu_pro is run before, say, SSH keys are generated by cc_ssh.py (which is also a per-instance module)? If the SSH keys are generated when the instance comes up on first boot but before FIPS is enabled (which would not take effect until a reboot happens), the keys that are generated might be non-FIPS-compliant. Therefore, after rebooting with FIPS mode on, these keys might need to be deleted. Similarly, there might be other configuration done before the reboot that is non-FIPS-compliant and needs to be reverted after the reboot with FIPS enabled. How does Canonical handle this situation?
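For reference, the user-data that drives cc_ubuntu_pro looks roughly like the sketch below, per the module's documented schema. The token value is a placeholder; `fips` is one of the service names the Pro client accepts. Note that this only requests FIPS enablement during the cloud-config stage, which is exactly why the ordering question above matters.

```yaml
#cloud-config
ubuntu_pro:
  # Placeholder: substitute a real Ubuntu Pro contract token.
  token: <your-pro-token>
  enable:
    - fips
```

Module ordering within each boot stage follows the order the modules are listed in /etc/cloud/cloud.cfg, so which of cc_ssh and cc_ubuntu_pro runs first depends on that list, not on anything in the user-data itself.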