When trying to apply the Helm chart, we get the following error:

```
Failed sync attempt to ####: one or more objects failed to apply, reason: ConfigMap "altinity-clickhouse-operator-crds" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes (retried 5 times).
```

Helm chart version: 0.25.6
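For context, client-side `kubectl apply` stores the full object in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and the API server rejects objects whose `metadata.annotations` exceed 262144 bytes in total, so one ConfigMap bundling every CRD blows past the limit. A quick way to see how close the payload is (a sketch; it assumes the chart's CRD files sit in a local `crds/` directory):

```shell
#!/bin/sh
# Sketch: sum the byte sizes of the CRD files that would be packed into a
# single ConfigMap and compare against the 262144-byte annotations limit.
# Assumption: the chart's CRDs live under a local directory (here ./crds).
LIMIT=262144

crd_bytes() {
  # Total size in bytes of all *.yaml files directly under "$1".
  total=0
  for f in "$1"/*.yaml; do
    [ -e "$f" ] || continue              # the glob matched nothing
    total=$((total + $(wc -c < "$f")))
  done
  echo "$total"
}

size=$(crd_bytes crds)
echo "crds/ payload: ${size:-0} bytes (limit: $LIMIT)"
[ "${size:-0}" -le "$LIMIT" ] || echo "over the limit: split the ConfigMap"
```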
We resolved the issue locally by splitting the ConfigMap into two parts:
```yaml
{{- if .Values.crdHook.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "altinity-clickhouse-operator.fullname" . }}-crds-part1
  namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
  labels:
    {{- include "altinity-clickhouse-operator.labels" . | nindent 4 }}
    app.kubernetes.io/component: crd-install-hook
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-7"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
data:
  clickhouseinstallationtemplates.yaml: |
{{ .Files.Get "crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml" | indent 4 }}
  clickhousekeeperinstallations.yaml: |
{{ .Files.Get "crds/CustomResourceDefinition-clickhousekeeperinstallations.clickhouse-keeper.altinity.com.yaml" | indent 4 }}
  clickhouseoperatorconfigurations.yaml: |
{{ .Files.Get "crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml" | indent 4 }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "altinity-clickhouse-operator.fullname" . }}-crds-part2
  namespace: {{ include "altinity-clickhouse-operator.namespace" . }}
  labels:
    {{- include "altinity-clickhouse-operator.labels" . | nindent 4 }}
    app.kubernetes.io/component: crd-install-hook
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-7"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
data:
  clickhouseinstallations.yaml: |
{{ .Files.Get "crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml" | indent 4 }}
{{- end }}
```
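To confirm that each rendered ConfigMap now stays under the limit, the rendered output can be measured per document. A small helper for that (a sketch; the chart path in the usage comment and the standalone `---` separator convention are assumptions):

```shell
#!/bin/sh
# Sketch: report the byte size of each YAML document read from stdin,
# assuming documents are separated by standalone "---" lines, as in
# `helm template` output.
doc_sizes() {
  awk 'BEGIN { n = 1 }
       /^---$/ { print "doc " n ": " b+0 " bytes"; b = 0; n++; next }
       { b += length($0) + 1 }           # +1 for the trailing newline
       END { print "doc " n ": " b+0 " bytes" }'
}

# Usage (hypothetical local chart path):
#   helm template ./altinity-clickhouse-operator | doc_sizes
```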
and then fixing the volume mounts in the hook Job's pod spec:
```yaml
    spec:
      serviceAccountName: {{ include "altinity-clickhouse-operator.fullname" . }}-crd-install
      restartPolicy: OnFailure
      {{- with .Values.crdHook.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.crdHook.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.crdHook.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
      - name: crd-install
        image: "{{ tpl $.Values.crdHook.image.repository $ }}:{{ tpl $.Values.crdHook.image.tag $ }}"
        imagePullPolicy: {{ .Values.crdHook.image.pullPolicy | default "IfNotPresent" }}
        command:
        - /bin/sh
        - -c
        - |
          set -e
          echo "Installing/Updating ClickHouse Operator CRDs..."
          for crd_file in /crds/*/*.yaml; do
            echo "Applying $(basename $crd_file)..."
            kubectl apply --server-side=true --force-conflicts -f "$crd_file"
          done
          echo "CRD installation completed successfully"
        volumeMounts:
        - name: crds-part1
          mountPath: /crds/part1
          readOnly: true
        - name: crds-part2
          mountPath: /crds/part2
          readOnly: true
        {{- with .Values.crdHook.resources }}
        resources:
          {{- toYaml . | nindent 10 }}
        {{- end }}
      volumes:
      - name: crds-part1
        configMap:
          name: {{ include "altinity-clickhouse-operator.fullname" . }}-crds-part1
      - name: crds-part2
        configMap:
          name: {{ include "altinity-clickhouse-operator.fullname" . }}-crds-part2
```

Note that the original single `mountPath: /crds` entry had to be dropped from the `volumes:` section (`mountPath` is not a valid volume field; mounting is handled per part under `volumeMounts:` above).
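For completeness, the templates above reference a `crdHook` values block. The fragment below shows the shape they assume; the defaults are illustrative guesses, not the chart's actual values:

```yaml
# Illustrative values.yaml fragment (keys mirror the template references above;
# the image repository/tag defaults are hypothetical).
crdHook:
  enabled: true
  image:
    repository: bitnami/kubectl   # assumed default image
    tag: "1.30"
    pullPolicy: IfNotPresent
  nodeSelector: {}
  affinity: {}
  tolerations: []
  resources: {}
```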