
[BUG] Reloader does not work for deployments in another namespace when HA is enabled #1102


Description

@CharlierZhang

Describe the bug
Hi team,
I deployed Reloader to cl-system, but the deployments I want it to watch live in other namespaces. Reloader does not reload any of those deployments, while deployments in the cl-system namespace itself are reloaded fine.
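For reference, the watched workloads carry the standard reloader.stakater.com/auto annotation. A minimal hypothetical example of one such target in another namespace (all names here are placeholders, not my real config):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                      # placeholder watched workload
  namespace: some-other-namespace       # placeholder: not cl-system
  annotations:
    reloader.stakater.com/auto: "true"  # marks the workload for Reloader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: app
        image: nginx:1.25
        envFrom:
        - configMapRef:
            name: my-config             # a change here should trigger a rollout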

I deploy the Reloader deployment with plain YAML.

Here is my Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 3
  labels:
    app: rgm-reloader
    app.kubernetes.io/managed-by: Helm
    chart: rgm-reloader-0.0.1-19f7078666e677628a700d90c0fe1db1b0e72d8c
    group: com.stakater.platform
    heritage: Helm
    provider: stakater
    release: rgm-reloader-3b5d664796
    version: v1.0.15
  name: xxx
  namespace: xxx
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      xxx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        xxx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rgm-reloader
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - args:
        - >-
          --namespaces-to-ignore=xxxx
        - '--enable-ha=true'
        env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: xxx
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /live
            port: http
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: rgm-reloader
        ports:
        - containerPort: 9090
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /metrics
            port: http
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 200m
            memory: 512Mi
          requests:
            cpu: 50m
            memory: 128Mi
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: image-pull-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccount: xxx
      serviceAccountName: xxx
      terminationGracePeriodSeconds: 30

Here is my ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rgm-reloader-role
rules:
- apiGroups:
  - ''
  resources:
  - secrets
  - configmaps
  verbs:
  - list
  - get
  - watch
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  verbs:
  - list
  - get
  - update
  - patch
- apiGroups:
  - extensions
  resources:
  - deployments
  - daemonsets
  verbs:
  - list
  - get
  - update
  - patch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - get
  - update
- apiGroups:
  - ''
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
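As far as I understand, a ClusterRole only grants permissions cluster-wide when it is bound through a ClusterRoleBinding; if it were bound with a namespaced RoleBinding instead, the permissions would apply only inside that one namespace, which would match the symptom here. A minimal sketch of the binding I would expect (the binding name and subject namespace are my placeholders; the ServiceAccount must match the one referenced in the Deployment above):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rgm-reloader-role-binding   # placeholder name
subjects:
- kind: ServiceAccount
  name: xxx                         # the serviceAccountName from the Deployment
  namespace: cl-system              # the namespace Reloader runs in
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rgm-reloader-role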

I also noticed that others mentioned the parameter watchGlobally = true, but when I added it to my deployment YAML it gave me the error "there is no watchGlobally flag".
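For what it's worth, watchGlobally appears to be a value of the official Helm chart rather than a container flag, which would explain the "there is no watchGlobally flag" error when it is passed as an argument. Something along these lines in the chart values, assuming the upstream stakater/reloader chart (verify against your chart version):

# values.yaml override for the stakater/reloader Helm chart
reloader:
  watchGlobally: true   # true renders cluster-wide RBAC; false restricts Reloader to its own namespace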

Could you please help me find the root cause?

Reloader version: 1.4.12
