
Add log filters to reduce noise in production#11344

Open
raks-tt wants to merge 2 commits into redhat-appstudio:main from raks-tt:add_log_filters_prod

Conversation

@raks-tt
Contributor

@raks-tt raks-tt commented Apr 17, 2026

Drop Tekton controller reconciliation conflicts, finalizer cleanup errors, duplicate database key errors, leader election locking conflicts, finalizers not found, resource requests in progress, label update failures, admission controller nil annotations, and TLS EOF noise.

@openshift-ci openshift-ci bot requested review from TominoFTW and avi-biton April 17, 2026 17:50
@github-actions
Contributor

github-actions bot commented Apr 17, 2026

Kustomize Render Diff

Comparing 079795679cae13681c

Component Environment Changes
components/monitoring/logging/production/base production +135 -0
components/monitoring/logging/production/kflux-fedora-01 production +135 -0
components/monitoring/logging/production/kflux-ocp-p01 production +135 -0
components/monitoring/logging/production/kflux-osp-p01 production +135 -0
components/monitoring/logging/production/kflux-prd-es01 production +135 -0
components/monitoring/logging/production/kflux-prd-rh02 production +135 -0
components/monitoring/logging/production/kflux-prd-rh03 production +135 -0
components/monitoring/logging/production/kflux-rhel-p01 production +135 -0
components/monitoring/logging/production/stone-prod-p01 production +135 -0
components/monitoring/logging/production/stone-prod-p02 production +135 -0
components/monitoring/logging/staging/base staging +3 -1
components/monitoring/logging/staging/kflux-stg-es01 staging +3 -1
components/monitoring/logging/staging/stone-stage-p01 staging +3 -1

Total: 13 components, +1359 -3 lines

📋 Full diff available in the workflow summary and as a downloadable artifact.

@codecov

codecov bot commented Apr 17, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 51.62%. Comparing base (5bebec4) to head (beee9f3).
⚠️ Report is 6 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main   #11344   +/-   ##
=======================================
  Coverage   51.62%   51.62%           
=======================================
  Files          18       18           
  Lines        1263     1263           
=======================================
  Hits          652      652           
  Misses        539      539           
  Partials       72       72           
Flag Coverage Δ
go 51.62% <ø> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


@raks-tt
Contributor Author

raks-tt commented Apr 17, 2026

/review

@qodo-code-review

qodo-code-review bot commented Apr 17, 2026

PR Reviewer Guide 🔍

(Review updated until commit beee9f3)

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 No relevant tests
🔒 Security concerns

Log visibility reduction:
This change drops multiple classes of events (including in the audit pipeline) based on message/field regex matching. If patterns are overly broad or fields vary between components, this can hide security-relevant signals (e.g., webhook/admission anomalies, coordination/leader-election instability that might indicate disruption). Ensure the drop rules are tightly scoped, validated against production log samples, and approved for audit-index retention requirements.

⚡ Recommended focus areas for review

Fragile Patch

The JSONPatch relies on fixed pipeline indices and names; any upstream reordering/renaming of pipelines will cause patch application failures or misapplied filterRefs. Verify this is stable across ClusterLogForwarder versions/environments and consider patching by matching objects more robustly if possible.

# Validate pipeline ordering before applying filters (fail fast if base changes)
- op: test
  path: /spec/pipelines/0/name
  value: fluentd-forward-logs-to-splunk-rhtap-app-index
- op: test
  path: /spec/pipelines/1/name
  value: fluentd-forward-logs-to-splunk-rhtap-audit-index
Regex Accuracy

Multiple drop rules depend on regexes matching specific fields (e.g., .structured.error vs .message) and specific substrings (e.g., leaderelection patterns, admission controller message, TLS EOF). Validate these patterns against real log samples to ensure they match intended noise and don’t accidentally drop unrelated events due to broad namespace regexes or case-insensitive matching.

# Drop Tekton reconciliation conflicts (optimistic locking)
- op: add
  path: /spec/filters/-
  value:
    name: drop-tekton-reconcile-conflicts
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*'
          - field: .structured.error
            matches: '(?i)Operation cannot be fulfilled on (pipelineruns?|taskruns?)\.tekton\.dev'
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*'
          - field: .structured.msg
            matches: '(?i)Reconcile error'
          - field: .structured.error
            matches: '(?i)(pipelinerun|taskrun|tekton)'
      - test:
          - field: .kubernetes.namespace_name
            matches: 'tekton-kueue'
          - field: .message
            matches: '(?i)error.*Operation cannot be fulfilled on (pipelineruns?|taskruns?)\.tekton\.dev'
# Drop "resource not found" during finalizer cleanup
- op: add
  path: /spec/filters/-
  value:
    name: drop-finalizer-not-found
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*'
          - field: .structured.error
            matches: '(?i)(pipelineruns|taskruns)\.tekton\.dev .* not found'
          - field: .structured.error
            matches: '(?i)finalizers?'
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*'
          - field: .structured.error
            matches: '(?i)failed to (set|clear) finalizers.*not found'
# Drop "resource request in progress"
- op: add
  path: /spec/filters/-
  value:
    name: drop-resource-request-in-progress
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*'
          - field: .structured.error
            matches: '(?i)resource request in progress'
          - field: .structured.error
            matches: '(?i)(TaskRun|PipelineRun)'
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*'
          - field: .structured.error
            matches: '(?i)TaskRun prepare error.*resource request in progress'
# Drop repetitive label update warnings
- op: add
  path: /spec/filters/-
  value:
    name: drop-tekton-label-update-failures
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*'
          - field: .structured.error
            matches: '(?i)Failed to update (PipelineRun|TaskRun) labels'
          - field: .structured.error
            matches: '(?i)Operation cannot be fulfilled'
# Drop admission controller nil annotation logs (audit noise)
- op: add
  path: /spec/filters/-
  value:
    name: drop-admission-controller-nil-annotations
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines'
          - field: .structured.msg
            matches: 'remote admission controller audit annotations=map\[string\]string\(nil\)'
# Drop tekton-results duplicate key database errors
- op: add
  path: /spec/filters/-
  value:
    name: drop-tekton-results-duplicate-key
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'tekton-results'
          - field: .structured.error
            matches: '(?i)duplicate key value violates unique constraint.*results_by_name'
# Drop leader election optimistic locking (Kubernetes coordination noise)
- op: add
  path: /spec/filters/-
  value:
    name: drop-leader-election-conflicts
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-operators|openshift-pipelines|openshift-gitops|tekton-.*'
          - field: .message
            matches: '(?i)leaderelection\.go.*Failed to update lock optimistically.*leases/'
# Drop TLS EOF errors ONLY (health check noise), KEEP bad certificate errors for security audit
- op: add
  path: /spec/filters/-
  value:
    name: drop-tls-eof-noise
    type: drop
    drop:
      - test:
          - field: .kubernetes.pod_name
            matches: 'tekton.*webhook'
          - field: .structured.msg
            matches: '(?i)http: TLS handshake error.*EOF'
# Drop Kueue Workload reconciliation noise
- op: add
  path: /spec/filters/-
  value:
    name: drop-kueue-reconciliation-noise
    type: drop
    drop:
      # Kueue Workload optimistic locking
      - test:
          - field: .kubernetes.namespace_name
            matches: 'tekton-kueue'
          - field: .message
            matches: '(?i)(Removing finalizer|Reconciler error).*error.*Operation cannot be fulfilled on workloads\.kueue\.x-k8s\.io'
      # PipelineRun not found during unsuspending
      - test:
          - field: .kubernetes.namespace_name
            matches: 'tekton-kueue'
          - field: .message
            matches: '(?i)Unsuspending job.*error.*pipelineruns\.tekton\.dev.*not found'
Filter Coverage

Filters are added and then referenced in both pipelines. Confirm every added filter is supported by the deployed logging stack, that filter ordering is correct (e.g., multiline detection before/after drops as intended), and that dropping in the audit pipeline is acceptable (audit streams may require higher retention/visibility).

# Drop empty lines
- op: add
  path: /spec/filters/-
  value:
    name: drop-empty-lines
    type: drop
    drop:
      - test:
          - field: .message
            matches: '^\s*$'
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-empty-lines
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-empty-lines
# Enable multi-line exception detection
- op: add
  path: /spec/filters/-
  value:
    name: multiline-exception
    type: detectMultilineException
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: multiline-exception
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: multiline-exception

# application index filters (Tekton drops target openshift-pipelines and tekton-* namespaces)
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-tekton-reconcile-conflicts
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-finalizer-not-found
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-resource-request-in-progress
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-tekton-label-update-failures
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-admission-controller-nil-annotations
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-tekton-results-duplicate-key
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-leader-election-conflicts
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-tls-eof-noise
- op: add
  path: /spec/pipelines/0/filterRefs/-
  value: drop-kueue-reconciliation-noise
# audit index filters
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-tekton-reconcile-conflicts
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-finalizer-not-found
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-resource-request-in-progress
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-tekton-label-update-failures
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-admission-controller-nil-annotations
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-tekton-results-duplicate-key
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-leader-election-conflicts
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-tls-eof-noise
- op: add
  path: /spec/pipelines/1/filterRefs/-
  value: drop-kueue-reconciliation-noise

@qodo-code-review

Review Summary by Qodo

Add production log filters to reduce noise

✨ Enhancement


Walkthroughs

Description
• Add comprehensive log filters to ClusterLogForwarder for production
• Drop Tekton reconciliation conflicts and optimistic locking errors
• Filter finalizer cleanup, resource request, and label update failures
• Remove admission controller, database, leader election, and TLS noise
• Apply filters to both application and audit index pipelines
Diagram
flowchart LR
  A["ClusterLogForwarder<br/>Configuration"] -->|"Add filters patch"| B["Filter Definitions"]
  B -->|"drop-empty-lines"| C["Application &<br/>Audit Pipelines"]
  B -->|"multiline-exception"| C
  B -->|"Tekton/Kueue<br/>reconciliation"| C
  B -->|"Finalizer/Resource<br/>cleanup"| C
  B -->|"Leader election &<br/>TLS noise"| C
  C -->|"Filtered logs"| D["Splunk Index"]


File Changes

1. components/monitoring/logging/production/base/kustomization.yaml ⚙️ Configuration changes +6/-0

Add log filters patch reference

• Add reference to new log filters patch file
• Target ClusterLogForwarder resource for patch application
• Enable filter configuration in production logging stack

components/monitoring/logging/production/base/kustomization.yaml
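
The PR diff itself only names this file; the entry below is a sketch of how such a JSON6902 patch reference typically looks in a kustomization.yaml. The patch filename comes from this PR, but the `group`/`version` of the ClusterLogForwarder target are assumptions about the deployed logging API and should be checked against the base resource.

```yaml
# Sketch only — not the verbatim diff. The target group/version
# (logging.openshift.io/v1) is an assumption; verify against the
# apiVersion used by the base ClusterLogForwarder.
patches:
  - path: patches/configure-logforwarder-filters-patch.yaml
    target:
      group: logging.openshift.io
      version: v1
      kind: ClusterLogForwarder
```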


2. components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml ✨ Enhancement +227/-0

Define comprehensive production log filters

• Create 11 drop filters targeting Tekton, Kueue, and system noise
• Filter empty lines and enable multiline exception detection
• Drop reconciliation conflicts, finalizer errors, and resource request failures
• Remove admission controller nil annotations and database duplicate key errors
• Filter leader election locking conflicts and TLS EOF handshake errors
• Apply all filters to both application index and audit index pipelines

components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml



@qodo-code-review

qodo-code-review bot commented Apr 17, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (0) 📎 Requirement gaps (0)



Remediation recommended

1. Brittle pipeline index patch 🐞 Bug ⚙ Maintainability
Description
The JSON6902 patch appends filterRefs to /spec/pipelines/0 and /spec/pipelines/1 by array index, so
any future pipeline reordering/insertion will apply filters to the wrong pipeline (or break patch
application). This is fragile because the patch doesn’t assert which pipeline names it expects at
index 0/1.
Code

components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[R12-17]

+- op: add
+  path: /spec/pipelines/0/filterRefs/-
+  value: drop-empty-lines
+- op: add
+  path: /spec/pipelines/1/filterRefs/-
+  value: drop-empty-lines
Evidence
The production patch repeatedly targets pipelines by numeric index (0/1) rather than by pipeline
name; the base ClusterLogForwarder currently has exactly two pipelines (application and
infrastructure/audit), but the patch does not guard against future changes in ordering, making
misapplication likely if the base evolves.

components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[12-17]
components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[173-227]
components/monitoring/logging/base/configure-logging/configure-logforwarder.yaml[39-57]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The JSON6902 patch wires filters to pipelines using hard-coded indices (`/spec/pipelines/0` and `/spec/pipelines/1`). If the base ClusterLogForwarder ever changes pipeline ordering, the patch will silently apply filters to the wrong pipeline.

### Issue Context
Today the base has two pipelines in a known order, but the overlay does not enforce that contract.

### Fix Focus Areas
- Add JSONPatch `test` ops to assert pipeline names before any `add` operations so the patch fails fast if ordering changes.
- (Optional) Add similar `test` ops for `inputRefs` to confirm pipeline intent.

- components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[12-29]
- components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[173-227]
- components/monitoring/logging/base/configure-logging/configure-logforwarder.yaml[39-57]

Example pattern to add near the top of the patch:
```yaml
- op: test
  path: /spec/pipelines/0/name
  value: fluentd-forward-logs-to-splunk-rhtap-app-index
- op: test
  path: /spec/pipelines/1/name
  value: fluentd-forward-logs-to-splunk-rhtap-audit-index
```



2. Leader election drop too broad 🐞 Bug ◔ Observability
Description
The drop-leader-election-conflicts filter drops any log with a matching leader-election message
without scoping to specific namespaces/pods/components. Because it is referenced by both pipelines,
it can hide leader-election churn signals from any workload routed through either pipeline.
Code

components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[R132-141]

+- op: add
+  path: /spec/filters/-
+  value:
+    name: drop-leader-election-conflicts
+    type: drop
+    drop:
+      - test:
+          - field: .message
+            matches: '(?i)leaderelection\.go.*Failed to update lock optimistically.*leases/'
+# Drop TLS EOF errors ONLY (health check noise), KEEP bad certificate errors for security audit
Evidence
The filter condition only checks .message and has no Kubernetes scoping fields
(namespace/pod/container), yet it is added into both pipeline filter chains. The base
ClusterLogForwarder shows pipeline[0] handles application input and pipeline[1] handles
infrastructure and audit inputs, so this unscoped drop can affect multiple log classes.

components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[132-141]
components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[192-224]
components/monitoring/logging/base/configure-logging/configure-logforwarder.yaml[42-56]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`drop-leader-election-conflicts` drops logs purely by matching `.message`, with no namespace/pod/component constraints. This can unintentionally suppress leader-election related logs beyond the intended Tekton/noise sources.

### Issue Context
The filter is referenced from both pipelines, and those pipelines cover different inputs (`application` vs `infrastructure`/`audit`).

### Fix Focus Areas
- Restrict the filter by adding additional `test` clauses (e.g., `.kubernetes.namespace_name` and/or `.kubernetes.pod_name`) to only match the intended controllers/namespaces.
- If the goal is Tekton-only noise reduction, align scoping with the other Tekton filters (openshift-pipelines, tekton-*, tekton-results, tekton-kueue).

- components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[132-141]
- components/monitoring/logging/production/base/patches/configure-logforwarder-filters-patch.yaml[192-224]

Example tightening:
```yaml
- op: add
  path: /spec/filters/-
  value:
    name: drop-leader-election-conflicts
    type: drop
    drop:
      - test:
          - field: .kubernetes.namespace_name
            matches: 'openshift-pipelines|tekton-.*|tekton-results|tekton-kueue'
          - field: .message
            matches: '(?i)leaderelection\.go.*Failed to update lock optimistically.*leases/'
```




raks-tt added 2 commits April 17, 2026 11:45
Drop Tekton controller reconciliation conflicts, finalizer cleanup errors, duplicate database key errors, leader election locking conflicts, finalizers not found, resource requests in progress, label update failures, admission controller nil annotations, and TLS EOF noise.
@raks-tt raks-tt force-pushed the add_log_filters_prod branch from 8332153 to beee9f3 Compare April 17, 2026 18:49
@raks-tt
Contributor Author

raks-tt commented Apr 17, 2026

/review

@qodo-code-review

Persistent review updated to latest commit beee9f3

Contributor

@pacho-rh pacho-rh left a comment


/lgtm

@openshift-ci

openshift-ci bot commented Apr 17, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pacho-rh, raks-tt

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

