OCPBUGS-84308: fix(cpo) delete terminated MCD pods to retry in-place upgrades#8434

Open

PoornimaSingour wants to merge 1 commit into openshift:main from PoornimaSingour:OCPBUGS-84308
Conversation


@PoornimaSingour PoornimaSingour commented May 6, 2026

What this PR does / why we need it:

When an in-place MachineConfig daemon pod is prematurely terminated (e.g., by a forced node drain), it may transition to Succeeded or Failed phase without having completed the configuration update. Previously, reconcileUpgradePods did not check the pod's phase when it already existed, leaving the terminated pod in place and causing the upgrade to stall indefinitely.

Now, when an MCD pod exists in a terminal phase (Succeeded or Failed) on a node that still requires upgrading, the controller deletes the pod so it is recreated on the next reconciliation cycle.
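The core of the fix can be sketched as a small, self-contained Go function. This is an illustrative distillation only (all names here are hypothetical); the real controller operates on corev1.Pod objects and deletes via a client, which is omitted for brevity:

```go
package main

import "fmt"

// PodPhase mirrors the small subset of corev1.PodPhase values relevant here.
type PodPhase string

const (
	PodRunning   PodPhase = "Running"
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
)

// shouldDeleteUpgradePod is a hypothetical helper capturing the fix: an MCD
// upgrade pod in a terminal phase on a node that still needs the upgrade
// must be deleted so the next reconcile loop can recreate it. Running pods
// and pods on fully updated nodes are left alone by this check.
func shouldDeleteUpgradePod(phase PodPhase, nodeNeedsUpgrade bool) bool {
	terminal := phase == PodSucceeded || phase == PodFailed
	return terminal && nodeNeedsUpgrade
}

func main() {
	fmt.Println(shouldDeleteUpgradePod(PodFailed, true))     // true: delete and recreate
	fmt.Println(shouldDeleteUpgradePod(PodRunning, true))    // false: keep running pod
	fmt.Println(shouldDeleteUpgradePod(PodSucceeded, false)) // false: node already updated
}
```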

Which issue(s) this PR fixes:

Fixes: https://redhat.atlassian.net/browse/OCPBUGS-84308

Special notes for your reviewer:

Checklist:

  • Subject and description added to both commit and PR.
  • Relevant issues have been referenced.
  • This change includes docs.
  • This change includes unit tests.

Summary by CodeRabbit

  • Bug Fixes

    • Upgrade flow now removes terminated upgrade pods (Succeeded/Failed) so retries can proceed and in-place upgrades continue after prior attempts finish.
    • Error reporting and logging around upgrade pod reconciliation improved for clearer operational visibility.
  • Tests

    • Added unit tests covering upgrade pod lifecycle: deletion of terminated pods, retention of running pods, creation when missing, skip on terminating pods, and cleanup of idle pods.

@openshift-merge-bot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will utilize /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all jobs from the second stage, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 6, 2026

openshift-ci Bot commented May 6, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


coderabbitai Bot commented May 6, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

reconcileInPlaceUpgrade now returns errors from reconcileUpgradePods as “failed to reconcile upgrade pods”. reconcileUpgradePods was extended to detect upgrade Machine Config Daemon pods in Succeeded or Failed phases when the corresponding node still needs an upgrade; it logs the detection, deletes the terminated pod (ignoring NotFound), and relies on subsequent reconciles to recreate the pod. Existing behavior for Running pods, creating missing pods, and deleting idle pods for fully updated nodes is covered by a new TestReconcileUpgradePods unit test.

Sequence Diagram(s)

sequenceDiagram
    participant Controller as Controller
    participant API_Server as API Server
    participant Node as Node
    participant Pod as Upgrade Pod

    Controller->>API_Server: Get upgrade Pod for node
    API_Server-->>Controller: Return Pod (Running | Succeeded | Failed | NotFound)

    alt Pod is Running
        Controller->>Controller: Leave Pod unchanged
    else Pod is Succeeded or Failed and Node needs upgrade
        Controller->>API_Server: Log detection and Delete Pod
        API_Server-->>Controller: Delete response (Success / NotFound / Error)
        Controller->>Controller: Retry path will recreate pod later
    else Pod NotFound
        Controller->>API_Server: Create upgrade Pod
        API_Server-->>Controller: Create response (Success / Error)
    end
🚥 Pre-merge checks (11 passed, 1 warning)

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (11 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly summarizes the main change: deleting terminated MCD pods to allow in-place upgrades to retry, which directly matches the core functionality described in the changeset.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.
Stable And Deterministic Test Names ✅ Passed All test names in TestReconcileUpgradePods are stable and deterministic. No dynamic content, UUIDs, timestamps, or generated identifiers appear in test titles. Test data in bodies, not names.
Test Structure And Quality ✅ Passed Check for Ginkgo, code uses Go testing. Table-driven single responsibility. Fake clients - no cleanup. Context correct. Assertions meaningful. Follows codebase patterns.
Microshift Test Compatibility ✅ Passed TestReconcileUpgradePods is a standard Go unit test, not Ginkgo e2e. The check applies only to Ginkgo e2e tests.
Single Node Openshift (Sno) Test Compatibility ✅ Passed The PR adds only standard Go unit tests (TestReconcileUpgradePods), not Ginkgo e2e tests. The custom check applies specifically to new Ginkgo e2e tests. No Ginkgo imports or test markers present.
Topology-Aware Scheduling Compatibility ✅ Passed PR modifies pod termination handling only. No new scheduling constraints introduced. Uses hostname nodeSelector (necessary for target node) and existing wildcard toleration.
Ote Binary Stdout Contract ✅ Passed No OTE Stdout Contract violations. Files are controller/test code with no process-level entry points or stdout writes.
Ipv6 And Disconnected Network Test Compatibility ✅ Passed TestReconcileUpgradePods is a standard Go unit test with fake clients, not a Ginkgo e2e test. The custom check applies only to Ginkgo e2e tests, making it not applicable.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Comment @coderabbitai help to get the list of available commands and usage tips.


openshift-ci Bot commented May 6, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: PoornimaSingour
Once this PR has been reviewed and has the lgtm label, please assign sjenning for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added area/control-plane-operator Indicates the PR includes changes for the control plane operator - in an OCP release and removed do-not-merge/needs-area labels May 6, 2026
@PoornimaSingour PoornimaSingour changed the title fix(cpo): delete terminated MCD pods to retry in-place upgrades OCPBUGS-84308: fix(cpo) delete terminated MCD pods to retry in-place upgrades May 6, 2026
@openshift-ci-robot openshift-ci-robot added jira/severity-moderate Referenced Jira bug's severity is moderate for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. labels May 6, 2026
@openshift-ci-robot

@PoornimaSingour: This pull request references Jira Issue OCPBUGS-84308, which is invalid:

  • expected the bug to target the "5.0.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this: (quotes the PR description above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label May 6, 2026


codecov Bot commented May 6, 2026

Codecov Report

❌ Patch coverage is 42.85714% with 8 lines in your changes missing coverage. Please review.
✅ Project coverage is 40.00%. Comparing base (640ed89) to head (8127411).
⚠️ Report is 133 commits behind head on main.

Files with missing lines Patch % Lines
...tor/controllers/inplaceupgrader/inplaceupgrader.go 42.85% 7 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8434      +/-   ##
==========================================
+ Coverage   37.39%   40.00%   +2.60%     
==========================================
  Files         751      751              
  Lines       91806    92876    +1070     
==========================================
+ Hits        34333    37154    +2821     
+ Misses      54838    53027    -1811     
- Partials     2635     2695      +60     
Files with missing lines Coverage Δ
...tor/controllers/inplaceupgrader/inplaceupgrader.go 58.90% <42.85%> (+1.84%) ⬆️

... and 59 files with indirect coverage changes

Flag Coverage Δ
cmd-support 34.09% <ø> (+1.53%) ⬆️
cpo-hostedcontrolplane 40.56% <ø> (+4.08%) ⬆️
cpo-other 40.16% <42.85%> (+2.42%) ⬆️
hypershift-operator 50.52% <ø> (+2.66%) ⬆️
other 31.54% <ø> (+3.76%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.



@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go (1)

736-738: ⚡ Quick win

Tighten deleted-pod assertion to NotFound instead of any error.

HaveOccurred() can pass for unrelated failures. Asserting IsNotFound makes the test intent explicit and failures clearer.

Proposed test hardening
+import apierrors "k8s.io/apimachinery/pkg/api/errors"
...
 			if tc.expectPodDeleted {
 				g.Expect(getErr).To(HaveOccurred(), "expected pod to be deleted")
+				g.Expect(apierrors.IsNotFound(getErr)).To(BeTrue(), "expected pod get to return NotFound after deletion")
 			}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go`
around lines 736 - 738, Replace the loose assertion
g.Expect(getErr).To(HaveOccurred()) for deleted pods with a NotFound-specific
check: import k8s.io/apimachinery/pkg/api/errors as apierrors (or errors alias
used elsewhere) and replace the assertion with
g.Expect(apierrors.IsNotFound(getErr)).To(BeTrue(), "expected pod to be
NotFound") when tc.expectPodDeleted is true, referencing the tc.expectPodDeleted
branch and the getErr variable so the test fails only for a NotFound error.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 21221ab1-79e2-4d7c-8429-c9fb954b5229

📥 Commits

Reviewing files that changed from the base of the PR and between 7ac2953 and 6fbc013.

📒 Files selected for processing (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go`:
- Around line 762-764: The test currently checks for a deleted pod using a broad
error assertion (g.Expect(getErr).To(HaveOccurred()) when tc.expectPodDeleted is
true); change this to assert specifically that the error is a NotFound error by
using the Kubernetes API errors helper on the getErr variable (e.g., assert
apierrors.IsNotFound(getErr) via the testing framework) so the deleted-pod
branch only passes for NotFound and not for other client/read errors; locate the
check guarded by tc.expectPodDeleted in inplaceupgrader_test.go and replace the
HaveOccurred() assertion with a specific IsNotFound assertion referencing
getErr.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 8b8c3bd2-d52a-403e-a20b-a5bbb89688c9

📥 Commits

Reviewing files that changed from the base of the PR and between 6fbc013 and b5637a4.

📒 Files selected for processing (1)
  • control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go

@PoornimaSingour

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels May 6, 2026
@openshift-ci-robot

@PoornimaSingour: This pull request references Jira Issue OCPBUGS-84308, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (5.0.0) matches configured target version for branch (5.0.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhuynh@redhat.com), skipping review request.

Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@PoornimaSingour

@coderabbitai full review


coderabbitai Bot commented May 6, 2026

✅ Actions performed

Full review triggered.

@openshift-ci-robot

@PoornimaSingour: This pull request references Jira Issue OCPBUGS-84308, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (5.0.0) matches configured target version for branch (5.0.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhuynh@redhat.com), skipping review request.

Details

In response to this: (quotes the PR description above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go (1)

352-363: ⚡ Quick win

Deleted-pod retry has no requeue guarantee — upgrade may stall.

After the terminated pod is deleted, reconcileUpgradePods returns nil, reconcileInPlaceUpgrade returns nil, and Reconcile returns ctrl.Result{} (no requeue). Because the deletion doesn't mutate any node annotation, no node-watch event fires to trigger a follow-up reconciliation. If no other MachineSet event arrives, the replacement pod is never created and the upgrade stalls indefinitely — which is exactly the problem this PR is fixing.

Consider either propagating a boolean "needs requeue" flag back up through reconcileInPlaceUpgrade to Reconcile, or returning ctrl.Result{RequeueAfter: ...} whenever at least one pod was deleted:

💡 Sketch of the fix
-func (r *Reconciler) reconcileUpgradePods(...) error {
+func (r *Reconciler) reconcileUpgradePods(...) (bool, error) {
     ...
+    podDeleted := false
     ...
     } else if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
         ...
         if err := hostedClusterClient.Delete(ctx, pod); err != nil {
             ...
-            return fmt.Errorf("error deleting terminated upgrade MCD pod for node %s: %w", node.Name, err)
+            return false, fmt.Errorf("error deleting terminated upgrade MCD pod for node %s: %w", node.Name, err)
         }
+        podDeleted = true
     }
     ...
-    return nil
+    return podDeleted, nil
 }

And in reconcileInPlaceUpgrade / Reconcile, propagate the flag to return ctrl.Result{RequeueAfter: 5 * time.Second}.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go`
around lines 352 - 363, reconcileUpgradePods currently deletes terminated
upgrade pods but returns nil which causes reconcileInPlaceUpgrade and Reconcile
to not requeue and the replacement pod may never be created; change
reconcileUpgradePods to return a (bool, error) or similar indicator (e.g.,
deletedPod bool) when it deletes at least one pod, update
reconcileInPlaceUpgrade to propagate that flag up, and have Reconcile return
ctrl.Result{RequeueAfter: 5 * time.Second} (or another short duration) whenever
the flag indicates a pod was deleted so the controller will immediately requeue
and create the replacement pod.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go`:
- Around line 692-715: Update the test case that sets existingPod with a
DeletionTimestamp and Finalizers so it actually verifies the "skip" behavior
instead of just checking getErr; in the assertion block that currently checks
getErr (references variables existingPod, expectPodSkipped and the retrieved pod
variable), either assert that the retrieved pod's DeletionTimestamp is non-nil
(e.g., pod.DeletionTimestamp != nil) to prove we hit the skip path, or
replace/add a fake-client interceptor (WithInterceptorFuncs) to spy on Delete
and assert Delete was never called for that pod — do not rely solely on getErr.

In
`@control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go`:
- Around line 352-363: reconcileUpgradePods now deletes both idle and terminated
pods but the error wrap at the caller still says "failed to delete idle upgrade
pods", which is misleading; update the error wrapping at the call site that
wraps the error from hostedClusterClient.Delete (the delete call inside
reconcileUpgradePods) to use a neutral message like "failed to delete upgrade
pod for node %s" or include the pod phase/node context so failures deleting
terminated pods are accurately described; adjust the fmt.Errorf wrapper (the
existing "failed to delete idle upgrade pods" message) to reference the upgrade
pod deletion generically (or include pod.Status.Phase) so logs reflect the
actual deletion target.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 7df03c82-4975-43fe-9170-34a23bcc9534

📥 Commits

Reviewing files that changed from the base of the PR and between 7ac2953 and c82c543.

📒 Files selected for processing (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/inplaceupgrader/inplaceupgrader_test.go

@PoornimaSingour

@coderabbitai review

coderabbitai Bot commented May 11, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@PoornimaSingour PoornimaSingour marked this pull request as ready for review May 12, 2026 05:45
@openshift-ci openshift-ci Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 12, 2026
@openshift-ci openshift-ci Bot requested review from cblecker and csrwng May 12, 2026 05:45

openshift-ci Bot commented May 12, 2026

@PoornimaSingour: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@PoornimaSingour

/auto-cc

@openshift-ci openshift-ci Bot requested a review from jparrill May 12, 2026 10:01
@cblecker

/uncc

@openshift-ci openshift-ci Bot removed the request for review from cblecker May 12, 2026 19:32
fix(cpo): delete terminated MCD pods to retry in-place upgrades

When the machine-config-daemon (MCD) pod terminates with an error during
an in-place node upgrade, the upgrade stalls because the reconciler does
not create a replacement pod — the terminated pod still exists.

This change:
- Detects terminated (Succeeded/Failed) MCD pods and deletes them so the
  next reconcile loop creates a fresh pod.
- Skips pod creation when the existing pod has a DeletionTimestamp set
  (avoids racing with the API server).
- Requires MCD state "Done" before marking a node as updated, preventing
  premature success when the daemon is still running.
- Adds a 30s periodic requeue while an upgrade is in progress so that
  force-deleted pod events that the controller misses are recovered.
- Adds unit tests covering the DeletionTimestamp guard path.

Signed-off-by: Poornima Singour <psingour@redhat.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Labels

area/control-plane-operator Indicates the PR includes changes for the control plane operator - in an OCP release jira/severity-moderate Referenced Jira bug's severity is moderate for the branch this PR is targeting. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
