CNTRLPLANE-3371: Fix AllowedCIDRs e2e test for Route-based KAS#8469
bryan-cox wants to merge 1 commit into
Conversation
Pipeline controller notification: For optional jobs, comment. This repository is configured in: LGTM mode

Skipping CI for Draft Pull Request.
@bryan-cox: This pull request references CNTRLPLANE-3371, which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the task to target the "5.0.0" version, but no target version was set.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration: Repository YAML (base), Central YAML (inherited). Review profile: CHILL. Plan: Enterprise. Run ID:
📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough

ValidateKubeAPIServerAllowedCIDRs now passes the guest REST config into ensureAPIServerAllowedCIDRs. ensureAPIServerAllowedCIDRs first waits for the control plane to reconcile HostedCluster.Spec.Networking.APIServer.AllowedCIDRBlocks into the downstream Service.spec.LoadBalancerSourceRanges (the target Service is selected by publishing strategy and cloud-specific rules). It then polls reachability by creating a fresh guest kubeclient on each attempt (copying the rest.Config with a custom Dial) and calling ServerVersion() to verify the network restrictions.

Sequence Diagram(s)

sequenceDiagram
participant Test as Test Harness
participant CP as Control-Plane Reconciler
participant LB as Downstream Service/LoadBalancer
participant GuestAPI as Guest kube-apiserver
Test->>CP: Set HostedCluster.Spec.Networking.APIServer.AllowedCIDRBlocks
Note right of CP: Reconciler selects target Service based on publishing strategy/cloud
CP->>LB: Update Service.spec.LoadBalancerSourceRanges
loop Wait for reconciliation
Test->>LB: GET Service.spec.LoadBalancerSourceRanges
alt Ranges match expected
Note right of Test: Begin reachability polling
loop Reachability attempt
Test->>GuestAPI: Create fresh kubeclient (copy rest.Config + custom Dial) and call ServerVersion()
GuestAPI-->>Test: respond (reachable/unreachable)
end
else Not reconciled
Test-->>Test: sleep and retry
end
end
Suggested reviewers
🚥 Pre-merge checks: ✅ 10 passed, ❌ 2 failed

❌ Failed checks (2 warnings)
✅ Passed checks (10 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
## main #8469 +/- ##
==========================================
+ Coverage 37.49% 40.00% +2.51%
==========================================
Files 751 751
Lines 91984 92863 +879
==========================================
+ Hits 34487 37147 +2660
+ Misses 54854 53024 -1830
- Partials 2643 2692 +49

See 58 files with indirect coverage changes.

Flags with carried forward coverage won't be shown. Click here to find out more.
/pipeline required
51d7116 to 6b609b0 (Compare)

/pipeline required
Scheduling tests matching the

Test Results: e2e-aws, e2e-aks
Total failed tests: 3
/retest

/test e2e-aws

/test e2e-aks-4-22

AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6
6b609b0 to 29672c9 (Compare)

/test e2e-aks

/test e2e-aws

AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

/test e2e-aws
I have all the evidence needed. Here is the complete analysis:

Test Failure Analysis Complete

Job Information

Error Summary
This is a CI infrastructure failure, not a test or code failure. The ci-operator pod for the

Root Cause
The CI pod could not be scheduled on the

The 2 nodes that were actually eligible for this pod type did not have enough memory to schedule it. Preemption was also not possible — the scheduler found no viable preemption victims on the memory-constrained nodes. The pod waited for 30 minutes (the default Prow scheduling timeout) before being terminated. This is a transient cluster capacity issue on

Recommendations

Evidence
/test security

Scheduling tests matching the
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bryan-cox, cblecker

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
@coderabbitai resume

✅ Actions performed: Reviews resumed.
AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

/retest

AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

/retest

AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

cc @muraee

/retest
Retesting once more — all three Azure failures appear to be infrastructure issues unrelated to this PR's changes:
AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6
43d818b to b7d3e11 (Compare)

/test e2e-aks

/lgtm

Tests from second stage were triggered manually. Pipeline can be controlled only manually, until HEAD changes. Use command to trigger second stage.

AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6
The ValidateKubeAPIServerAllowedCIDRs test fails on v2 Azure self-managed clusters because KAS uses Route publishing strategy (via external-dns-domain), not LoadBalancer. Two fixes: 1. Wait for the downstream LB service (router or KAS LB) to have its LoadBalancerSourceRanges updated by the CPO before asserting KAS reachability. The target service is determined by the HC's APIServer publishing strategy. 2. Create a fresh kubeclient per poll iteration to prevent HTTP/2 connection reuse. Go's HTTP/2 transport multiplexes all requests over a single persistent TCP connection — if a prior request succeeded before Azure NSG rules took effect, subsequent requests bypass the restriction on the same connection. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
b7d3e11 to d4e7140 (Compare)

/test e2e-aks

AI Test Failure Analysis
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6
@bryan-cox: The following tests failed, say

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

/lgtm

Tests from second stage were triggered manually. Pipeline can be controlled only manually, until HEAD changes. Use command to trigger second stage.
What
Fixes the ValidateKubeAPIServerAllowedCIDRs e2e test so it passes on v2 Azure self-managed clusters where KAS uses the Route publishing strategy (via --external-dns-domain).

Why
The test was skipped in v2 CI (--ginkgo.skip="KAS allowed CIDRs") because it always failed. Both v1 and v2 Azure self-managed use the Route strategy for KAS, but v1 passes while v2 fails due to a difference in cluster lifecycle timing combined with HTTP/2 connection reuse.

Root cause: HTTP/2 connection reuse
The test reuses a single kubeclient.Clientset across all ServerVersion() poll iterations. Go's HTTP/2 transport multiplexes all requests over a single persistent TCP connection. If the first poll succeeds before the Azure NSG rules take effect, all subsequent polls reuse that connection and never observe the expected failure.

Why v1 passes but v2 fails: In v1, the cluster is created fresh inside TestCreateCluster, so the CPO is in its initial reconciliation burst; the router service's LoadBalancerSourceRanges and the corresponding Azure NSG rules are updated before the first ServerVersion() call. In v2, the cluster is pre-created and shared across tests, so the CPO is in steady state with longer reconciliation intervals. The first ServerVersion() call succeeds before the NSG rules catch up, and HTTP/2 holds that connection open for all subsequent polls.
Additional fix: missing downstream service wait

The test waits for AllowedCIDRBlocks to propagate from the HostedCluster to the HostedControlPlane, but does not wait for the CPO to reconcile the downstream LoadBalancer service's LoadBalancerSourceRanges. This race condition exists in both v1 and v2; v1 just happens to win the race because its CPO is still in active reconciliation. Adding an explicit wait makes the test correct rather than dependent on timing.
Changes

test/e2e/util/util.go (single file, three changes):
- ensureAPIServerAllowedCIDRs signature: *kubeclient.Clientset → *rest.Config, to enable fresh client creation per poll.
- Each ServerVersion() iteration creates a new client via kubeclient.NewForConfig(rest.CopyConfig(guestConfig)), preventing HTTP/2 connection reuse.
- A new allowedCIDRsTargetService() helper determines the correct LB service based on the APIServer publishing strategy (Route → router, LoadBalancer → platform-specific KAS LB). An Eventually block waits for the service's LoadBalancerSourceRanges to match before checking KAS reachability.
Test Plan

- go build -tags e2e ./test/e2e/... compiles
- go build -tags e2ev2 ./test/e2e/v2/... compiles
- go vet -tags e2e ./test/e2e/... passes

🤖 Generated with Claude Code
Summary by CodeRabbit
Bug Fixes
Tests
Chores