feat(e2e): add staging instance settings validation #8094
Compares FAPI `/v1/environment` responses between production and staging instance pairs to detect configuration drift (auth strategies, MFA, org settings, user requirements, etc.). Runs as a non-blocking warning step in the e2e-staging workflow before integration tests. Also runnable locally via `node scripts/validate-staging-instances.mjs`.
📝 Walkthrough: Adds a new "Validate Staging Instances" job to the e2e-staging workflow, backed by a `scripts/validate-staging-instances.mjs` script that compares production and staging instance environments.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@.github/workflows/e2e-staging.yml`:
- Around lines 57-59: The workflow's `integration-tests` job is missing an explicit dependency on the `validate-instances` job. Add `needs: [validate-instances]` to the `integration-tests` job definition so validation runs beforehand. Additionally, add tests covering the new validation behavior and workflow ordering: CI checks asserting that `validate-instances` is required before `integration-tests`, plus unit/integration tests for the validation script itself to prevent regressions.
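The suggested dependency could be sketched in the workflow like this (the job bodies are illustrative; only the `needs:` line is the actual fix):

```yaml
jobs:
  validate-instances:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: node scripts/validate-staging-instances.mjs

  integration-tests:
    # Explicit dependency so validation always runs first.
    needs: [validate-instances]
    runs-on: ubuntu-latest
    # ...existing steps...
```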
Group mismatches by section with aligned columns, collapse child fields when a parent attribute is disabled, show array diffs as missing/extra items instead of raw JSON, and collapse wholly missing social providers.
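The array-diff behavior described above can be sketched as follows (the function name is illustrative, not the script's export):

```javascript
// Given two primitive arrays, report which values are missing on staging
// and which are extra, instead of dumping both arrays as raw JSON.
function arrayDiff(prod, staging) {
  const prodSet = new Set(prod);
  const stagingSet = new Set(staging);
  return {
    missingOnStaging: [...new Set(prod.filter(v => !stagingSet.has(v)))],
    extraOnStaging: [...new Set(staging.filter(v => !prodSet.has(v)))],
  };
}

const diff = arrayDiff(['password', 'email_code', 'phone_code'], ['password', 'totp']);
console.log(diff.missingOnStaging); // [ 'email_code', 'phone_code' ]
console.log(diff.extraOnStaging); // [ 'totp' ]
```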
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/validate-staging-instances.mjs`:
- Around lines 61-67: `fetchEnvironment` makes an unbounded network call, and the validation summary can falsely report that all pairs matched even when some fetches failed. Give `fetchEnvironment` a timeout and implement `AbortController` (or equivalent) so the fetch is aborted after the timeout and throws a clear error. Then update the validation loop, which currently skips pairs on fetch errors, to record failures separately (e.g., maintain `matchedCount` and `failedCount`, or a pair-to-status map) so any fetch/validation error marks that pair as "failed" rather than being silently skipped. Finally, change the summary logic that prints "all N instance pair(s) matched" to make that claim only when `failedCount` is zero and `matchedCount` equals the total pair count; otherwise report exact counts and exit non-zero if any failures occurred.
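The fix described above can be modeled roughly like this (a simplified sketch with a stubbed fetch; `validatePairs` and the stub are illustrative, not the script's actual shape):

```javascript
// Abort the environment fetch after `timeoutMs` and surface the failure
// so the summary can distinguish "matched" from "failed to fetch".
async function fetchEnvironment(fetchImpl, url, timeoutMs = 10_000) {
  const res = await fetchImpl(url, { signal: AbortSignal.timeout(timeoutMs) });
  if (!res.ok) throw new Error(`Failed to fetch ${url}: ${res.status}`);
  return res.json();
}

async function validatePairs(fetchImpl, urls) {
  let matchedCount = 0;
  let failedCount = 0;
  for (const url of urls) {
    try {
      await fetchEnvironment(fetchImpl, url);
      matchedCount++;
    } catch {
      failedCount++; // recorded, not silently skipped
    }
  }
  // Only claim success when nothing failed.
  return failedCount === 0
    ? `all ${matchedCount} instance pair(s) matched`
    : `${matchedCount} matched, ${failedCount} failed to fetch`;
}

// Stubbed fetch: one URL succeeds, one rejects.
const stub = async url =>
  url.includes('bad') ? Promise.reject(new Error('timeout')) : { ok: true, json: async () => ({}) };
console.log(await validatePairs(stub, ['https://good', 'https://bad']));
// → 1 matched, 1 failed to fetch
```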
Add a 10s fetch timeout via `AbortSignal.timeout`. Track fetch failures separately so the summary never falsely reports "all matched" when fetches failed.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/validate-staging-instances.mjs`:
- Around lines 39-42: `loadKeys` currently calls `JSON.parse(process.env[envVar])` directly, which can throw and abort the whole run. Change `loadKeys` to catch `JSON.parse` errors and validate that each returned pair has a string `pair.*.pk`, converting malformed JSON or invalid pairs into explicit load/pair failures instead of throwing (e.g., return a result object `{ok: false, error: "..."}`, or an array in which invalid entries are flagged). Update the callers that consume `loadKeys` (and the similar code paths referenced by the other occurrences using the same logic) to handle these failure objects by recording a per-pair validation error rather than letting the exception short-circuit the script. Ensure `main().catch` still exits non-zero only for fatal errors, while per-pair errors are reported in the summary.
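The result-object pattern this comment describes could look roughly like this (a simplified `loadKeys` for illustration; the script's real signature takes an env-var name and file path):

```javascript
// Parse a JSON blob of { name: { pk } } pairs without letting malformed
// input throw: every problem is returned as data the caller can report.
function loadKeys(rawJson) {
  let parsed;
  try {
    parsed = JSON.parse(rawJson);
  } catch (err) {
    return { ok: false, keys: {}, errors: [`invalid JSON: ${err.message}`] };
  }
  const keys = {};
  const errors = [];
  const entries = parsed && typeof parsed === 'object' && !Array.isArray(parsed) ? parsed : {};
  for (const [name, entry] of Object.entries(entries)) {
    if (entry && typeof entry.pk === 'string') {
      keys[name] = entry;
    } else {
      errors.push(`"${name}": missing or invalid pk`);
    }
  }
  return { ok: errors.length === 0, keys, errors };
}

console.log(loadKeys('not json').ok); // false
console.log(loadKeys('{"next":{"pk":"pk_live_abc"}}').keys.next.pk); // pk_live_abc
console.log(loadKeys('{"bad":{}}').errors.length); // 1
```

Because errors come back as data, a caller can record them per pair and keep going instead of letting one bad env var kill the run.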
`scripts/validate-staging-instances.mjs`:

```js
#!/usr/bin/env node

/**
 * Validates that staging Clerk instances have the same settings as their
 * production counterparts by comparing FAPI /v1/environment responses.
 *
 * Usage:
 *   node scripts/validate-staging-instances.mjs
 *
 * Reads keys from INTEGRATION_INSTANCE_KEYS / INTEGRATION_STAGING_INSTANCE_KEYS
 * env vars, or from integration/.keys.json / integration/.keys.staging.json.
 */

import { readFileSync } from 'node:fs';
import { resolve } from 'node:path';

const STAGING_KEY_PREFIX = 'clerkstage-';

/**
 * Paths to ignore during comparison — these are expected to differ between
 * production and staging environments.
 */
const IGNORED_PATHS = [
  /\.id$/,
  /^auth_config\.id$/,
  /\.logo_url$/,
  /\.captcha_enabled$/,
  /\.captcha_widget_type$/,
  /\.enforce_hibp_on_sign_in$/,
  /\.disable_hibp$/,
];

function isIgnored(path) {
  return IGNORED_PATHS.some(pattern => pattern.test(path));
}

// ── Key loading ──────────────────────────────────────────────────────────────

function loadKeys(envVar, filePath) {
  let raw;

  if (process.env[envVar]) {
    try {
      raw = JSON.parse(process.env[envVar]);
    } catch (err) {
      return { keys: null, errors: [`Failed to parse ${envVar}: ${err.message}`] };
    }
  } else {
    try {
      raw = JSON.parse(readFileSync(resolve(filePath), 'utf-8'));
    } catch {
      return { keys: null, errors: [] };
    }
  }

  if (!raw || typeof raw !== 'object' || Array.isArray(raw)) {
    return { keys: null, errors: [`Expected a JSON object from ${envVar || filePath}`] };
  }

  const keys = {};
  const errors = [];
  for (const [name, entry] of Object.entries(raw)) {
    if (entry && typeof entry === 'object' && typeof entry.pk === 'string') {
      keys[name] = entry;
    } else {
      errors.push(`"${name}": missing or invalid pk`);
    }
  }

  return { keys: Object.keys(keys).length > 0 ? keys : null, errors };
}

// ── PK parsing ───────────────────────────────────────────────────────────────

function parseFapiDomain(pk) {
  const parts = pk.split('_');
  const encoded = parts.slice(2).join('_');
  const decoded = Buffer.from(encoded, 'base64').toString('utf-8');
  return decoded.replace(/\$$/, '');
}

// ── Environment fetching ─────────────────────────────────────────────────────

async function fetchEnvironment(fapiDomain) {
  const url = `https://${fapiDomain}/v1/environment`;
  const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
  if (!res.ok) {
    throw new Error(`Failed to fetch ${url}: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

// ── Comparison ───────────────────────────────────────────────────────────────

const COMPARED_USER_SETTINGS_FIELDS = ['attributes', 'social', 'sign_in', 'sign_up', 'password_settings'];

/**
 * Recursively compare two values and collect paths where they differ.
 * For arrays of primitives (like strategy lists), stores structured diff info.
 */
function diffObjects(a, b, path = '') {
  const mismatches = [];

  if (a === b) return mismatches;
  if (a == null || b == null || typeof a !== typeof b) {
    mismatches.push({ path, prod: a, staging: b });
    return mismatches;
  }
  if (typeof a !== 'object') {
    mismatches.push({ path, prod: a, staging: b });
    return mismatches;
  }
  if (Array.isArray(a) && Array.isArray(b)) {
    const sortedA = JSON.stringify([...a].sort());
    const sortedB = JSON.stringify([...b].sort());
    if (sortedA !== sortedB) {
      // For arrays of primitives, compute added/removed
      const flatA = a.flat(Infinity);
      const flatB = b.flat(Infinity);
      if (flatA.every(v => typeof v !== 'object') && flatB.every(v => typeof v !== 'object')) {
        const setA = new Set(flatA);
        const setB = new Set(flatB);
        const missingOnStaging = [...new Set(flatA.filter(v => !setB.has(v)))];
        const extraOnStaging = [...new Set(flatB.filter(v => !setA.has(v)))];
        mismatches.push({ path, prod: a, staging: b, missingOnStaging, extraOnStaging });
      } else {
        mismatches.push({ path, prod: a, staging: b });
      }
    }
    return mismatches;
  }

  const allKeys = new Set([...Object.keys(a), ...Object.keys(b)]);
  for (const key of allKeys) {
    const childPath = path ? `${path}.${key}` : key;
    mismatches.push(...diffObjects(a[key], b[key], childPath));
  }
  return mismatches;
}

function compareEnvironments(prodEnv, stagingEnv) {
  const mismatches = [];

  // auth_config
  mismatches.push(...diffObjects(prodEnv.auth_config, stagingEnv.auth_config, 'auth_config'));

  // organization_settings
  const orgFields = ['enabled', 'force_organization_selection'];
  for (const field of orgFields) {
    mismatches.push(
      ...diffObjects(
        prodEnv.organization_settings?.[field],
        stagingEnv.organization_settings?.[field],
        `organization_settings.${field}`,
      ),
    );
  }

  // user_settings — selected fields only
  for (const field of COMPARED_USER_SETTINGS_FIELDS) {
    if (field === 'social') {
      const prodSocial = prodEnv.user_settings?.social ?? {};
      const stagingSocial = stagingEnv.user_settings?.social ?? {};
      const allProviders = new Set([...Object.keys(prodSocial), ...Object.keys(stagingSocial)]);
      for (const provider of allProviders) {
        const prodProvider = prodSocial[provider];
        const stagingProvider = stagingSocial[provider];
        if (!prodProvider?.enabled && !stagingProvider?.enabled) continue;
        mismatches.push(...diffObjects(prodProvider, stagingProvider, `user_settings.social.${provider}`));
      }
    } else {
      mismatches.push(
        ...diffObjects(prodEnv.user_settings?.[field], stagingEnv.user_settings?.[field], `user_settings.${field}`),
      );
    }
  }

  return mismatches;
}

// ── Output formatting ────────────────────────────────────────────────────────

/**
 * Section display names and the path prefixes they cover.
 */
const SECTIONS = [
  { label: 'Auth Config', prefix: 'auth_config.' },
  { label: 'Organization Settings', prefix: 'organization_settings.' },
  { label: 'Attributes', prefix: 'user_settings.attributes.' },
  { label: 'Social Providers', prefix: 'user_settings.social.' },
  { label: 'Sign In', prefix: 'user_settings.sign_in.' },
  { label: 'Sign Up', prefix: 'user_settings.sign_up.' },
  { label: 'Password Settings', prefix: 'user_settings.password_settings.' },
];

const COL_FIELD = 40;
const COL_VAL = 14;

function pad(str, len) {
  return str.length >= len ? str : str + ' '.repeat(len - str.length);
}

function formatScalar(val) {
  if (val === undefined) return 'undefined';
  if (val === null) return 'null';
  if (typeof val === 'object') return JSON.stringify(val);
  return String(val);
}

/**
 * Collapse attribute mismatches: if <attr>.enabled differs, skip the child
 * fields (first_factors, second_factors, verifications, etc.) since the root
 * cause is the enabled flag.
 */
function collapseAttributeMismatches(mismatches) {
  const disabledAttrs = new Set();
  for (const m of mismatches) {
    if (m.path.startsWith('user_settings.attributes.') && m.path.endsWith('.enabled')) {
      disabledAttrs.add(m.path.replace('.enabled', ''));
    }
  }
  return mismatches.filter(m => {
    if (!m.path.startsWith('user_settings.attributes.')) return true;
    // Keep the .enabled entry itself
    if (m.path.endsWith('.enabled')) return true;
    // Drop children of disabled attributes
    const parentAttr = m.path.replace(/\.[^.]+$/, '');
    return !disabledAttrs.has(parentAttr);
  });
}

/**
 * For social providers that are entirely present/missing, collapse to one line.
 */
function collapseSocialMismatches(mismatches) {
  const wholeMissing = new Set();
  for (const m of mismatches) {
    // Top-level provider paths look like `user_settings.social.<provider>`.
    if (m.path.startsWith('user_settings.social.') && m.path.split('.').length === 3) {
      if ((m.prod && !m.staging) || (!m.prod && m.staging)) {
        wholeMissing.add(m.path);
      }
    }
  }
  return mismatches.filter(m => {
    if (!m.path.startsWith('user_settings.social.')) return true;
    // Keep the top-level entry
    const parts = m.path.split('.');
    if (parts.length <= 3) return true;
    // Drop children of wholly missing providers
    const parentPath = parts.slice(0, 3).join('.');
    return !wholeMissing.has(parentPath);
  });
}

function formatMismatch(m, prefix) {
  const field = m.path.slice(prefix.length);

  // Array diff with missing/extra items
  if (m.missingOnStaging || m.extraOnStaging) {
    const parts = [];
    if (m.missingOnStaging?.length) {
      parts.push(`missing on staging: ${m.missingOnStaging.join(', ')}`);
    }
    if (m.extraOnStaging?.length) {
      parts.push(`extra on staging: ${m.extraOnStaging.join(', ')}`);
    }
    return `    ${pad(field, COL_FIELD)} ${parts.join('; ')}`;
  }

  // Social provider entirely present/missing
  if (prefix === 'user_settings.social.' && !field.includes('.')) {
    if (m.prod && !m.staging) {
      return `    ${pad(field, COL_FIELD)} ${pad('present', COL_VAL)} missing`;
    }
    if (!m.prod && m.staging) {
      return `    ${pad(field, COL_FIELD)} ${pad('missing', COL_VAL)} present`;
    }
  }

  const prodVal = formatScalar(m.prod);
  const stagingVal = formatScalar(m.staging);
  return `    ${pad(field, COL_FIELD)} ${pad(prodVal, COL_VAL)} ${stagingVal}`;
}

function printReport(name, mismatches) {
  if (mismatches.length === 0) {
    console.log(`✅ ${name}: matched\n`);
    return;
  }

  console.log(`❌ ${name} (${mismatches.length} mismatch${mismatches.length === 1 ? '' : 'es'})\n`);

  for (const section of SECTIONS) {
    const sectionMismatches = mismatches.filter(m => m.path.startsWith(section.prefix));
    if (sectionMismatches.length === 0) continue;

    console.log(`  ${section.label}`);
    console.log(`    ${pad('', COL_FIELD)} ${pad('prod', COL_VAL)} staging`);

    for (const m of sectionMismatches) {
      console.log(formatMismatch(m, section.prefix));
    }
    console.log();
  }
}

// ── Main ─────────────────────────────────────────────────────────────────────

async function main() {
  const { keys: prodKeys, errors: prodErrors } = loadKeys('INTEGRATION_INSTANCE_KEYS', 'integration/.keys.json');
  for (const err of prodErrors) console.error(`⚠️ Production keys: ${err}`);
  if (!prodKeys) {
    console.error('No production instance keys found. Skipping validation.');
    process.exit(0);
  }

  const { keys: stagingKeys, errors: stagingErrors } = loadKeys(
    'INTEGRATION_STAGING_INSTANCE_KEYS',
    'integration/.keys.staging.json',
  );
  for (const err of stagingErrors) console.error(`⚠️ Staging keys: ${err}`);
  if (!stagingKeys) {
    console.error('No staging instance keys found. Skipping validation.');
    process.exit(0);
  }

  const loadErrorCount = prodErrors.length + stagingErrors.length;

  const pairs = [];
  for (const [name, keys] of Object.entries(prodKeys)) {
    const stagingName = STAGING_KEY_PREFIX + name;
    if (stagingKeys[stagingName]) {
      pairs.push({ name, prod: keys, staging: stagingKeys[stagingName] });
    }
  }

  if (pairs.length === 0) {
    console.log('No production/staging key pairs found. Skipping validation.');
    process.exit(0);
  }

  console.log(`Validating ${pairs.length} staging instance pair(s)...\n`);

  let mismatchCount = 0;
  let fetchFailCount = 0;

  for (const pair of pairs) {
    const prodDomain = parseFapiDomain(pair.prod.pk);
    const stagingDomain = parseFapiDomain(pair.staging.pk);

    let prodEnv, stagingEnv;
    try {
      [prodEnv, stagingEnv] = await Promise.all([fetchEnvironment(prodDomain), fetchEnvironment(stagingDomain)]);
    } catch (err) {
      fetchFailCount++;
      console.log(`⚠️ ${pair.name}: failed to fetch environment`);
      console.log(`   ${err.message}\n`);
      continue;
    }

    let mismatches = compareEnvironments(prodEnv, stagingEnv).filter(m => !isIgnored(m.path));
    mismatches = collapseAttributeMismatches(mismatches);
    mismatches = collapseSocialMismatches(mismatches);

    if (mismatches.length > 0) mismatchCount++;
    printReport(pair.name, mismatches);
  }

  const parts = [];
  if (mismatchCount > 0) parts.push(`${mismatchCount} mismatched`);
  if (fetchFailCount > 0) parts.push(`${fetchFailCount} failed to fetch`);
  if (loadErrorCount > 0) parts.push(`${loadErrorCount} key load errors`);
  const matchedCount = pairs.length - mismatchCount - fetchFailCount;
  if (matchedCount > 0) parts.push(`${matchedCount} matched`);
  console.log(`Summary: ${parts.join(', ')} (${pairs.length} total)`);
}

main().catch(err => {
  console.error('Unexpected error:', err);
  // Fatal, unexpected errors should fail loudly; per-pair issues are
  // reported in the summary without reaching this handler.
  process.exit(1);
});
```
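As a quick reference for the decoding step above, a Clerk publishable key is `pk_<env>_` followed by the base64-encoded frontend API domain with a trailing `$`, which can be round-tripped like this (the key below is fabricated for illustration):

```javascript
// Mirror of the script's parseFapiDomain: strip the `pk_<env>_` prefix,
// base64-decode the remainder, and drop the trailing '$'.
function parseFapiDomain(pk) {
  const encoded = pk.split('_').slice(2).join('_');
  return Buffer.from(encoded, 'base64').toString('utf-8').replace(/\$$/, '');
}

// Fabricated key encoding "clerk.example.com$":
const pk = `pk_test_${Buffer.from('clerk.example.com$').toString('base64')}`;
console.log(parseFapiDomain(pk)); // clerk.example.com
```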
Add automated tests for this validator before merge.
This PR introduces substantial comparison/error-handling logic but includes no test additions or modifications, which leaves key paths (malformed key input, pair matching, diff collapsing, summary accounting) unprotected against regressions.
As per coding guidelines (`**/*`): if there are no tests added or modified as part of the PR, please suggest that tests be added to cover the changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/validate-staging-instances.mjs` around lines 1-383: Add automated tests covering the validator logic. Write unit tests for `loadKeys` (parsing valid/invalid JSON and missing `pk` entries), `parseFapiDomain` (decoding various PKs), `diffObjects` (scalar/array/object mismatches, including the `missingOnStaging`/`extraOnStaging` cases), and `collapseAttributeMismatches`/`collapseSocialMismatches` (child diffs collapsed correctly), plus tests of main/summary behavior verifying pair matching, `fetchEnvironment` fetch-failure handling, and the final summary counts (mismatched, failed to fetch, key load errors, matched). Target the exported functions (`loadKeys`, `parseFapiDomain`, `fetchEnvironment` with a mocked network, `diffObjects`, `collapseAttributeMismatches`, `collapseSocialMismatches`) and the `main` orchestration, asserting expected console output and exit behavior across success and failure scenarios.
Export testable functions from `validate-staging-instances.mjs` behind an `isDirectRun` guard and add 45 vitest tests covering `loadKeys`, `parseFapiDomain`, `diffObjects`, `collapseAttributeMismatches`, `collapseSocialMismatches`, `fetchEnvironment`, and `main` orchestration. Include `scripts/vitest.config.mjs` and wire it into `vitest.workspace.mjs` so the tests run in CI.
Summary
- `scripts/validate-staging-instances.mjs`: compares FAPI `/v1/environment` responses between production and staging Clerk instance pairs
- `validate-instances` job in the `e2e-staging.yml` workflow that runs before integration tests

What it compares
- `auth_config`: session mode, reverification, first/second factors
- `user_settings.attributes`: enabled, required, factor settings for email/phone/username/password/web3/passkey
- `user_settings.social`: OAuth providers (only those enabled in at least one env)
- `user_settings.sign_in`: MFA settings
- `user_settings.sign_up`: mode, legal consent
- `user_settings.password_settings`: length and complexity requirements
- `organization_settings`: enabled, force selection

What it skips

Fields expected to differ between environments (see `IGNORED_PATHS` in the script): object IDs, logo URLs, captcha settings, and HIBP settings.
Example output
Local usage

`node scripts/validate-staging-instances.mjs`
Test plan