Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough

Added a new Jupyter notebook that runs an end-to-end total-energy k-grid convergence workflow (browser/JupyterLite bootstrap, auth, material create-or-get, inject convergence into the workflow, select compute, submit and await the job, parse results to emit a converged k-grid). Updated the workflows index to link to it.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant Notebook as Notebook
    participant API as APIClient
    participant Platform as Platform
    participant JobQueue as Job Queue
    User->>Notebook: Configure parameters & run
    Notebook->>Notebook: Prepare workflow, inject convergence
    Notebook->>API: Browser authenticate() / init client
    API->>Platform: Fetch account & project
    Platform-->>API: Account & project data
    Notebook->>API: get_or_create_material(name)
    API->>Platform: Create or fetch material
    Platform-->>API: Material data
    Notebook->>API: select_cluster() / create_compute_config()
    Notebook->>API: create_job(compute, material, workflow)
    API->>Platform: Submit job (workflow with convergence)
    Platform->>JobQueue: Enqueue & run job
    JobQueue-->>Platform: Job completes
    Platform-->>API: Job finished (includes scopeTrack)
    API-->>Notebook: Fetch finished job
    Notebook->>Notebook: Reconstruct series -> compute converged_kgrid
    Notebook->>User: Display converged_kgrid
```
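The last two Notebook steps in the diagram (reconstruct the series, compute `converged_kgrid`) can be sketched as a small helper. The `series` shape and the parameter names (`N_k`, `N_k_nonuniform`) follow the notebook cells quoted in this review; the function name itself is hypothetical:

```python
# Hypothetical helper: reduce a convergence series to a converged k-grid.
# The series shape ([{"param": ...}, ...]) and parameter names mirror the
# notebook cells quoted in the review; the function name is illustrative.
def converged_kgrid_from_series(series, parameter_name):
    if not series:
        raise RuntimeError("Convergence series is empty")
    # The last entry in the series holds the converged parameter value
    parameter_value = series[-1]["param"]
    if parameter_name == "N_k":
        # Uniform grid: one scalar expanded to all three directions
        return [parameter_value] * 3
    elif parameter_name == "N_k_nonuniform":
        # Non-uniform grid: the parameter is already a per-direction sequence
        return list(parameter_value)
    raise ValueError(f"Unexpected convergence parameter: {parameter_name}")

print(converged_kgrid_from_series([{"param": 4}, {"param": 6}, {"param": 8}], "N_k"))  # → [8, 8, 8]
```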
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
⚠️ Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Ruff (0.15.7): other/materials_designer/workflows/total_energy_convergence.ipynb: Unexpected end of JSON input
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@other/materials_designer/workflows/total_energy_convergence.ipynb`:
- Around line 342-349: The code assumes client.clusters.list() returns a
non-empty list and that the cluster selection logic yields a valid cluster
before constructing Compute(cluster=cluster,...), but it can be empty or yield
cluster=None when CLUSTER_NAME isn't found; validate the result of
client.clusters.list() and the computed cluster: if the list is empty raise/exit
with a clear error, and if CLUSTER_NAME was provided but no match was found
either fall back to a sensible default (e.g., the first cluster) or raise/exit
with a descriptive message; ensure the checks occur before calling Compute(...)
so Compute is never constructed with cluster=None (refer to
client.clusters.list(), CLUSTER_NAME, the cluster variable, and Compute(...)).
- Around line 202-204: The code assumes projects[0] exists after calling
client.projects.list and will raise IndexError when no default project exists
for ACCOUNT_ID; update the block that calls client.projects.list and sets
PROJECT_ID (and prints projects[0]['name']) to first check that the returned
projects list is non-empty, and if empty raise or exit with a clear error
message mentioning ACCOUNT_ID and that no default project was found; keep
references to the same variables/functions (client.projects.list, projects,
PROJECT_ID) and only proceed to set PROJECT_ID and print projects[0]['name']
when the list contains at least one element.
- Around line 305-306: Guard the access to workflow.subworkflows before indexing
it: check that the list exists and has at least one element (e.g., if not
workflow.subworkflows) and handle the empty case by creating a new subworkflow
or raising a clear error, then proceed to set convergence_subworkflow =
workflow.subworkflows[0] and call convergence_subworkflow.add_convergence;
update the cell so it does not assume workflow.subworkflows[0] is always present
and provides a clear fallback or error message.
- Around line 441-449: The notebook currently fetches finished_job =
client.jobs.get(job_id) and immediately reads scope_track even if
finished_job.get('status') is "error" or otherwise not finished; update the
logic to explicitly check finished_job.get('status') (e.g., compare to
"finished"/"succeeded") and short-circuit (print/log status and stop/return from
the cell or raise) when the status is not the successful finished state, only
calling scope_track = finished_job.get("scopeTrack") and
display_JSON(scope_track) when the job is confirmed finished; reference the
finished_job, job_id, client.jobs.get, scope_track, and display_JSON identifiers
when making the change.
- Around line 371-384: create_job(...) actually returns a List[dict], so calling
dict_to_namespace(job_response) and job._id will fail; update the handling of
job_response to first validate it's a non-empty list, e.g. if job_response is a
list take the first element (job_dict = job_response[0]) before calling
dict_to_namespace, raise/log an error if the list is empty, then set job =
dict_to_namespace(job_dict) and job_id = job._id; make these changes around the
create_job call and references to job_response/job_id (symbols: create_job,
job_response, dict_to_namespace, job, job_id) so the notebook handles both list
responses and empty results safely.
- Around line 418-420: The notebook call incorrectly treats
wait_for_jobs_to_finish_async as an async function and supplies an unsupported
keyword; update the invocation that currently reads await
wait_for_jobs_to_finish_async(client.jobs, [job_id],
poll_interval=POLL_INTERVAL) to call the synchronous function without awaiting
and without the poll_interval argument (i.e., call
wait_for_jobs_to_finish_async(client.jobs, [job_id]) to match the utils/api.py
function signature that accepts (endpoint, job_ids) and returns a boolean).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bf2e0371-dc10-4920-8bd2-d1ecce31b249
📒 Files selected for processing (1)
other/materials_designer/workflows/total_energy_convergence.ipynb
```python
projects = client.projects.list({"isDefault": True, "owner._id": ACCOUNT_ID})
PROJECT_ID = projects[0]["_id"]
print(f"Using project: {projects[0]['name']} ({PROJECT_ID})")
```
Handle empty default-project list before indexing.
projects[0] will raise IndexError when the selected account has no default project. Fail early with a clear message.
Suggested fix

```diff
 projects = client.projects.list({"isDefault": True, "owner._id": ACCOUNT_ID})
+if not projects:
+    raise RuntimeError(f"No default project found for account {ACCOUNT_ID}")
 PROJECT_ID = projects[0]["_id"]
 print(f"Using project: {projects[0]['name']} ({PROJECT_ID})")
```
```python
convergence_subworkflow = workflow.subworkflows[0]
convergence_subworkflow.add_convergence(
```
Guard subworkflow access before workflow.subworkflows[0].
This assumes the template always has at least one subworkflow; if not, this cell crashes with IndexError.
Suggested fix

```diff
 assert KGRID_CONVERGENCE_TYPE in {"uniform", "non-uniform"}
+if not workflow.subworkflows:
+    raise RuntimeError("Loaded workflow has no subworkflows to apply convergence.")
 convergence_subworkflow = workflow.subworkflows[0]
```
```python
clusters = client.clusters.list()
if CLUSTER_NAME:
    cluster = next((c for c in clusters if CLUSTER_NAME in c["hostname"]), None)
else:
    cluster = clusters[0]

compute = Compute(cluster=cluster, queue=QUEUE_NAME, ppn=PPN)
print(f"Using cluster: {compute.cluster.hostname}, queue: {QUEUE_NAME}, ppn: {PPN}")
```
Validate cluster selection result before constructing Compute.
Two failure modes are unhandled: empty cluster list, and CLUSTER_NAME not found (yielding cluster=None).
Suggested fix

```diff
 clusters = client.clusters.list()
+if not clusters:
+    raise RuntimeError("No clusters available for the selected account.")
 if CLUSTER_NAME:
     cluster = next((c for c in clusters if CLUSTER_NAME in c["hostname"]), None)
+    if cluster is None:
+        raise RuntimeError(f"Cluster '{CLUSTER_NAME}' was not found.")
 else:
     cluster = clusters[0]
```
```python
job_name = f"{workflow.name} {saved_material.formula}"
job_response = create_job(
    api_client=client,
    materials=[saved_material],
    workflow=workflow,
    project_id=PROJECT_ID,
    owner_id=ACCOUNT_ID,
    prefix=job_name,
    compute=compute.to_dict(),
)

job = dict_to_namespace(job_response)
job_id = job._id
print(f"Job created: {job_id}")
```
create_job(...) response is handled as a single object, but API returns a list.
dict_to_namespace(job_response) followed by job._id will fail if job_response is List[dict] as defined in utils/api.py.
Suggested fix

```diff
 job_response = create_job(
     api_client=client,
     materials=[saved_material],
     workflow=workflow,
     project_id=PROJECT_ID,
     owner_id=ACCOUNT_ID,
     prefix=job_name,
     compute=compute.to_dict(),
 )
-job = dict_to_namespace(job_response)
+if not job_response:
+    raise RuntimeError("Job creation returned an empty response.")
+job = dict_to_namespace(job_response[0])
 job_id = job._id
 print(f"Job created: {job_id}")
 display_JSON(job_response)
```
```python
from utils.api import wait_for_jobs_to_finish_async

await wait_for_jobs_to_finish_async(client.jobs, [job_id], poll_interval=POLL_INTERVAL)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify function signature and coroutine-ness
rg -n -C3 'def\s+wait_for_jobs_to_finish_async\s*\(' utils/api.py
rg -n -C2 '^\s*async\s+def\s+wait_for_jobs_to_finish_async\s*\(' utils/api.py
# Verify current notebook call shape
rg -n -C2 'wait_for_jobs_to_finish_async\s*\(' other/materials_designer/workflows/total_energy_convergence.ipynb
rg -n -C2 'await\s+wait_for_jobs_to_finish_async' other/materials_designer/workflows/total_energy_convergence.ipynb
```

Repository: Exabyte-io/api-examples
Length of output: 765

🏁 Script executed:

```shell
#!/bin/bash
# Find decorator definition
rg -n -B5 -A15 'def interruptible_polling_loop' utils/api.py
# Check if there are multiple versions of wait_for_jobs_to_finish_async
rg -n 'wait_for_jobs_to_finish_async' utils/api.py
```

Repository: Exabyte-io/api-examples
Length of output: 156

🏁 Script executed:

```shell
#!/bin/bash
# Read the utils/api.py file around the function definition to see the decorator
head -100 utils/api.py | tail -50
# Also search for the decorator definition
rg -n 'interruptible_polling_loop' utils/api.py
```

Repository: Exabyte-io/api-examples
Length of output: 2038

🏁 Script executed:

```shell
#!/bin/bash
# Search for any async version in the file
rg -n 'async.*wait_for_jobs_to_finish' utils/api.py
# Check the file size to read it completely if manageable
wc -l utils/api.py
# Look at imports to understand what interruptible_polling_loop might do
head -30 utils/api.py
```

Repository: Exabyte-io/api-examples
Length of output: 1232

🏁 Script executed:

```shell
#!/bin/bash
# Get more context from the notebook around the problematic line
rg -n -B10 -A10 'await wait_for_jobs_to_finish_async' other/materials_designer/workflows/total_energy_convergence.ipynb
# Check if there's any documentation or hints about poll_interval
rg -n 'poll_interval\|POLL_INTERVAL' other/materials_designer/workflows/total_energy_convergence.ipynb
```

Repository: Exabyte-io/api-examples
Length of output: 660
The await + poll_interval usage is incompatible with the wait_for_jobs_to_finish_async definition.
The function in utils/api.py (line 66) is synchronous (no async def), takes only (endpoint, job_ids), and returns a boolean. The notebook call will fail with two errors: attempting to await a non-coroutine, and passing an unexpected keyword argument poll_interval that the function does not accept.

```python
await wait_for_jobs_to_finish_async(client.jobs, [job_id], poll_interval=POLL_INTERVAL)
```

Remove await and the poll_interval argument.
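The synchronous (endpoint, job_ids) -> bool shape described above can be illustrated with a generic polling sketch. The endpoint interface, status strings, and timeout handling here are assumptions for illustration, not the actual utils/api.py implementation:

```python
import time

# Illustrative sketch of a synchronous (endpoint, job_ids) -> bool waiter.
# The endpoint.get(...) interface and status values are assumptions,
# not the real utils/api.py or platform API.
def wait_for_jobs_to_finish(endpoint, job_ids, timeout_s=60.0, interval_s=1.0):
    terminal = {"finished", "error", "terminated"}
    deadline = time.monotonic() + timeout_s
    while True:
        # Poll every job's current status
        statuses = [endpoint.get(job_id).get("status") for job_id in job_ids]
        if all(s in terminal for s in statuses):
            # True only when every job ended in the successful state
            return all(s == "finished" for s in statuses)
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)
```

Because the function is synchronous, it is called without await, matching the reviewed signature.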
```python
finished_job = client.jobs.get(job_id)
scope_track = finished_job.get("scopeTrack") or []
print(f"Job status: {finished_job.get('status')}")
print(f"scopeTrack entries: {len(scope_track)}")

if scope_track:
    display_JSON(scope_track)
else:
    print("scopeTrack is empty. If this is a finished convergence job, we likely need a dedicated property.")
```
Stop on non-finished job status before consuming scopeTrack.
If the job ends in error, the notebook still proceeds as if convergence data were valid.
Suggested fix

```diff
 finished_job = client.jobs.get(job_id)
+status = finished_job.get("status")
+if status != "finished":
+    raise RuntimeError(f"Job {job_id} did not finish successfully (status={status}).")
+
 scope_track = finished_job.get("scopeTrack") or []
-print(f"Job status: {finished_job.get('status')}")
+print(f"Job status: {status}")
 print(f"scopeTrack entries: {len(scope_track)}")
```
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@other/materials_designer/workflows/total_energy_convergence.ipynb`:
- Around line 310-317: The branch for KGRID_CONVERGENCE_TYPE (symbols:
KGRID_CONVERGENCE_TYPE, CONVERGENCE_PARAMETER, PARAMETER_INITIAL,
PARAMETER_INCREMENT) lacks an else path so those variables can remain undefined;
add an explicit else that validates the value and either sets sensible defaults
or raises a clear error (e.g., ValueError) mentioning the invalid
KGRID_CONVERGENCE_TYPE and listing allowed values ("uniform", "non-uniform") so
downstream code cannot encounter a NameError.
- Around line 126-135: The cell hardcodes sensitive environment variables
(notably os.environ["OIDC_ACCESS_TOKEN"]) for debugging and must be removed;
delete the entire debug cell that sets API_PORT, API_SECURE, API_HOST, and
OIDC_ACCESS_TOKEN from the notebook (total_energy_convergence.ipynb) and instead
rely on secure runtime configuration (e.g., environment variables set outside
the repo or a secrets manager) so no secret tokens are committed.
- Around line 484-488: Guard against an empty series and ensure converged_kgrid
is always defined: first check if series is non-empty before accessing
series[-1] (e.g., raise/return with a clear message or handle as non-converged)
so accessing parameter_value = series[-1]["param"] cannot IndexError; then
change the two separate if statements for parameter_name to an if/elif and add a
final else that sets converged_kgrid to a sensible default or raises a clear
error if parameter_name is unsupported. Ensure you reference the same symbols
(series, parameter_value, parameter_name, converged_kgrid) when making these
checks and defaults.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f327a429-9d49-42f1-8b02-04e5fa412b0d
📒 Files selected for processing (1)
other/materials_designer/workflows/total_energy_convergence.ipynb
```python
if KGRID_CONVERGENCE_TYPE == "uniform":
    CONVERGENCE_PARAMETER = "N_k"
    PARAMETER_INITIAL = UNIFORM_KGRID_INITIAL
    PARAMETER_INCREMENT = UNIFORM_KGRID_INCREMENT
elif KGRID_CONVERGENCE_TYPE == "non-uniform":
    CONVERGENCE_PARAMETER = "N_k_nonuniform"
    PARAMETER_INITIAL = NON_UNIFORM_KGRID_INITIAL
    PARAMETER_INCREMENT = NON_UNIFORM_KGRID_INCREMENT
```
Add validation for KGRID_CONVERGENCE_TYPE.
The if/elif chain has no else clause. If KGRID_CONVERGENCE_TYPE is neither "uniform" nor "non-uniform", CONVERGENCE_PARAMETER, PARAMETER_INITIAL, and PARAMETER_INCREMENT remain undefined, causing a NameError downstream.
Suggested fix

```diff
 if KGRID_CONVERGENCE_TYPE == "uniform":
     CONVERGENCE_PARAMETER = "N_k"
     PARAMETER_INITIAL = UNIFORM_KGRID_INITIAL
     PARAMETER_INCREMENT = UNIFORM_KGRID_INCREMENT
 elif KGRID_CONVERGENCE_TYPE == "non-uniform":
     CONVERGENCE_PARAMETER = "N_k_nonuniform"
     PARAMETER_INITIAL = NON_UNIFORM_KGRID_INITIAL
     PARAMETER_INCREMENT = NON_UNIFORM_KGRID_INCREMENT
+else:
+    raise ValueError(f"KGRID_CONVERGENCE_TYPE must be 'uniform' or 'non-uniform', got '{KGRID_CONVERGENCE_TYPE}'")
```
```python
parameter_value = series[-1]["param"]
if parameter_name == "N_k":
    converged_kgrid = [parameter_value, parameter_value, parameter_value]
if parameter_name == "N_k_nonuniform":
    converged_kgrid = list(parameter_value)
```
Guard against empty series and undefined converged_kgrid.
Two issues:
- `series[-1]` raises `IndexError` if convergence produced no entries.
- The two `if` statements (not `if`/`elif`) leave `converged_kgrid` undefined if `parameter_name` matches neither case.
Suggested fix

```diff
+if not series:
+    raise RuntimeError("Convergence series is empty — job may have failed or not run.")
+
 parameter_name = job_workflow.subworkflows[subworkflow_index].convergence_param
 parameter_value = series[-1]["param"]
 if parameter_name == "N_k":
     converged_kgrid = [parameter_value, parameter_value, parameter_value]
-if parameter_name == "N_k_nonuniform":
+elif parameter_name == "N_k_nonuniform":
     converged_kgrid = list(parameter_value)
+else:
+    raise ValueError(f"Unexpected convergence parameter: {parameter_name}")
```
@@ -0,0 +1,498 @@ (new file)
The logic related to extracting the convergence plot should be in a helper function, to be moved to JoDe when implemented.
Plot should be added for Convergence Series