Merge the feature to download dose response curve analysis results into development #180
tonywu1999 merged 3 commits into devel from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough
Adds MSstatsResponse dose‑response code generation and tests; introduces experiment‑design TPR simulation and plotting (run_tpr_simulation, plot_tpr_power_curve); makes Expdes UI dynamic and namespaced; wires statmodel contrast/preprocess_data through server/module layers; updates NAMESPACE imports and adds related docs and download helpers.
Changes
Sequence Diagram(s)
sequenceDiagram
participant User
participant UI as Expdes_UI
participant Server as expdesServer
participant Sim as run_tpr_simulation
participant MSR as MSstatsResponse
participant Plot as plot_tpr_power_curve
participant DL as zip_and_copy_plot
User->>UI: Click "Run simulation"
UI->>Server: trigger run (mode, rep_range, params)
Server->>Sim: run_tpr_simulation(rep_range, n_proteins)
Sim->>MSR: futureExperimentSimulation(...)
MSR-->>Sim: simulation results
Sim-->>Server: aggregated TPR results
Server->>Plot: plot_tpr_power_curve(simulation_results)
Plot-->>UI: render Plotly output
User->>UI: Click download
UI->>Server: request zip
Server->>DL: zip_and_copy_plot(pdf_files, dest_file)
DL-->>Server: return .zip
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related issues
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks — ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
📝 Coding Plan
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Tip: You can enable review details to help with troubleshooting, context usage and more.
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/testthat/test-utils-statmodel-server.R (1)
462-538: Consider adding edge case tests for robustness. The test suite covers the main scenarios well. For additional robustness, consider adding tests for:
- Condition names containing special characters (quotes, backslashes)
- NULL or missing input parameters (e.g., modeling_response_curve_increasing_trend being NULL)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/testthat/test-utils-statmodel-server.R` around lines 462 - 538, Add two new unit tests for generate_analysis_code: one that supplies a comp_mat with GROUP names containing special characters (quotes and backslashes) and asserts the resulting code contains the exact serialized names (ensuring proper escaping, e.g., presence of the quoted strings in result); and another that sets NAMESPACE_STATMODEL$modeling_response_curve_increasing_trend (and/or modeling_response_curve_log_xaxis/visualization_response_curve_ratio_scale) to NULL in mock_input and asserts generate_analysis_code still emits valid parameter values (the expected defaults, e.g., increasing = FALSE or transform_dose = <default>) rather than crashing or inserting "NULL". Reference generate_analysis_code, the comp_mat construction and the NAMESPACE_STATMODEL$... keys when adding these tests.
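A concrete sketch of the first suggested test (hedged: `generate_analysis_code()` and a `mock_input()` helper are assumed from the surrounding test file and are not verified here; the assertion checks only that the emitted code still parses when a GROUP name contains a double quote or backslash):

```r
# Hedged sketch, not the project's actual test. generate_analysis_code()
# and mock_input() are assumed from the surrounding test file.
test_that("generate_analysis_code survives special characters in GROUP", {
  comp_mat <- data.frame(GROUP = c("DMSO \"control\"", "drug\\high"))
  result <- generate_analysis_code(mock_input(), comp_mat)
  # With proper escaping, the generated script remains syntactically valid R.
  expect_silent(parse(text = result))
})
```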
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@R/statmodel-server-download-code.R`:
- Around line 15-16: The generated R string for condition_names directly
interpolates comp_mat$GROUP which can contain double-quote characters and break
the produced code; before building codes, escape any quotes in comp_mat$GROUP
(e.g., replace " with \" or use R's encodeString) and then use the escaped
vector in the paste that builds condition_names so the resulting R code remains
syntactically valid; update the line that constructs codes (the paste(...) that
references comp_mat$GROUP and condition_names) to use the escaped names variable
instead of comp_mat$GROUP directly.
---
Nitpick comments:
In `@tests/testthat/test-utils-statmodel-server.R`:
- Around line 462-538: Add two new unit tests for generate_analysis_code: one
that supplies a comp_mat with GROUP names containing special characters (quotes
and backslashes) and asserts the resulting code contains the exact serialized
names (ensuring proper escaping, e.g., presence of the quoted strings in
result); and another that sets
NAMESPACE_STATMODEL$modeling_response_curve_increasing_trend (and/or
modeling_response_curve_log_xaxis/visualization_response_curve_ratio_scale) to
NULL in mock_input and asserts generate_analysis_code still emits valid
parameter values (the expected defaults, e.g., increasing = FALSE or
transform_dose = <default>) rather than crashing or inserting "NULL". Reference
generate_analysis_code, the comp_mat construction and the
NAMESPACE_STATMODEL$... keys when adding these tests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 072cfcab-9556-4387-9be8-f3f307320bdc
📒 Files selected for processing (2)
R/statmodel-server-download-code.R
tests/testthat/test-utils-statmodel-server.R
R/statmodel-server-download-code.R
Outdated
codes = paste(codes, "condition_names = c(\"",
    paste(comp_mat$GROUP, collapse = "\",\""), "\")\n", sep = "")
Consider escaping special characters in condition names.
If a condition name contains a double-quote character ("), the generated R code would be syntactically invalid. While this is unlikely in typical experimental conditions, defensive escaping would prevent edge-case failures.
🛡️ Proposed fix to escape quotes in condition names
- codes = paste(codes, "condition_names = c(\"",
-   paste(comp_mat$GROUP, collapse = "\",\""), "\")\n", sep = "")
+ escaped_groups = gsub("\"", "\\\\\"", comp_mat$GROUP)
+ codes = paste(codes, "condition_names = c(\"",
+   paste(escaped_groups, collapse = "\",\""), "\")\n", sep = "")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- codes = paste(codes, "condition_names = c(\"",
-   paste(comp_mat$GROUP, collapse = "\",\""), "\")\n", sep = "")
+ escaped_groups = gsub("\"", "\\\\\"", comp_mat$GROUP)
+ codes = paste(codes, "condition_names = c(\"",
+   paste(escaped_groups, collapse = "\",\""), "\")\n", sep = "")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@R/statmodel-server-download-code.R` around lines 15 - 16, The generated R
string for condition_names directly interpolates comp_mat$GROUP which can
contain double-quote characters and break the produced code; before building
codes, escape any quotes in comp_mat$GROUP (e.g., replace " with \" or use R's
encodeString) and then use the escaped vector in the paste that builds
condition_names so the resulting R code remains syntactically valid; update the
line that constructs codes (the paste(...) that references comp_mat$GROUP and
condition_names) to use the escaped names variable instead of comp_mat$GROUP
directly.
@tonywu1999 Condition names are parsed from experimental design annotations and don't contain special characters. No other code generation in the codebase applies quote escaping either. So applying it here only would be inconsistent. What do you suggest?
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
man/expdesServer.Rd (1)
20-38: ⚠️ Potential issue | 🟡 Minor — Document the new statmodel_contrast parameter in \arguments.
expdesServer(...) now exposes statmodel_contrast in usage, but its argument documentation is missing.
📝 Proposed Rd fix
 \item{preprocess_data}{function returning preprocessed data}
+
+\item{statmodel_contrast}{reactiveValues object carrying Statmodel contrast data (e.g., contrast matrix) for expdes workflows}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@man/expdesServer.Rd` around lines 20 - 38, The Rd file for expdesServer is missing documentation for the new statmodel_contrast parameter; update the \arguments section of expdesServer.Rd to add an \item{statmodel_contrast}{...} entry describing the parameter (its type, role, expected values or structure, and default behavior) so users understand how to pass contrasts to expdesServer; reference the symbol name statmodel_contrast and the function expdesServer in the description to match the usage in the code and keep the style consistent with the other items (e.g., mention it is passed to the statmodel workflow or used by the data_comparison/statmodel_input handlers).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@R/module-expdes-server.R`:
- Around line 107-113: Create a single composition helper (e.g., compose_panels
or build_combined_plot) that builds the combined figure by calling make_panel
for results_strong and results_weak, wrapping them with plotly::subplot(...) and
plotly::layout(...), and return that combined plot; replace the current separate
p_strong/p_weak usage so both the UI render and the PDF export call this helper
(instead of printing p_strong and p_weak separately) so the PDF device receives
the same combined subplot as shown in the app (apply same change to the other
occurrence around lines 310-320).
- Around line 76-77: The linetype mapping currently assumes at most five
replicate levels (ltypes and ltype_values) and breaks when length(rep_levels) >
5; update the logic to only map up to min(length(rep_levels), length(ltypes))
(e.g., use n_ltype <- min(length(rep_levels), length(ltypes)) and ltype_values
<- setNames(ltypes[seq_len(n_ltype)],
as.character(rep_levels[seq_len(n_ltype)]))), and for any extra rep_levels
beyond that assign a fallback aesthetic (e.g., default "solid" linetype plus a
distinct colour or size) or else cap the UI slider at 5; apply the same fix to
the other occurrences referenced (around the blocks at lines ~186-188 and
~293-294) so no replicate level is left without a mapped aesthetic.
- Around line 153-160: The response-mode selector is not wired into the
simulation: locate the reactive prepared_response_data() and the run handler
that calls run_tpr_simulation(), then pass the user's selected protein/dataset
into the simulation by (1) using the input value (e.g., input$response_mode or
the control that populates choices) to subset prepared_response_data() to a
single protein or dataset before calling prepare_dose_response_fit, (2) update
the run handler to supply that subset (and any chosen replicate range) as an
argument to run_tpr_simulation(), and (3) ensure the UI choices still come from
prepared_response_data(); if you prefer not to support this path yet,
remove/disable the response-mode selector and any code branches that assume it
exists (also apply the same wiring/removal changes at the other spots where
choices and runs are prepared and invoked).
---
Outside diff comments:
In `@man/expdesServer.Rd`:
- Around line 20-38: The Rd file for expdesServer is missing documentation for
the new statmodel_contrast parameter; update the \arguments section of
expdesServer.Rd to add an \item{statmodel_contrast}{...} entry describing the
parameter (its type, role, expected values or structure, and default behavior)
so users understand how to pass contrasts to expdesServer; reference the symbol
name statmodel_contrast and the function expdesServer in the description to
match the usage in the code and keep the style consistent with the other items
(e.g., mention it is passed to the statmodel workflow or used by the
data_comparison/statmodel_input handlers).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 034d4569-614d-4b50-84d5-a78ca424abb2
📒 Files selected for processing (12)
NAMESPACE
R/MSstatsShiny.R
R/constants.R
R/module-expdes-server.R
R/module-expdes-ui.R
R/module-statmodel-server.R
R/server.R
man/expdesServer.Rd
man/get_download_plot_filename.Rd
man/plot_tpr_power_curve.Rd
man/run_tpr_simulation.Rd
man/zip_and_copy_plot.Rd
✅ Files skipped from review due to trivial changes (2)
- man/plot_tpr_power_curve.Rd
- man/run_tpr_simulation.Rd
R/module-expdes-server.R
Outdated
ltypes <- c("dotted", "dotdash", "dashed", "longdash", "solid")
ltype_values <- setNames(ltypes[seq_along(rep_levels)], as.character(rep_levels))
Ranges above five replicates outgrow the line-style mapping.
The UI allows 1:10, but both plot builders only define five linetypes. Once length(rep_levels) > 5, ltypes[seq_along(rep_levels)] starts producing missing mappings, so valid selections can break or partially drop traces in both the interactive plot and the PDF export. Either cap the slider at five or switch the extra levels to another aesthetic instead of relying on linetype alone.
Also applies to: 186-188, 293-294
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@R/module-expdes-server.R` around lines 76 - 77, The linetype mapping
currently assumes at most five replicate levels (ltypes and ltype_values) and
breaks when length(rep_levels) > 5; update the logic to only map up to
min(length(rep_levels), length(ltypes)) (e.g., use n_ltype <-
min(length(rep_levels), length(ltypes)) and ltype_values <-
setNames(ltypes[seq_len(n_ltype)], as.character(rep_levels[seq_len(n_ltype)]))),
and for any extra rep_levels beyond that assign a fallback aesthetic (e.g.,
default "solid" linetype plus a distinct colour or size) or else cap the UI
slider at 5; apply the same fix to the other occurrences referenced (around the
blocks at lines ~186-188 and ~293-294) so no replicate level is left without a
mapped aesthetic.
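One way to realize the capped mapping the comment describes (a sketch; `rep_levels` comes from the surrounding plot builder, and the "solid" fallback for levels beyond the fifth is an assumption, not a project convention):

```r
# Hedged sketch of the capped linetype mapping.
ltypes <- c("dotted", "dotdash", "dashed", "longdash", "solid")
n_ltype <- min(length(rep_levels), length(ltypes))
ltype_values <- setNames(ltypes[seq_len(n_ltype)],
                         as.character(rep_levels[seq_len(n_ltype)]))
# Any replicate level beyond the fifth falls back to a solid line so no
# trace is silently dropped from the plot or the PDF export.
extra_levels <- setdiff(as.character(rep_levels), names(ltype_values))
ltype_values[extra_levels] <- "solid"
```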
R/module-expdes-server.R
Outdated
p_strong <- plotly::ggplotly(make_panel(results_strong, "#1b9e77", FALSE))
p_weak <- plotly::ggplotly(make_panel(results_weak, "#d95f02", TRUE))

plotly::subplot(p_strong, p_weak,
  nrows = 1, shareY = TRUE, titleX = TRUE, titleY = TRUE,
  margin = 0.05
) |> plotly::layout(
The PDF export does not reproduce the plot shown in the app.
The UI renders one side-by-side subplot, but the download path prints p_strong and p_weak separately to the PDF device. That yields two standalone pages instead of the single combined figure behind the “Download plot” button. Reusing one shared composition helper for both render and download would keep the exported artifact aligned with the UI.
Also applies to: 310-320
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@R/module-expdes-server.R` around lines 107 - 113, Create a single composition
helper (e.g., compose_panels or build_combined_plot) that builds the combined
figure by calling make_panel for results_strong and results_weak, wrapping them
with plotly::subplot(...) and plotly::layout(...), and return that combined
plot; replace the current separate p_strong/p_weak usage so both the UI render
and the PDF export call this helper (instead of printing p_strong and p_weak
separately) so the PDF device receives the same combined subplot as shown in the
app (apply same change to the other occurrence around lines 310-320).
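A minimal sketch of the shared composition helper the comment proposes (the helper name is hypothetical; `make_panel`, `results_strong`, and `results_weak` are assumed from `R/module-expdes-server.R`). Note that a plotly subplot cannot be printed to a `pdf()` device, so the export path would still need a ggplot-level combination (e.g. patchwork) built from the same panels:

```r
# Hypothetical helper reused by both the renderPlotly output and the
# download handler, so the UI and the exported artifact stay in sync.
build_combined_plot <- function(results_strong, results_weak) {
  p_strong <- plotly::ggplotly(make_panel(results_strong, "#1b9e77", FALSE))
  p_weak <- plotly::ggplotly(make_panel(results_weak, "#d95f02", TRUE))
  plotly::subplot(p_strong, p_weak,
                  nrows = 1, shareY = TRUE, titleX = TRUE, titleY = TRUE,
                  margin = 0.05)
}
```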
R/module-expdes-server.R
Outdated
prepared_response_data <- reactive({
  req(is_response_curve())
  req(preprocess_data)
  req(statmodel_contrast)
  req(statmodel_contrast$matrix)
  matrix <- statmodel_contrast$matrix
  protein_level_data <- merge(preprocess_data()$ProteinLevelData, matrix, by = "GROUP")
  prepare_dose_response_fit(protein_level_data)
The response-mode selector is currently a no-op.
prepared_response_data() is only used to populate choices, and the run handler never passes either the selected protein or any prepared response data into run_tpr_simulation(). For a given replicate range, every run produces the same generic curve regardless of which protein the user picks or which dataset is loaded. Either wire that selection into the simulation or remove the control until that path exists.
Also applies to: 181-185, 212-223
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@R/module-expdes-server.R` around lines 153 - 160, The response-mode selector
is not wired into the simulation: locate the reactive prepared_response_data()
and the run handler that calls run_tpr_simulation(), then pass the user's
selected protein/dataset into the simulation by (1) using the input value (e.g.,
input$response_mode or the control that populates choices) to subset
prepared_response_data() to a single protein or dataset before calling
prepare_dose_response_fit, (2) update the run handler to supply that subset (and
any chosen replicate range) as an argument to run_tpr_simulation(), and (3)
ensure the UI choices still come from prepared_response_data(); if you prefer
not to support this path yet, remove/disable the response-mode selector and any
code branches that assume it exists (also apply the same wiring/removal changes
at the other spots where choices and runs are prepared and invoked).
Actionable comments posted: 1
🧹 Nitpick comments (3)
R/module-expdes-server.R (3)
296-308: Duplicated make_panel logic violates DRY. The make_panel function defined here (lines 296-308) duplicates the implementation from plot_tpr_power_curve (lines 79-101). Extract a shared helper function to avoid divergence and reduce maintenance burden.
♻️ Proposed refactor to extract shared helper
Define a shared helper at module scope:
# At top of file, after plot_tpr_power_curve
make_tpr_panel <- function(data, title = NULL, color, k_grid, ltype_values, show_legend = FALSE) {
  p <- ggplot2::ggplot(data, ggplot2::aes(x = NumConcs, y = TPR, linetype = factor(N_rep))) +
    ggplot2::geom_line(linewidth = 1.2, color = color) +
    ggplot2::geom_point(size = 2, color = color) +
    ggplot2::scale_x_continuous(breaks = k_grid) +
    ggplot2::scale_y_continuous(limits = c(0, 100), breaks = seq(0, 100, 20)) +
    ggplot2::scale_linetype_manual(values = ltype_values) +
    ggplot2::labs(title = title, x = "Number of concentrations",
                  y = "True Positive Rate (%)", linetype = "Replicates") +
    ggplot2::theme_bw(base_size = 14) +
    ggplot2::theme(plot.title = ggplot2::element_text(face = "bold", hjust = 0.5))
  if (!show_legend) p <- p + ggplot2::theme(legend.position = "none")
  p
}
Then use it in both plot_tpr_power_curve and the download handler.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 296 - 308, The duplicated plotting logic in make_panel (used in the download handler) repeats code from plot_tpr_power_curve; extract a shared helper (e.g., make_tpr_panel) at module scope and have both plot_tpr_power_curve and the download handler call it. The helper should accept parameters used in both places (data, title, color, k_grid, ltype_values, and a show_legend flag) and apply the same ggplot layers and theme, hiding the legend when show_legend is FALSE; replace the existing make_panel implementation with a call to this new make_tpr_panel to avoid duplication.
28-31: Consider adding input validation for rep_range. The function assumes rep_range is a length-2 integer vector with rep_range[1] <= rep_range[2], but doesn't validate this. Invalid input would produce silent failures or unexpected sequences.
🛡️ Proposed defensive check
 run_tpr_simulation <- function(rep_range, n_proteins = 1000) {
+  stopifnot(
+    is.numeric(rep_range),
+    length(rep_range) == 2,
+    rep_range[1] <= rep_range[2]
+  )
   k_grid <- as.integer(names(CONC_MAP))
   rep_grid <- seq(rep_range[1], rep_range[2])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 28 - 31, Add defensive input validation in run_tpr_simulation for the rep_range parameter: verify rep_range is a length-2 numeric/integer vector with no NAs and that rep_range[1] <= rep_range[2]; if needed coerce to integers (or reject non-integer values) and stop() with a clear error message when the checks fail before constructing rep_grid. Reference rep_range, rep_grid and run_tpr_simulation when locating where to add the validation.
231-275: Output handlers re-assigned on each observe trigger. Defining output[[...]] <- renderPlotly(...) inside an observe() block means the handler is re-registered every time the reactive dependencies change. While this works, it's a subtle anti-pattern that can cause unnecessary overhead. Consider using a single renderPlotly with conditional logic based on is_response_curve(), or use observeEvent with once = TRUE for registration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 231 - 275, The output handlers (output[[NAMESPACE_EXPDES$result_plot]] and output[[NAMESPACE_EXPDES$download_future]]) are being re-registered inside observe(), causing repeated re-assignment each reactive trigger; move their registration out of observe() and use a single renderPlotly and downloadHandler that call a reactive or function (e.g., future_exp or a wrapped reactive like future_exp_reactive) which checks is_response_curve() or returns NULL/empty when not applicable, or alternatively register them once via observeEvent(..., once = TRUE); ensure designSampleSizePlots(future_exp(), ...) remains the rendering logic but the output assignment happens only once.
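A sketch of the register-once pattern (names such as `NAMESPACE_EXPDES$result_plot`, `future_exp()`, and `designSampleSizePlots()` are taken from the review above; the exact gating condition is an assumption about which mode this plot belongs to):

```r
# Registered once at module start-up; the reactive gate replaces the
# observe() wrapper, so the handler is never re-assigned.
output[[NAMESPACE_EXPDES$result_plot]] <- plotly::renderPlotly({
  req(is_response_curve())  # or the inverse, depending on the branch
  designSampleSizePlots(future_exp())
})
```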
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@R/module-expdes-server.R`:
- Around line 18-28: The roxygen block for run_tpr_simulation documents a
non-existent parameter `protein`; remove the stale `@param protein` line from
the function's documentation so the signature and docs match, ensuring the
documented params (e.g., rep_range, n_proteins) accurately reflect the
run_tpr_simulation signature; if you intended to accept a protein argument
instead, add `protein` to the run_tpr_simulation function signature and use it
accordingly—choose one fix and update the roxygen tags to match.
---
Nitpick comments:
In `@R/module-expdes-server.R`:
- Around line 296-308: The duplicated plotting logic in make_panel (used in the
download handler) repeats code from plot_tpr_power_curve; extract a shared
helper (e.g., make_tpr_panel) at module scope and have both plot_tpr_power_curve
and the download handler call it. The helper should accept parameters used in
both places (data, title, color, k_grid, ltype_values, and a show_legend flag)
and apply the same ggplot layers and theme, hiding the legend when show_legend
is FALSE; replace the existing make_panel implementation with a call to this new
make_tpr_panel to avoid duplication.
- Around line 28-31: Add defensive input validation in run_tpr_simulation for
the rep_range parameter: verify rep_range is a length-2 numeric/integer vector
with no NAs and that rep_range[1] <= rep_range[2]; if needed coerce to integers
(or reject non-integer values) and stop() with a clear error message when the
checks fail before constructing rep_grid. Reference rep_range, rep_grid and
run_tpr_simulation when locating where to add the validation.
- Around line 231-275: The output handlers
(output[[NAMESPACE_EXPDES$result_plot]] and
output[[NAMESPACE_EXPDES$download_future]]) are being re-registered inside
observe(), causing repeated re-assignment each reactive trigger; move their
registration out of observe() and use a single renderPlotly and downloadHandler
that call a reactive or function (e.g., future_exp or a wrapped reactive like
future_exp_reactive) which checks is_response_curve() or returns NULL/empty when
not applicable, or alternatively register them once via observeEvent(..., once =
TRUE); ensure designSampleSizePlots(future_exp(), ...) remains the rendering
logic but the output assignment happens only once.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 36d5a6c0-8744-46b5-be02-e222a7765cea
📒 Files selected for processing (1)
R/module-expdes-server.R
♻️ Duplicate comments (3)
R/module-expdes-server.R (3)
211-222: ⚠️ Potential issue | 🟠 Major — Protein selection is required but unused.
Line 212 requires protein_select to have a value, but the selected protein is never passed to run_tpr_simulation(). This creates a misleading UX where users must select a protein that has no effect on the simulation output. Either remove the requirement or wire the selection into the simulation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 211 - 222, The code currently requires input[[NAMESPACE_EXPDES$protein_select]] in the observeEvent but never uses it; either remove that req or pass the selected protein into run_tpr_simulation so the selection affects results. Update the observeEvent to forward input[[NAMESPACE_EXPDES$protein_select]] as an argument to run_tpr_simulation (e.g., a parameter named protein or protein_select) and ensure simulation_results(results) still receives the returned object, or alternatively delete the req(input[[NAMESPACE_EXPDES$protein_select]]) if the simulation should not depend on a protein selection; modify run_tpr_simulation signature if needed to accept the new parameter.
309-319: ⚠️ Potential issue | 🟡 Minor — PDF export differs from UI: two separate pages vs. combined subplot.
The UI renders a side-by-side subplot via plotly::subplot(), but the PDF download prints p_strong and p_weak as separate pages. Users downloading the plot will get a different layout than what they see in the app.
💡 Suggested fix: Use gridExtra or patchwork for combined PDF
 pdf(file, width = 16, height = 6)
-print(p_strong)
-print(p_weak)
+combined <- gridExtra::grid.arrange(p_strong, p_weak, ncol = 2)
+print(combined)
 dev.off()
Alternatively, use patchwork::wrap_plots(p_strong, p_weak, ncol = 2).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 309 - 319, The PDF export currently prints p_strong and p_weak on separate pages (pdf(); print(p_strong); print(p_weak); dev.off()), which differs from the UI subplot; combine the two ggplot objects into a single side-by-side layout (e.g., use patchwork::wrap_plots(p_strong, p_weak, ncol = 2) or gridExtra::grid.arrange(p_strong, p_weak, ncol = 2)) and print that combined object between pdf() and dev.off() so the exported PDF matches the plotly::subplot UI; ensure necessary package is imported/required before use.
152-159: ⚠️ Potential issue | 🟠 Major — prepared_response_data() is computed but never used in the simulation.
The reactive prepares dose-response fit data, but run_tpr_simulation() at line 218 doesn't receive it. The simulation runs a generic experiment regardless of the loaded dataset or selected protein.
Either wire prepared_response_data() into the simulation logic or remove it if the current generic simulation is intentional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 152 - 159, prepared_response_data() is computed but never used by the simulation; wire it into the simulation path by passing the prepared dose-response data into run_tpr_simulation (or using it inside run_tpr_simulation) instead of the generic dataset. Locate prepared_response_data (which builds protein_level_data and calls prepare_dose_response_fit) and change the simulation invocation at run_tpr_simulation to accept prepared_response_data() (or update run_tpr_simulation to read prepared_response_data()) so the simulation uses the selected dataset/protein; if the generic simulation is intentional, remove prepared_response_data() and its related code (protein_level_data, prepare_dose_response_fit) to avoid dead code.
🧹 Nitpick comments (2)
R/module-expdes-server.R (2)
42-56: Consider adding error handling for individual simulation runs. If futureExperimentSimulation fails for a specific parameter combination, the entire simulation will fail. Wrapping run_one in tryCatch would allow partial results and better diagnostics.
💡 Optional: Add resilience to individual run failures
 grid_df <- expand.grid(N_rep = rep_grid, k_conc = k_grid)
-results <- do.call(rbind, lapply(seq_len(nrow(grid_df)), function(i) {
-  run_one(n_rep = grid_df$N_rep[i], k_conc = grid_df$k_conc[i])
-}))
+results <- do.call(rbind, lapply(seq_len(nrow(grid_df)), function(i) {
+  tryCatch(
+    run_one(n_rep = grid_df$N_rep[i], k_conc = grid_df$k_conc[i]),
+    error = function(e) {
+      warning(sprintf("Simulation failed for N_rep=%d, k_conc=%d: %s",
+                      grid_df$N_rep[i], grid_df$k_conc[i], conditionMessage(e)))
+      NULL
+    }
+  )
+}))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 42 - 56, The run_one function currently calls futureExperimentSimulation directly, so a single failure aborts the whole batch; wrap the call to futureExperimentSimulation inside a tryCatch in run_one (catch error and warning), and on error return a safe, consistently-shaped data.frame (same columns as the successful branch: Interaction, TPR/Percent renamed to TPR, N_rep, NumConcs) or an empty row with NA values plus an extra column (e.g., ErrorMessage) containing the error text for diagnostics; then leave the outer loop that builds results via lapply/do.call(rbind) unchanged so partial results are combined and failures are visible.
290-307: Duplicated make_panel helper and linetype logic. The make_panel function and ltype_values computation are duplicated between plot_tpr_power_curve() (lines 75-100) and the download handler (lines 290-307). This increases maintenance burden and risks inconsistent behavior if one is updated without the other.
♻️ Extract shared helper to module scope
Consider extracting make_tpr_panel() as a module-level helper that both plot_tpr_power_curve() and the download handler can reuse:
# At module scope (after CONC_MAP)
make_tpr_panel <- function(data, title = NULL, color, k_grid, ltype_values, show_legend = TRUE) {
  p <- ggplot2::ggplot(data, ggplot2::aes(x = NumConcs, y = TPR, linetype = factor(N_rep))) +
    ggplot2::geom_line(linewidth = 1.2, color = color) +
    ggplot2::geom_point(size = 2, color = color) +
    ggplot2::scale_x_continuous(breaks = k_grid) +
    ggplot2::scale_y_continuous(limits = c(0, 100), breaks = seq(0, 100, 20)) +
    ggplot2::scale_linetype_manual(values = ltype_values) +
    ggplot2::labs(title = title, x = "Number of concentrations",
                  y = "True Positive Rate (%)", linetype = "Replicates") +
    ggplot2::theme_bw(base_size = 14) +
    ggplot2::theme(plot.title = ggplot2::element_text(face = "bold", hjust = 0.5))
  if (!show_legend) p <- p + ggplot2::theme(legend.position = "none")
  p
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 290 - 307, The duplicated make_panel and ltype_values logic should be extracted to a single module-level helper (e.g., make_tpr_panel) and reused by both plot_tpr_power_curve() and the download handler: create make_tpr_panel(data, title = NULL, color, k_grid, ltype_values, show_legend = TRUE) that contains the ggplot construction and linetype scaling, remove the duplicated make_panel and ltype_values in both places, compute k_grid and ltype_values once at module scope (using the same rep_levels logic) and pass them into make_tpr_panel from plot_tpr_power_curve() and the download handler, and ensure show_legend toggles legend visibility so both call sites render correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@R/module-expdes-server.R`:
- Around line 211-222: The code currently requires
input[[NAMESPACE_EXPDES$protein_select]] in the observeEvent but never uses it;
either remove that req or pass the selected protein into run_tpr_simulation so
the selection affects results. Update the observeEvent to forward
input[[NAMESPACE_EXPDES$protein_select]] as an argument to run_tpr_simulation
(e.g., a parameter named protein or protein_select) and ensure
simulation_results(results) still receives the returned object, or alternatively
delete the req(input[[NAMESPACE_EXPDES$protein_select]]) if the simulation
should not depend on a protein selection; modify run_tpr_simulation signature if
needed to accept the new parameter.
- Around line 309-319: The PDF export currently prints p_strong and p_weak on
separate pages (pdf(); print(p_strong); print(p_weak); dev.off()), which differs
from the UI subplot; combine the two ggplot objects into a single side-by-side
layout (e.g., use patchwork::wrap_plots(p_strong, p_weak, ncol = 2) or
gridExtra::grid.arrange(p_strong, p_weak, ncol = 2)) and print that combined
object between pdf() and dev.off() so the exported PDF matches the
plotly::subplot UI; ensure necessary package is imported/required before use.
- Around line 152-159: prepared_response_data() is computed but never used by
the simulation; wire it into the simulation path by passing the prepared
dose-response data into run_tpr_simulation (or using it inside
run_tpr_simulation) instead of the generic dataset. Locate
prepared_response_data (which builds protein_level_data and calls
prepare_dose_response_fit) and change the simulation invocation at
run_tpr_simulation to accept prepared_response_data() (or update
run_tpr_simulation to read prepared_response_data()) so the simulation uses the
selected dataset/protein; if the generic simulation is intentional, remove
prepared_response_data() and its related code (protein_level_data,
prepare_dose_response_fit) to avoid dead code.
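For the PDF-layout item in the list above, a minimal sketch of one shared composition helper (the function name and panel objects are illustrative, not from the actual module):

# Sketch: one helper used by both the in-app render and the PDF download,
# so the exported figure matches the plotly::subplot layout in the UI.
# Assumes patchwork is available and p_strong / p_weak are ggplot panels.
compose_panels <- function(p_strong, p_weak) {
  patchwork::wrap_plots(p_strong, p_weak, ncol = 2)
}

# In the download handler, replace the two print() calls with:
# pdf(file, width = 12, height = 5)
# print(compose_panels(p_strong, p_weak))
# dev.off()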
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 38d6e54d-3027-4d97-ade4-1ea24fe89dbf
📒 Files selected for processing (1)
R/module-expdes-server.R
Actionable comments posted: 2
♻️ Duplicate comments (2)
R/module-expdes-server.R (2)
159-167: ⚠️ Potential issue | 🟠 Major
`protein_select` is required but not used by the simulation.
The run path currently ignores the selected protein and prepared_response_data(), so results are unchanged regardless of the chosen protein. This makes the response-mode selector effectively a no-op.
Possible wiring direction

- results <- run_tpr_simulation(
-   rep_range = input[[NAMESPACE_EXPDES$rep_range]],
-   n_proteins = 1000
- )
+ selected_protein_data <- prepared_response_data() |>
+   dplyr::filter(protein == input[[NAMESPACE_EXPDES$protein_select]])
+
+ results <- run_tpr_simulation(
+   rep_range = input[[NAMESPACE_EXPDES$rep_range]],
+   n_proteins = 1000,
+   protein_data = selected_protein_data
+ )

Also applies to: 218-228
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 159 - 167, The reactive prepared_response_data() is building the dose-response dataset but it never applies the selected protein (protein_select) so the simulation is invariant to the protein choice; update prepared_response_data() to read the selected protein (e.g., protein_select()) and filter preprocess_data()$ProteinLevelData to that protein before merging with statmodel_contrast$matrix, then call prepare_dose_response_fit on the filtered protein_level_data; also ensure any downstream run path (the code referenced around the run path lines using prepared_response_data()) consumes prepared_response_data() rather than raw preprocess_data so the selected protein actually affects results.
316-326: ⚠️ Potential issue | 🟡 Minor
PDF export still diverges from the in-app figure layout.
The app renders a side-by-side combined plot, but the PDF path prints two separate panels/pages. Consider sharing one composition helper between render and download to keep artifacts consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@R/module-expdes-server.R` around lines 316 - 326, The PDF export prints p_strong and p_weak on separate pages because the download path prints them individually instead of using the same side-by-side composition used in the app; create a single composition helper (e.g., make_composite or compose_panels) that takes the two ggplot objects produced by make_panel and returns one combined plot (using patchwork::`/` or `|`, cowplot::plot_grid, or gridExtra::grid.arrange wrapped as a grob). Replace the two separate print calls in the download code with a single print (or grid::grid.draw) of the combined plot object and reuse that same helper from the in-app render to ensure identical layout for both render and PDF output.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1f63be72-bd50-44ef-baf1-092a1ee2d173
📒 Files selected for processing (1)
R/module-expdes-server.R
R/module-expdes-server.R
Outdated
run_tpr_simulation <- function(rep_range, n_proteins = 1000) {
  k_grid <- as.integer(names(CONC_MAP))
  rep_grid <- seq(rep_range[1], rep_range[2])

Validate rep_range before building the simulation grid.
run_tpr_simulation() is exported; it should reject malformed ranges (length != 2, NA, non-positive, or min > max) early with a clear message.
Suggested guard

  run_tpr_simulation <- function(rep_range, n_proteins = 1000) {
+   if (length(rep_range) != 2 || anyNA(rep_range) ||
+       any(rep_range < 1) || rep_range[1] > rep_range[2]) {
+     stop("`rep_range` must be c(min, max) with positive integers and min <= max.")
+   }
    k_grid <- as.integer(names(CONC_MAP))
    rep_grid <- seq(rep_range[1], rep_range[2])

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@R/module-expdes-server.R` around lines 27 - 30, Add explicit validation at
the start of run_tpr_simulation to reject malformed rep_range: check that
rep_range has length 2, contains no NA, is numeric/integer, both values are
positive, and rep_range[1] <= rep_range[2]; if any check fails call stop() with
a clear error message explaining the expected format (e.g., "rep_range must be a
length-2 numeric vector with non-NA, positive values and min <= max"). Place
this guard before k_grid and rep_grid are constructed so invalid input is
rejected early.
R/module-expdes-server.R
Outdated
content = function(file) {
  req(simulation_results())
  results <- simulation_results()
🧩 Analysis chain
🌐 Web query:
In Shiny, what is the recommended error-handling pattern inside downloadHandler(content=...) when req()/stop() would be triggered, and does rstudio/shiny#2070 still apply?
💡 Result:
Recommended pattern (when req() / stop() might be triggered)
Don’t let req()/validate()/stop() fire inside downloadHandler(content=...). Instead, run all checks + expensive/fragile logic before the download is initiated, and only have content do a “dumb” write of already-prepared data (or a known-good temp file).
Common implementation pattern:
- User clicks an actionButton() ("Prepare download")
- An observeEvent() does req(...)/tryCatch(...); on failure, show a modal/notification and do not trigger a download
- On success, store the result in a reactiveVal() (or create a temp file), then programmatically click a hidden downloadButton()
- downloadHandler(content=...) just writes the stored result
This is the core idea shown in the Stack Overflow answer (hidden downloadButton + visible actionButton + shinyjs::click() only after success). [1]
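A minimal self-contained sketch of that pattern (input/output ids and the run_analysis() helper are illustrative placeholders):

library(shiny)
library(shinyjs)

ui <- fluidPage(
  useShinyjs(),
  actionButton("prepare", "Prepare download"),
  # Hidden real download button, clicked programmatically after validation
  hidden(downloadButton("download", "Download"))
)

server <- function(input, output, session) {
  prepared <- reactiveVal(NULL)

  observeEvent(input$prepare, {
    # run_analysis() stands in for the fragile/expensive computation
    result <- tryCatch(run_analysis(), error = function(e) NULL)
    if (is.null(result)) {
      showNotification("Preparation failed; nothing to download.", type = "error")
    } else {
      prepared(result)
      click("download")  # shinyjs: trigger the hidden download button
    }
  })

  output$download <- downloadHandler(
    filename = function() "results.csv",
    content = function(file) {
      # "Dumb" write of already-prepared data; no req()/stop() here
      write.csv(prepared(), file, row.names = FALSE)
    }
  )
}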
Does rstudio/shiny#2070 still apply?
Yes—the issue is still Open (opened May 23, 2018) and describes that including stop()/req() in downloadHandler()’s filename or content “simply crashes the app.” [2]
Sources: [1] [2]
Remove req() from inside downloadHandler(content=...) and write an error message to the file instead.
Calling req() inside the content function can crash or disconnect the Shiny session (rstudio/shiny#2070). Instead, guard with explicit checks like is.null(simulation_results()) and write an informative error message to the file so users receive meaningful feedback rather than a broken download.
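A sketch of the suggested guard inside the content function (the output id and filename are illustrative; simulation_results() is the reactive named in the comment):

output$download_results <- downloadHandler(
  filename = function() "tpr_simulation_results.csv",
  content = function(file) {
    results <- simulation_results()
    if (is.null(results)) {
      # Avoid req()/stop() here (rstudio/shiny#2070): write a readable
      # message so the user gets feedback instead of a broken download.
      writeLines("No simulation results available. Run the simulation first.", file)
      return(invisible(NULL))
    }
    write.csv(results, file, row.names = FALSE)
  }
)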
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@R/module-expdes-server.R` around lines 293 - 295, Inside the
downloadHandler's content function (the content = function(file) block) remove
the req(simulation_results()) call and replace it with an explicit null check on
simulation_results(); if simulation_results() is NULL, write a clear error
message to the provided file (using writeLines or similar) and return early,
otherwise proceed to assign results <- simulation_results() and continue the
normal file writing; reference symbols: downloadHandler, content =
function(file), simulation_results(), and results.
Force-pushed from 2cfbade to ca04912
R/statmodel-server-download-code.R
Outdated
codes = paste(codes, "library(MSstatsResponse)\n", sep = "")
codes = paste(codes, "condition_names = c(\"",
              paste(comp_mat$GROUP, collapse = "\",\""), "\")\n", sep = "")
codes = paste(codes, "contrast_matrix = setup_metadata(condition_names)\n", sep = "")
There's no function called setup_metadata.
Preferably, you take what the existing contrast matrix is and create a data table/frame containing those specific values.
R/statmodel-server-download-code.R
Outdated
codes = paste(codes, "\n# Prepare data for dose response fitting\n", sep = "")
codes = paste(codes, "protein_level_data = merge(summarized$ProteinLevelData, contrast_matrix, by = \"GROUP\")\n", sep = "")
codes = paste(codes, "prepared_data = prepare_dose_response_fit(protein_level_data)\n", sep = "")
Users don't have access to the prepare_dose_response_fit function because it's a private function in MSstatsShiny. This function basically does column mapping to the appropriate column name format for MSstatsResponse.
Could you review this function? I expect the downloaded code to perform this column mapping explicitly, e.g.:
protein_level_data$protein = protein_level_data$Protein
protein_level_data$response = protein_level_data$LogIntensities
Mapping the intervention_value and intervention_type will be slightly trickier and may require some refactoring of the code to make it clean.
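One possible shape for the emitted mapping (the intervention_* lines are assumptions about how dose information might be encoded, not the confirmed MSstatsResponse format):

# Sketch of code the download could emit instead of calling the private helper.
protein_level_data$protein  <- protein_level_data$Protein
protein_level_data$response <- protein_level_data$LogIntensities
# Hypothetical: derive dose metadata from a GROUP label such as "Dose_10";
# the real parsing depends on how conditions are named in the experiment.
protein_level_data$intervention_type  <- "dose"
protein_level_data$intervention_value <- as.numeric(sub(".*_", "", protein_level_data$GROUP))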
…d of calling private function
Motivation and Context
The app previously supported downloading analysis code for standard comparative proteomics but lacked support for dose–response (response-curve) studies and tools to plan future dose–response experiments. This PR adds: (1) generation of reproducible dose–response analysis code (MSstatsResponse) for users to download and run locally, (2) simulation utilities and plotting for true-positive-rate (TPR) power curves to plan future experiments, and (3) UI/server plumbing to expose these capabilities in the "Design future experiments" (Expdes) workflow.
Short summary of the solution
Detailed changes
R/statmodel-server-download-code.R
R/module-expdes-server.R
R/module-expdes-ui.R
R/module-statmodel-server.R
R/server.R
R/constants.R
R/statmodel-server-visualization.R
NAMESPACE & R/MSstatsShiny.R
Documentation (man/*.Rd)
Tests
Unit tests added or modified
Coding guidelines / notable issues (violations or concerns)
Mismatch between fitted-model and visualization ratio flag
Emitted code contains hardcoded placeholders and fixed parameters
Generated/downloaded scripts lack defensive checks and error handling
Documentation mismatch in man/run_tpr_simulation.Rd
Module size and import surface
Tests assert on exact emitted code strings (brittle)