Conversation
…into learningFixes
Pull request overview
This PR bundles a set of “learning” fixes intended for the v1.8.1 release, spanning frontend UI tweaks, improved error propagation from the Go/Python execution layer, and updated evaluation/metrics handling in the Python ML pipeline.
Changes:
- Update model-result rendering to incorporate `overall_metrics` (e.g., optimized-threshold metrics) and enhance dataset tooltip copy.
- Improve CodeEditor error handling to accept structured backend errors (message + stack trace).
- Enhance Go server execution error reporting when Python exits before emitting a response, and adjust Python evaluation output column naming.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| renderer/components/mainPages/settings.jsx | Re-run initial settings/status checks when pythonEmbedded changes. |
| renderer/components/learning/results/node/modelsResults.jsx | Incorporate overall_metrics into displayed model metrics. |
| renderer/components/learning/nodesTypes/datasetNode.jsx | Tooltip copy/formatting improvements for class-imbalance guidance. |
| renderer/components/flow/codeEditor.jsx | Support structured error objects and render stack traces in the UI. |
| pythonCode/modules/evaluation/predict_test.py | Rename prediction_score to confidence_score in prediction output. |
| pythonCode/med_libs/MEDml/nodes/ModelHandler.py | Threshold-optimization metric handling + recalculation of reported metrics. |
| go_server/src/utils.go | Capture stderr for early Python crashes and return structured error payloads. |
| electron-builder.yml | Package baseFiles/emptyNotebook.ipynb into app resources. |
Comments suppressed due to low confidence (2)
renderer/components/flow/codeEditor.jsx:227
- When a file loads successfully, `error`/`stackTrace` are never cleared. If the user previously hit an error, the component will keep rendering the error view (and/or a stale stack trace) even after a subsequent successful load. Clear `error` and `stackTrace` when starting a load and on the success path; also clear `stackTrace` when `response.error` is a string or when the object error lacks `stack_trace`.
```jsx
if (response.error) {
  if (typeof response.error === "object") {
    setError(response.error.message || "An error occurred while loading the file.")
    toast.error("Error loading file: " + (response.error.message || "Unknown error"))
    if (response.error.stack_trace) {
      setStackTrace(response.error.stack_trace)
    }
  } else if (typeof response.error === "string") {
    setError(response.error)
    toast.error("Error opening file: " + response.error)
  }
} else {
  setContent(response.content)
  setMode(response.language)
  toast.success("Opened file successfully")
}
```
pythonCode/med_libs/MEDml/nodes/ModelHandler.py:752
- There are two `set_model` methods; the second definition overwrites the first, making the try/except handling and the tuning-settings update in the first method unreachable. Remove the duplicate and keep a single implementation (or rename them if both behaviors are needed).
```python
def set_model(self, model_id: str) -> None:
    try:
        model_obj = self.global_config_json['nodes'][model_id]
        self.config_json['data']['estimator'] = {
            "type": model_obj['data']['selection'],
            "settings": model_obj['data']['settings']
        }
        if self.isTuningEnabled:
            self.config_json['data']['internal']['settingsTuning'] = model_obj['data']['internal'].get('settingsTuning', {})
    except KeyError as e:
        print(f"ERROR: Failed to set model {model_id}. Missing key: {e}")
        raise ValueError(f"Model configuration for {model_id} not found in the global config.")
    except Exception as e:
        print(f"ERROR: An error occurred while setting the model: {e}")
        raise ValueError(f"An error occurred while setting model {model_id}.")

def set_model(self, model_id: str) -> None:
    model_obj = self.global_config_json['nodes'][model_id]
    self.config_json['data']['estimator'] = {
        "type": model_obj['data']['selection'],
        "settings": model_obj['data']['settings']
    }
```
```diff
 useEffect(() => {
   let allModelsData = []
   if (models.length > 0) {
     models.forEach((model) => {
       if (!model.metrics) return
       let modifiedRow = { ...model.metrics }
-      modifiedRow["Parameters"] = model.params
-      modifiedRow["OverallMetrics"] = selectedResults?.data?.overall_metrics
-      modifiedRow = Object.assign({ Name: model.name }, modifiedRow)
-      allModelsData.push(modifiedRow)
-    })
+
+      // FIX: if overall_metrics exists, use these new metrics (optimal threshold)
+      const overallMetrics = selectedResults?.data?.overall_metrics
+      if (overallMetrics) {
+        Object.keys(overallMetrics).forEach((key) => {
+          modifiedRow[key] = overallMetrics[key].mean
+        })
+      }
```
This effect reads selectedResults (for overall_metrics) but it isn’t included in the dependency array. That can lead to stale metrics if selectedResults changes without triggering a models state change, and it will also trip the React hooks exhaustive-deps rule. Include selectedResults (or selectedResults.data.overall_metrics) in the dependency list, or derive overallMetrics earlier and pass it in.
```diff
 from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                              matthews_corrcoef, precision_score, recall_score,
-                             roc_auc_score)
+                             roc_auc_score, roc_curve)
```
roc_curve is imported but never used. This will fail linting/CI in setups that treat unused imports as errors. Remove the unused import (or add the missing usage if intended).
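If `roc_curve` was imported in anticipation of the threshold-optimization work, a typical use is picking the probability cut-off that maximizes the Youden index. This is an illustrative sketch, not the project's actual code; `youden_threshold` is a hypothetical helper name.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_proba):
    """Hypothetical helper: pick the cut-off maximizing the Youden index.

    roc_curve returns (fpr, tpr) for every candidate threshold; the Youden
    index J = TPR - FPR is largest at the "optimal" operating point.
    """
    fpr, tpr, thresholds = roc_curve(y_true, y_proba)
    return thresholds[np.argmax(tpr - fpr)]
```

On a perfectly separable toy set (`y=[0,0,1,1]`, scores `[0.1,0.2,0.8,0.9]`) this selects `0.8`, the lowest score that still classifies both positives correctly with no false positives.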
```python
y_proba = model.predict_proba(X_test_fold)[:, 1] if hasattr(model, 'predict_proba') else None

# Calculate all metrics manually
if y_proba is not None:
    # FIX 2: read probability_threshold with fallback to threshold
    threshold = getattr(model, 'probability_threshold', getattr(model, 'threshold', 0.5))
    y_pred = (y_proba >= threshold).astype(int)
else:
    y_pred = model.predict(X_test_fold)
```
__custom_train_and_evaluate now derives y_pred by thresholding predict_proba()[:, 1] for any classifier that exposes predict_proba. This is incorrect for multiclass classification (it will force binary predictions based on the second column) and can corrupt all fold metrics. Gate the thresholding logic to binary problems only (e.g., len(np.unique(y_test_fold)) == 2 and/or only when optimize_threshold was applied / the model exposes probability_threshold).
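A sketch of the gating the comment suggests: apply the custom threshold only when the reference labels are binary, and fall back to `model.predict()` otherwise. Names are illustrative (`predict_with_optional_threshold` is a hypothetical helper); the attribute lookups follow the snippet above.

```python
import numpy as np

def predict_with_optional_threshold(model, X, y_reference):
    """Hypothetical helper: threshold predict_proba()[:, 1] only for binary
    problems, so multiclass models keep their native predict() output."""
    is_binary = len(np.unique(y_reference)) == 2
    if is_binary and hasattr(model, 'predict_proba'):
        threshold = getattr(model, 'probability_threshold',
                            getattr(model, 'threshold', 0.5))
        y_proba = np.asarray(model.predict_proba(X))[:, 1]
        return (y_proba >= threshold).astype(int)
    # Multiclass (or no predict_proba): never force binary labels.
    return model.predict(X)
```

The binary check could also be tightened to "only when `optimize_threshold` was applied / the model exposes `probability_threshold`", as the comment notes.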
```diff
 stdout, err := Scripts[id].Cmd.StdoutPipe()
 Mu.Unlock()
 if err != nil {
-    log.Println("Error getting stdout pipe")
-    log.Println(err.Error())
+    log.Println("Error getting stdout pipe: " + err.Error())
 }
 Mu.Lock()
 stderr, err := Scripts[id].Cmd.StderrPipe()
 Mu.Unlock()
 if err != nil {
-    log.Println("Error getting stderr pipe")
-    log.Println(err.Error())
+    log.Println("Error getting stderr pipe: " + err.Error())
 }
```
If StdoutPipe() / StderrPipe() fails, the function logs the error but continues. That can pass a nil reader into copyOutput / copyOutputErr and lead to panics or hangs later. Return early with the pipe error (and consider cleaning up Scripts[id]) instead of continuing.
```diff
 response := ""
-go copyOutput(stdout, &response)
-go copyOutput(stderr, &response)
-err = Scripts[id].Cmd.Wait()
-if err != nil {
-    log.Println("Error waiting for command to finish")
-    return "", err
+// stderrBuf collects every line from stderr so we can surface it on early
+// crashes (e.g. ModuleNotFoundError) that never reach send_response().
+var stderrBuf strings.Builder
+var stderrMu sync.Mutex
+
+var wg sync.WaitGroup
+wg.Add(2)
+go func() {
+    defer wg.Done()
+    copyOutput(stdout, &response)
+}()
+go func() {
+    defer wg.Done()
+    copyOutputErr(stderr, &response, &stderrBuf, &stderrMu)
+}()
+wg.Wait()
```
response is written from both stdout and stderr goroutines without synchronization (*response = ...). This introduces a data race and nondeterministic behavior if both streams emit a response-ready signal or if a crash path writes while stdout is still running. Protect response with a mutex/atomic or send the response via a channel and only set it from a single goroutine.
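The channel-based suggestion can be sketched in Python, with a `queue.Queue` standing in for a Go channel (names here are illustrative): both producer threads send tagged lines, and only the single consumer ever assigns `response`, so no lock on it is needed.

```python
import queue
import threading

def pump(lines, q, tag):
    # Producer: forward each line tagged with its stream, then a sentinel.
    for line in lines:
        q.put((tag, line))
    q.put((tag, None))

q = queue.Queue()
producers = [
    threading.Thread(target=pump, args=(["ok-response"], q, "stdout")),
    threading.Thread(target=pump, args=(["Traceback: boom"], q, "stderr")),
]
for t in producers:
    t.start()

response, stderr_log, done = "", [], 0
while done < len(producers):
    tag, line = q.get()  # single consumer: the only writer of `response`
    if line is None:
        done += 1
    elif tag == "stdout":
        response = line
    else:
        stderr_log.append(line)

for t in producers:
    t.join()
```

The Go equivalent would be a `chan` consumed by one goroutine, or a `sync.Mutex` guarding every `*response = ...` write.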
…ual pycaret exp and not fold exp)
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.
Comments suppressed due to low confidence (2)
renderer/components/flow/codeEditor.jsx:227
- When a file load succeeds (or when an error without `stack_trace` occurs), the component doesn't clear `error`/`stackTrace`. This can leave the UI stuck in the error view or show a stale stack trace from a previous failure. Clear `error` and `stackTrace` at the start of `loadFileContent` and/or in the success path, and ensure the string-error path also resets `stackTrace`.
```jsx
if (response.error) {
  if (typeof response.error === "object") {
    setError(response.error.message || "An error occurred while loading the file.")
    toast.error("Error loading file: " + (response.error.message || "Unknown error"))
    if (response.error.stack_trace) {
      setStackTrace(response.error.stack_trace)
    }
  } else if (typeof response.error === "string") {
    setError(response.error)
    toast.error("Error opening file: " + response.error)
  }
} else {
  setContent(response.content)
  setMode(response.language)
  toast.success("Opened file successfully")
}
```
pythonCode/med_libs/MEDml/nodes/ModelHandler.py:855
`ModelHandler` defines `set_model` twice. In Python the second definition overwrites the first, so the error handling and tuning-update logic in the first implementation becomes dead code. Remove one of the definitions (or merge their behavior) so there is a single authoritative `set_model` implementation.
```python
def set_model(self, model_id: str) -> None:
    try:
        model_obj = self.global_config_json['nodes'][model_id]
        self.config_json['data']['estimator'] = {
            "type": model_obj['data']['selection'],
            "settings": model_obj['data']['settings']
        }
        if self.isTuningEnabled:
            self.config_json['data']['internal']['settingsTuning'] = model_obj['data']['internal'].get('settingsTuning', {})
    except KeyError as e:
        print(f"ERROR: Failed to set model {model_id}. Missing key: {e}")
        raise ValueError(f"Model configuration for {model_id} not found in the global config.")
    except Exception as e:
        print(f"ERROR: An error occurred while setting the model: {e}")
        raise ValueError(f"An error occurred while setting model {model_id}.")

def set_model(self, model_id: str) -> None:
    model_obj = self.global_config_json['nodes'][model_id]
    self.config_json['data']['estimator'] = {
        "type": model_obj['data']['selection'],
        "settings": model_obj['data']['settings']
    }
```
```diff
 useEffect(() => {
   ipcRenderer.invoke("get-settings").then((receivedSettings) => {
     console.log("received settings", receivedSettings)
     setSettings(receivedSettings)
     if (pythonEmbedded.pythonEmbedded) {
       setCondaPath(pythonEmbedded.pythonEmbedded)
     } else if (receivedSettings?.condaPath) {
       setCondaPath(receivedSettings?.condaPath)
     }
     if (receivedSettings?.seed) {
       setSeed(receivedSettings?.seed)
     }
   })
   // ipcRenderer.invoke("server-is-running").then((status) => {
   //   setServerIsRunning(status)
   //   console.log("server is running", status)
   // })
   checkMongoIsRunning()
   checkServer()
   getJupyterStatus()
-}, [])
+}, [pythonEmbedded])
```
```jsx
// FIX: if overall_metrics exists, use these new metrics (optimal threshold)
const overallMetrics = selectedResults?.data?.overall_metrics
if (overallMetrics) {
  Object.keys(overallMetrics).forEach((key) => {
    modifiedRow[key] = overallMetrics[key].mean
  })
}

modifiedRow["Parameters"] = model.params
modifiedRow["OverallMetrics"] = overallMetrics
modifiedRow = Object.assign({ Name: model.name }, modifiedRow)
allModelsData.push(modifiedRow)
```
```python
self.optimize_threshold = self.config_json['data']['internal'].get('optimizeThreshold', False)
if self.optimize_threshold:
    self.threshold_optimization_metric = self.config_json['data']['internal'].get('threshOptimizationMetric', 'Accuracy')
    # Normalize the metric name for PyCaret
    METRIC_NAME_MAP = {
        'Youden': 'Youden Index',
        'BAC': 'Balanced Accuracy',
        'Specificity': 'Specificity',
        'NPV': 'NPV',
    }
    self.threshold_optimization_metric = METRIC_NAME_MAP.get(
        self.threshold_optimization_metric,
        self.threshold_optimization_metric
    )
```