
Learning fixes for v1.8.1 #550

Merged
MahdiAll99 merged 19 commits into develop from learningFixes
Apr 10, 2026

Conversation

@MahdiAll99
Contributor

No description provided.

Contributor

Copilot AI left a comment


Pull request overview

This PR bundles a set of “learning” fixes intended for the v1.8.1 release, spanning frontend UI tweaks, improved error propagation from the Go/Python execution layer, and updated evaluation/metrics handling in the Python ML pipeline.

Changes:

  • Update model-result rendering to incorporate overall_metrics (e.g., optimized-threshold metrics) and enhance dataset tooltip copy.
  • Improve CodeEditor error handling to accept structured backend errors (message + stack trace).
  • Enhance Go server execution error reporting when Python exits before emitting a response, and adjust Python evaluation output column naming.
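The "structured backend errors" mentioned above (a message plus a stack trace that the frontend can render separately) can be sketched as follows. This is a hypothetical illustration of the payload shape implied by the snippets in this review; the field names message and stack_trace mirror those snippets, but run_node and the wrapper logic are assumptions, not the repository's code.

```python
import traceback


def run_node(fn):
    """Run a pipeline step; wrap any failure in a structured error dict."""
    try:
        return {"result": fn()}
    except Exception as exc:
        return {
            "error": {
                "message": str(exc),
                # Full formatted traceback, so the UI can show it on demand.
                "stack_trace": traceback.format_exc(),
            }
        }


payload = run_node(lambda: 1 / 0)
print(payload["error"]["message"])  # division by zero
```

A frontend receiving this shape can branch on whether error is a string or an object, exactly as the codeEditor.jsx snippet below does.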

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 5 comments.

Summary per file:

  • renderer/components/mainPages/settings.jsx: Re-run initial settings/status checks when pythonEmbedded changes.
  • renderer/components/learning/results/node/modelsResults.jsx: Incorporate overall_metrics into displayed model metrics.
  • renderer/components/learning/nodesTypes/datasetNode.jsx: Tooltip copy/formatting improvements for class-imbalance guidance.
  • renderer/components/flow/codeEditor.jsx: Support structured error objects and render stack traces in the UI.
  • pythonCode/modules/evaluation/predict_test.py: Rename prediction_score to confidence_score in prediction output.
  • pythonCode/med_libs/MEDml/nodes/ModelHandler.py: Threshold-optimization metric handling + recalculation of reported metrics.
  • go_server/src/utils.go: Capture stderr for early Python crashes and return structured error payloads.
  • electron-builder.yml: Package baseFiles/emptyNotebook.ipynb into app resources.
Comments suppressed due to low confidence (2)

renderer/components/flow/codeEditor.jsx:227

  • When a file loads successfully, error / stackTrace are never cleared. If the user previously hit an error, the component will keep rendering the error view (and/or stale stack trace) even after a subsequent successful load. Clear error and stackTrace when starting a load and on the success path; also clear stackTrace when response.error is a string or when the object error lacks stack_trace.
        if (response.error) {
          if (typeof response.error === "object") {
            setError(response.error.message || "An error occurred while loading the file.")
            toast.error("Error loading file: " + (response.error.message || "Unknown error"))
            if (response.error.stack_trace) {
              setStackTrace(response.error.stack_trace)
            }
          } else if (typeof response.error === "string") {
            setError(response.error)
            toast.error("Error opening file: " + response.error)
          }
        }
        else {
          setContent(response.content)
          setMode(response.language)
          toast.success("Opened file successfully")
        }

pythonCode/med_libs/MEDml/nodes/ModelHandler.py:752

  • There are two set_model methods; the second definition overwrites the first, making the try/except handling and the tuning settings update in the first method unreachable. Remove the duplicate and keep a single implementation (or rename them if both behaviors are needed).
    def set_model(self, model_id: str) -> None:
        try:
            model_obj = self.global_config_json['nodes'][model_id]
            self.config_json['data']['estimator'] = {
                "type": model_obj['data']['selection'],
                "settings": model_obj['data']['settings']
            }
            if self.isTuningEnabled:
                self.config_json['data']['internal']['settingsTuning'] = model_obj['data']['internal'].get('settingsTuning', {})
        except KeyError as e:
            print(f"ERROR: Failed to set model {model_id}. Missing key: {e}")
            raise ValueError(f"Model configuration for {model_id} not found in the global config.")
        except Exception as e:
            print(f"ERROR: An error occurred while setting the model: {e}")
            raise ValueError(f"An error occurred while setting model {model_id}.")

    def set_model(self, model_id: str) -> None:
        model_obj = self.global_config_json['nodes'][model_id]
        self.config_json['data']['estimator'] = {
            "type": model_obj['data']['selection'],
            "settings": model_obj['data']['settings']
        }
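The overwrite behavior this comment describes can be demonstrated in isolation: when a class body defines the same method name twice, Python simply rebinds the name, so the second definition silently replaces the first and any extra logic in the first body becomes dead code. (Handler here is a minimal stand-in, not the actual ModelHandler class.)

```python
class Handler:
    def set_model(self):
        return "first (with error handling)"

    def set_model(self):  # rebinds the name; the definition above is now unreachable
        return "second (bare)"


h = Handler()
print(h.set_model())  # second (bare)
```

This is why the review asks for a single authoritative set_model: Python raises no error or warning for the duplicate, so the regression is easy to miss.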


Comment on lines 47 to +60
  useEffect(() => {
    let allModelsData = []
    if (models.length > 0) {
      models.forEach((model) => {
        if (!model.metrics) return
        let modifiedRow = { ...model.metrics }
        modifiedRow["Parameters"] = model.params
        modifiedRow["OverallMetrics"] = selectedResults?.data?.overall_metrics
        modifiedRow = Object.assign({ Name: model.name }, modifiedRow)
        allModelsData.push(modifiedRow)
      })
      models.forEach((model) => {
        if (!model.metrics) return
        let modifiedRow = { ...model.metrics }

        // FIX: if overall_metrics exists, use these new metrics (optimal threshold)
        const overallMetrics = selectedResults?.data?.overall_metrics
        if (overallMetrics) {
          Object.keys(overallMetrics).forEach((key) => {
            modifiedRow[key] = overallMetrics[key].mean
          })
        }

Copilot AI Apr 9, 2026


This effect reads selectedResults (for overall_metrics) but it isn’t included in the dependency array. That can lead to stale metrics if selectedResults changes without triggering a models state change, and it will also trip the React hooks exhaustive-deps rule. Include selectedResults (or selectedResults.data.overall_metrics) in the dependency list, or derive overallMetrics earlier and pass it in.

Comment on lines 11 to 14
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, precision_score, recall_score,
                             roc_auc_score)
                             roc_auc_score, roc_curve)


Copilot AI Apr 9, 2026


roc_curve is imported but never used. This will fail linting/CI in setups that treat unused imports as errors. Remove the unused import (or add the missing usage if intended).

Comment on lines 283 to +291
    y_proba = model.predict_proba(X_test_fold)[:, 1] if hasattr(model, 'predict_proba') else None

    # Calculate all metrics manually
    if y_proba is not None:
        # FIX 2: read probability_threshold with fallback to threshold
        threshold = getattr(model, 'probability_threshold', getattr(model, 'threshold', 0.5))
        y_pred = (y_proba >= threshold).astype(int)
    else:
        y_pred = model.predict(X_test_fold)


Copilot AI Apr 9, 2026


__custom_train_and_evaluate now derives y_pred by thresholding predict_proba()[:, 1] for any classifier that exposes predict_proba. This is incorrect for multiclass classification (it will force binary predictions based on the second column) and can corrupt all fold metrics. Gate the thresholding logic to binary problems only (e.g., len(np.unique(y_test_fold)) == 2 and/or only when optimize_threshold was applied / the model exposes probability_threshold).

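One way to gate the thresholding to binary problems, as the comment suggests: check the number of classes in the fold before applying the probability threshold, and otherwise fall back to the model's own predict(). This is a sketch of the suggested fix, not the repository's actual code; the names (model, X_test_fold, y_test_fold) follow the snippet above, and the exact gating condition is an assumption.

```python
import numpy as np


def fold_predictions(model, X_test_fold, y_test_fold):
    """Threshold predict_proba only for binary problems; else use predict()."""
    is_binary = len(np.unique(y_test_fold)) == 2
    if is_binary and hasattr(model, "predict_proba"):
        y_proba = model.predict_proba(X_test_fold)[:, 1]
        # Fall back from probability_threshold to threshold to 0.5, as in the PR.
        threshold = getattr(model, "probability_threshold",
                            getattr(model, "threshold", 0.5))
        return (y_proba >= threshold).astype(int)
    # Multiclass (or no predict_proba): let the model pick classes itself.
    return model.predict(X_test_fold)
```

With this guard, multiclass folds never have their predictions forced into {0, 1} by column 1 of predict_proba.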
Comment on lines 170 to 180
    stdout, err := Scripts[id].Cmd.StdoutPipe()
    Mu.Unlock()
    if err != nil {
        log.Println("Error getting stdout pipe")
        log.Println(err.Error())
        log.Println("Error getting stdout pipe: " + err.Error())
    }
    Mu.Lock()
    stderr, err := Scripts[id].Cmd.StderrPipe()
    Mu.Unlock()
    if err != nil {
        log.Println("Error getting stderr pipe")
        log.Println(err.Error())
        log.Println("Error getting stderr pipe: " + err.Error())
    }

Copilot AI Apr 9, 2026


If StdoutPipe() / StderrPipe() fails, the function logs the error but continues. That can pass a nil reader into copyOutput / copyOutputErr and lead to panics or hangs later. Return early with the pipe error (and consider cleaning up Scripts[id]) instead of continuing.

Comment on lines 189 to +205
    response := ""
    go copyOutput(stdout, &response)
    go copyOutput(stderr, &response)
    err = Scripts[id].Cmd.Wait()
    if err != nil {
        log.Println("Error waiting for command to finish")
        return "", err
    // stderrBuf collects every line from stderr so we can surface it on early
    // crashes (e.g. ModuleNotFoundError) that never reach send_response().
    var stderrBuf strings.Builder
    var stderrMu sync.Mutex

    var wg sync.WaitGroup
    wg.Add(2)
    go func() {
        defer wg.Done()
        copyOutput(stdout, &response)
    }()
    go func() {
        defer wg.Done()
        copyOutputErr(stderr, &response, &stderrBuf, &stderrMu)
    }()
    wg.Wait()

Copilot AI Apr 9, 2026


response is written from both stdout and stderr goroutines without synchronization (*response = ...). This introduces a data race and nondeterministic behavior if both streams emit a response-ready signal or if a crash path writes while stdout is still running. Protect response with a mutex/atomic or send the response via a channel and only set it from a single goroutine.

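The fix pattern the comment asks for, sketched in Python rather than Go: both stream readers take a lock before touching the shared response value, so only one writer mutates it at a time. (In the actual Go code this would be a sync.Mutex guarding *response, or a channel that delivers the response to a single owner; the Python threads and data below are illustrative only.)

```python
import threading

response = ""
response_lock = threading.Lock()


def copy_output(lines):
    """Append each line to the shared response under the lock."""
    global response
    for line in lines:
        with response_lock:  # serialize writes to the shared value
            response += line


t_out = threading.Thread(target=copy_output, args=(["out1\n", "out2\n"],))
t_err = threading.Thread(target=copy_output, args=(["err1\n"],))
t_out.start(); t_err.start()
t_out.join(); t_err.join()

# Interleaving order is nondeterministic, but no write is ever lost.
assert sorted(response.splitlines()) == ["err1", "out1", "out2"]
```

The key property is the same in either language: unsynchronized concurrent writes are a data race, while lock-guarded (or channel-owned) writes make every update visible and complete.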
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.

Comments suppressed due to low confidence (2)

renderer/components/flow/codeEditor.jsx:227

  • When a file load succeeds (or when an error without stack_trace occurs), the component doesn’t clear error/stackTrace. This can leave the UI stuck in the error view or show a stale stack trace from a previous failure. Clear error and stackTrace at the start of loadFileContent and/or in the success path, and ensure the string-error path also resets stackTrace.
        if (response.error) {
          if (typeof response.error === "object") {
            setError(response.error.message || "An error occurred while loading the file.")
            toast.error("Error loading file: " + (response.error.message || "Unknown error"))
            if (response.error.stack_trace) {
              setStackTrace(response.error.stack_trace)
            }
          } else if (typeof response.error === "string") {
            setError(response.error)
            toast.error("Error opening file: " + response.error)
          }
        }
        else {
          setContent(response.content)
          setMode(response.language)
          toast.success("Opened file successfully")
        }

pythonCode/med_libs/MEDml/nodes/ModelHandler.py:855

  • ModelHandler defines set_model twice. In Python the second definition overwrites the first, so the error handling and tuning-update logic in the first implementation becomes dead code. Remove one of the definitions (or merge their behavior) so there is a single authoritative set_model implementation.
    def set_model(self, model_id: str) -> None:
        try:
            model_obj = self.global_config_json['nodes'][model_id]
            self.config_json['data']['estimator'] = {
                "type": model_obj['data']['selection'],
                "settings": model_obj['data']['settings']
            }
            if self.isTuningEnabled:
                self.config_json['data']['internal']['settingsTuning'] = model_obj['data']['internal'].get('settingsTuning', {})
        except KeyError as e:
            print(f"ERROR: Failed to set model {model_id}. Missing key: {e}")
            raise ValueError(f"Model configuration for {model_id} not found in the global config.")
        except Exception as e:
            print(f"ERROR: An error occurred while setting the model: {e}")
            raise ValueError(f"An error occurred while setting model {model_id}.")

    def set_model(self, model_id: str) -> None:
        model_obj = self.global_config_json['nodes'][model_id]
        self.config_json['data']['estimator'] = {
            "type": model_obj['data']['selection'],
            "settings": model_obj['data']['settings']
        }


Comment on lines 75 to +95
  useEffect(() => {
    ipcRenderer.invoke("get-settings").then((receivedSettings) => {
      console.log("received settings", receivedSettings)
      setSettings(receivedSettings)
      if (pythonEmbedded.pythonEmbedded) {
        setCondaPath(pythonEmbedded.pythonEmbedded)
      } else if (receivedSettings?.condaPath) {
        setCondaPath(receivedSettings?.condaPath)
      }
      if (receivedSettings?.seed) {
        setSeed(receivedSettings?.seed)
      }
    })
    // ipcRenderer.invoke("server-is-running").then((status) => {
    //   setServerIsRunning(status)
    //   console.log("server is running", status)
    // })
    checkMongoIsRunning()
    checkServer()
    getJupyterStatus()
  }, [])
  }, [pythonEmbedded])
Comment on lines +54 to +65
      // FIX: if overall_metrics exists, use these new metrics (optimal threshold)
      const overallMetrics = selectedResults?.data?.overall_metrics
      if (overallMetrics) {
        Object.keys(overallMetrics).forEach((key) => {
          modifiedRow[key] = overallMetrics[key].mean
        })
      }

      modifiedRow["Parameters"] = model.params
      modifiedRow["OverallMetrics"] = overallMetrics
      modifiedRow = Object.assign({ Name: model.name }, modifiedRow)
      allModelsData.push(modifiedRow)
Comment on lines 85 to +98
        self.optimize_threshold = self.config_json['data']['internal'].get('optimizeThreshold', False)
        if self.optimize_threshold:
            self.threshold_optimization_metric = self.config_json['data']['internal'].get('threshOptimizationMetric', 'Accuracy')
            # Normalize the metric name for PyCaret
            METRIC_NAME_MAP = {
                'Youden': 'Youden Index',
                'BAC': 'Balanced Accuracy',
                'Specificity': 'Specificity',
                'NPV': 'NPV',
            }
            self.threshold_optimization_metric = METRIC_NAME_MAP.get(
                self.threshold_optimization_metric,
                self.threshold_optimization_metric
            )
@MahdiAll99 merged commit db9ac0a into develop Apr 10, 2026