Conversation
Pull request overview
This PR enables the Autograder pipeline’s Feedback step for web-initiated grading jobs, so web submissions will now produce and persist user-facing feedback alongside scores and result trees.
Changes:
- Enable feedback generation in the web grading pipeline (`include_feedback=True`).
- Provide an (empty) feedback configuration dictionary to satisfy the feedback subsystem.
- Pass `locale` into `build_pipeline()`.
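Taken together, the changes amount to a call along these lines. This is only a sketch: `build_pipeline`'s real signature is not shown in the PR, so the stand-in body and any keyword names beyond those quoted above are assumptions.

```python
# Stand-in for the real autograder pipeline factory; the step names and
# return shape are assumptions for illustration.
def build_pipeline(template_name, include_feedback=False,
                   feedback_config=None, locale="en"):
    steps = ["setup", "run_tests", "score"]
    if include_feedback:
        steps.append("feedback")  # the Feedback step this PR enables
    return {"template": template_name, "steps": steps,
            "feedback_config": feedback_config or {}, "locale": locale}

# Web grading now enables feedback, passes an empty config, and forwards locale.
pipeline = build_pipeline(
    template_name="python-basic",  # hypothetical template name
    include_feedback=True,
    feedback_config={},
    locale="en",
)
```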
ArthurCRodrigues
left a comment
`include_feedback` should be an optional flag.
It was False by default previously because FeedbackStep wasn't actually ready yet.
But now that we have it, requests should come in with include_feedback toggled true or false.
```python
external_user_id: str
submission_files: dict
locale: str = "en"
```
Basically, add the `include_feedback` attribute here.
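Applied to the fields shown in the diff, the reviewer's suggestion would look roughly like this. The class name and the `False` default are assumptions; only the three existing fields and the new flag come from the thread.

```python
from dataclasses import dataclass

@dataclass
class GradingRequest:  # hypothetical name for the model shown in the diff
    external_user_id: str
    submission_files: dict
    locale: str = "en"
    include_feedback: bool = False  # new optional flag suggested here

# Callers that do not care about feedback keep working unchanged.
req = GradingRequest(external_user_id="u-123",
                     submission_files={"main.py": "print('hi')"})
```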
web/service/grading_service.py (Outdated)
```python
"""Build the autograder pipeline and run it in a thread."""
pipeline = build_pipeline(
    template_name=request.template_name,
    include_feedback=True,
```
and then use `include_feedback=request.include_feedback,`
Important: this will change the API contract!
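Propagating the flag means the service stops hardcoding `True` and lets the caller decide. A minimal sketch; only `include_feedback=request.include_feedback` comes from the review, everything else is an assumption:

```python
# Minimal stand-ins; the real request model and pipeline factory live in
# web/service/grading_service.py and are not fully shown in the PR.
class GradingRequest:
    def __init__(self, template_name, include_feedback=False):
        self.template_name = template_name
        self.include_feedback = include_feedback

def build_pipeline(template_name, include_feedback):
    return {"template": template_name, "feedback_enabled": include_feedback}

def run_grading(request):
    return build_pipeline(
        template_name=request.template_name,
        include_feedback=request.include_feedback,  # was hardcoded to True
    )

# A caller that opts out of feedback now actually gets no feedback step.
opted_out = run_grading(GradingRequest("java-oop"))
```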
OK, I'm also implementing the feedback config.
Introduce feedback configuration and targeted learning resources throughout the grading stack:
- DefaultReporter now filters online resources so that resources with linked_tests are only shown when related tests fail; the reporter API was adjusted to pass failed test names.
- ReporterService converts incoming online_content dicts to LearningResource objects.
- Persist feedback options by adding feedback_config (JSON) and include_feedback (bool) to the GradingConfiguration model, and wire those fields through the API (create submissions / grading_configs) and GradingRequest.
- Added an Alembic migration for the new DB columns, and updated API documentation and Pydantic schemas to document the feedback_config and include_feedback fields.
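The resource-filtering rule in DefaultReporter could look like the sketch below. The `LearningResource` shape and the function name are assumptions; the rule itself (resources with `linked_tests` only appear when a related test failed) is from the description above.

```python
from dataclasses import dataclass, field

@dataclass
class LearningResource:
    # Hypothetical minimal shape; the real class lives in the grading stack.
    title: str
    url: str
    linked_tests: list = field(default_factory=list)

def filter_resources(resources, failed_test_names):
    """Keep untargeted resources always; targeted ones only on related failures."""
    shown = []
    for res in resources:
        if not res.linked_tests:
            shown.append(res)  # general resource: always shown
        elif set(res.linked_tests) & set(failed_test_names):
            shown.append(res)  # targeted: at least one linked test failed
    return shown

resources = [
    LearningResource("Loops refresher", "https://example.com/loops",
                     linked_tests=["test_loop_sum"]),
    LearningResource("Style guide", "https://example.com/style"),
]
# Only the general resource survives when the linked test passed.
visible = filter_resources(resources, failed_test_names=["test_other"])
```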
@ArthurCRodrigues added support for customizable feedback reports in the grading system. It introduces new fields. Summary:
- Feedback report configuration and logic
- Database migration
- API and documentation updates
- Other technical improvements
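The database migration mentioned above would add the two columns roughly as follows. This is a hedged sketch: the revision identifiers and the table name are assumptions; only the column names and types (feedback_config as JSON, include_feedback as bool) come from the PR description.

```python
# Hypothetical Alembic migration for the feedback columns.
from alembic import op
import sqlalchemy as sa

revision = "feedback_cols"      # placeholder revision id
down_revision = None            # placeholder; the real chain differs

def upgrade():
    op.add_column("grading_configurations",
                  sa.Column("feedback_config", sa.JSON(), nullable=True))
    op.add_column("grading_configurations",
                  sa.Column("include_feedback", sa.Boolean(),
                            nullable=False, server_default=sa.false()))

def downgrade():
    op.drop_column("grading_configurations", "include_feedback")
    op.drop_column("grading_configurations", "feedback_config")
```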
This pull request refactors the grading workflow to use a new `GradingRequest` dataclass for improved clarity and maintainability. It also restructures the grading logic into smaller helper functions for better error handling and separation of concerns.

API and Data Structure Improvements:
- Introduced a `GradingRequest` dataclass to encapsulate all parameters needed for grading a submission, replacing the previous use of multiple function arguments. (web/service/grading_service.py)
- Updated the submission endpoint (`create_submission`) to create and pass a `GradingRequest` object to the grading task, instead of passing individual parameters. (web/api/v1/submissions.py) [1] [2] [3]
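The endpoint refactor described above can be sketched as follows. The payload keys and stand-in bodies are assumptions; the shape (one request object handed to the task instead of many loose arguments) is what the PR describes.

```python
from dataclasses import dataclass

@dataclass
class GradingRequest:
    # Minimal stand-in for the dataclass introduced by the PR.
    external_user_id: str
    submission_files: dict
    template_name: str
    locale: str = "en"

def grade_submission(request: GradingRequest) -> str:
    # Stand-in for the grading task; it now takes the whole request object.
    return f"grading {request.external_user_id} with {request.template_name}"

def create_submission(payload: dict) -> str:
    # The endpoint builds one GradingRequest instead of forwarding
    # individual arguments (payload key names are assumptions).
    request = GradingRequest(
        external_user_id=payload["user_id"],
        submission_files=payload["files"],
        template_name=payload["template"],
        locale=payload.get("locale", "en"),
    )
    return grade_submission(request)

result = create_submission({"user_id": "u-1", "files": {}, "template": "python-basic"})
```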
Grading Logic Refactoring:
- Refactored the grading task (`grade_submission`) to accept a `GradingRequest` and split the logic into helper functions: `_run_pipeline`, `_persist_success`, and `_persist_failure`, improving readability and error handling. (web/service/grading_service.py)
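The helper split might be structured like this. A sketch under assumptions: only the three helper names come from the PR; their bodies and the return shapes are illustrative stand-ins.

```python
def _run_pipeline(request):
    # Stand-in: run the autograder and return (score, result tree).
    if not request.get("submission_files"):
        raise ValueError("empty submission")
    return 95, {"name": "root", "children": []}

def _persist_success(submission_id, score, tree):
    # Stand-in for writing a successful result to the database.
    return {"id": submission_id, "status": "done", "score": score, "tree": tree}

def _persist_failure(submission_id, error):
    # Stand-in for recording a failed grading run.
    return {"id": submission_id, "status": "failed", "error": str(error)}

def grade_submission(submission_id, request):
    # Orchestrator: each concern lives in its own helper, mirroring the
    # _run_pipeline/_persist_success/_persist_failure split in the PR.
    try:
        score, tree = _run_pipeline(request)
    except Exception as exc:
        return _persist_failure(submission_id, exc)
    return _persist_success(submission_id, score, tree)

ok = grade_submission("s-1", {"submission_files": {"a.py": ""}})
bad = grade_submission("s-2", {"submission_files": {}})
```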
Feedback and Result Handling: (web/service/grading_service.py)

Minor Cleanups:
- `_node_to_dict` utility function for serializing result trees. (web/service/grading_service.py)