
ENG-2851: Add LLM model override setting to website monitor configuration#7527

Merged
speaker-ender merged 11 commits into main from ENG-2851-llm-monitor-config
Mar 4, 2026

Conversation

@thabofletcher
Contributor

@thabofletcher thabofletcher commented Feb 28, 2026

Ticket ENG-2851

Description Of Changes

Adds LLM model override configuration to website and database monitor forms, with a shared LlmModelSelector component that encapsulates the server capability check and form fields.

Code Changes

  • Add LlmModelSelector component that:
    • Fetches server config to check if LLM classifier is enabled
    • Shows "Use LLM classifier" switch toggle (when showSwitch=true)
    • Conditionally shows model override input field
    • Resets switch to false when server doesn't support LLM
  • Add alphaWebMonitorLlmClassification feature flag for website monitors
  • Update ConfigureWebsiteMonitorForm to use shared component
  • Update ConfigureMonitorForm (database monitors) to use shared component
  • Update AssessmentSettingsModal to use shared component with showSwitch=false
  • Add comprehensive tests for LlmModelSelector
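
The bullets above describe the component's gating rules. As a rough standalone sketch of that logic (all names here are hypothetical, not the actual component API), the rendering decisions can be modeled as a pure function:

```typescript
// Illustrative only: models the LlmModelSelector rendering rules listed above.
interface SelectorState {
  skip: boolean; // true when the feature flag hides the selector entirely
  showSwitch: boolean; // true for monitor forms, false for the settings modal
  serverSupportsLlm: boolean; // result of the server capability check
  switchOn: boolean; // current "Use LLM classifier" form value
}

interface SelectorRender {
  renderSwitch: boolean;
  renderModelField: boolean;
  resetSwitchToFalse: boolean; // reset the form value when the server lacks support
}

function resolveSelector(s: SelectorState): SelectorRender {
  if (s.skip) {
    return { renderSwitch: false, renderModelField: false, resetSwitchToFalse: false };
  }
  return {
    renderSwitch: s.showSwitch,
    // With a switch, the model field needs server support and the toggle on;
    // without one (settings modal), the field is always rendered.
    renderModelField: s.showSwitch ? s.serverSupportsLlm && s.switchOn : true,
    resetSwitchToFalse: s.showSwitch && !s.serverSupportsLlm,
  };
}
```

This is a sketch of the described behavior, not the component's implementation; the real component also wires these decisions into the Ant Design form instance.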

Steps to Confirm

  1. Enable alphaWebMonitorLlmClassification feature flag
  2. Navigate to website monitor configuration
  3. Verify "Use LLM classifier" switch and model override field appear
  4. Verify fields are disabled when server doesn't support LLM
  5. Navigate to database monitor configuration
  6. Verify same LLM fields appear and work correctly
  7. Navigate to Assessment Settings modal
  8. Verify model override fields appear (without switch toggle)

Pre-Merge Checklist

  • Issue requirements met
  • All CI pipelines succeeded
  • CHANGELOG.md updated
    • Add a db-migration label to the entry if your change includes a database migration
    • Add a high-risk label to the entry if your change includes a high-risk change (i.e. potential for performance impact or unexpected regression) that should be flagged
    • Updates unreleased work already in Changelog, no new entry necessary
  • UX feedback:
    • All UX related changes have been reviewed by a designer
    • No UX review needed
  • Followup issues:
    • Followup issues created
    • No followup issues
  • Database migrations:
    • Ensure that your downrev is up to date with the latest revision on main
    • Ensure that your downgrade() migration is correct and works
      • If a downgrade migration is not possible for this change, please call this out in the PR description!
    • No migrations
  • Documentation:
    • Documentation complete, PR opened in fidesdocs
    • Documentation issue created in fidesdocs
    • If there are any new client scopes created as part of the pull request, remember to update public-facing documentation that references our scope registry
    • No documentation updates required

Summary by CodeRabbit

  • New Features
    • Added LLM classification support for website monitor assets under alpha feature flag.
    • Introduced standardized UI component for configuring LLM model selection in monitor and assessment forms.

…2851]

- Add alphaWebMonitorLlmClassification feature flag
- Add LLM model override field to ConfigureWebsiteMonitorForm
- Conditionally show field based on feature flag and server capability
- Field allows specifying custom LLM model for website asset classification

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@vercel
Contributor

vercel bot commented Feb 28, 2026

The latest updates on your projects.

fides-plus-nightly: Ready (Preview, Comment), updated Mar 4, 2026 3:06pm (UTC)
fides-privacy-center: Ignored (1 skipped deployment), updated Mar 4, 2026 3:06pm (UTC)

Request Review

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create reusable LlmModelOverrideField component in common/form
- Add comprehensive tests for the shared component
- Update ConfigureWebsiteMonitorForm to use shared component
- Update ConfigureMonitorForm (database monitor) to use shared component
- Update AssessmentSettingsModal (privacy assessments) to use shared component

The component handles:
- Label with tooltip
- Disabled state with alternative tooltip message
- Placeholder for showing default model
- Consistent test IDs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Consolidate LLM classifier UI into single reusable component
- Component encapsulates server config query for LLM capability
- showSwitch prop controls whether toggle is shown (true for monitors, false for settings)
- Used by ConfigureMonitorForm, ConfigureWebsiteMonitorForm, and AssessmentSettingsModal
- Delete old LlmModelOverrideField in favor of unified component
- Add comprehensive tests covering all use cases

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The component now resets the form value to false when:
- Server config indicates LLM classifier is not supported
- This ensures we don't show a checked toggle for an unavailable feature
- Model override field also hidden when server doesn't support

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- LlmModelSelector now has a default placeholder showing an example model
- DB monitor explicitly overrides with empty string (preserving original behavior)
- Website monitor uses the default

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
<LlmModelSelector
skip={!showLLMOption}
useLlmClassifier={!!form.getFieldValue("use_llm_classifier")}
modelOverridePlaceholder=""
Contributor Author

This preserves the existing behavior where this is an empty field, but we should consider providing an example here like the others do so users have a way of knowing what the format is

@thabofletcher thabofletcher marked this pull request as ready for review February 28, 2026 22:08
@thabofletcher thabofletcher requested a review from a team as a code owner February 28, 2026 22:08
@thabofletcher thabofletcher requested review from gilluminate and removed request for a team February 28, 2026 22:08
@greptile-apps
Contributor

greptile-apps bot commented Feb 28, 2026

Greptile Summary

This PR adds LLM model override configuration to website and database monitor forms by creating a well-designed shared LlmModelSelector component. The implementation consolidates duplicate LLM configuration UI across multiple forms and adds new LLM support for website monitors behind the alphaWebMonitorLlmClassification feature flag.

Key changes:

  • Created reusable LlmModelSelector component with comprehensive test coverage that handles server capability checking and conditional rendering
  • Refactored ConfigureMonitorForm (database monitors) to use the shared component, removing duplicate code
  • Added LLM classification support to ConfigureWebsiteMonitorForm with proper feature flag gating
  • Refactored AssessmentSettingsModal to use the shared component with showSwitch=false option
  • Added comprehensive tests covering all component states and use cases

Code quality:

  • The shared component demonstrates good separation of concerns with clear props interface
  • Test coverage is thorough, including edge cases like server support checks and loading states
  • The refactoring successfully reduces code duplication across three different forms

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • The changes are well-structured with excellent test coverage, proper feature flag gating, and successful code consolidation. The single style issue identified is minor and doesn't affect functionality.
  • No files require special attention

Important Files Changed

Filename Overview
clients/admin-ui/src/features/common/form/LlmModelSelector.tsx Well-designed shared component with clear props interface, server capability checking, and proper state management
clients/admin-ui/tests/features/common/form/LlmModelSelector.test.tsx Comprehensive test coverage for all component states and use cases including server support, loading, and multiple integration scenarios
clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx Added LLM classification support with feature flag gating. Minor style issue with form-only fields in payload spread.

Last reviewed commit: d84346e


@greptile-apps greptile-apps bot left a comment


8 files reviewed, 1 comment


Comment on lines 164 to 172
  const payload: WebsiteMonitorConfig = {
    ...monitor,
    ...values,
    ...executionInfo,
    key: monitor?.key,
-   classify_params: monitor?.classify_params || {},
+   classify_params: classifyParams,
    datasource_params: values.datasource_params || {},
    connection_config_key: integrationId,
  };
Contributor

Spreading values includes form-only fields at the payload's top level. Consider destructuring out the LLM-related form fields before spreading, similar to how ConfigureMonitorForm.tsx explicitly selects payload fields.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Contributor Author

classifyParams was already destructured. Added destructuring for values.

Contributor Author

@greptileai is that what you meant?

Contributor

Not quite. While you correctly extracted classifyParams, the issue is that ...values still spreads form-only fields like use_llm_classifier and llm_model_override into the payload's top level.

Looking at ConfigureMonitorForm.tsx (lines 154-175), notice how it explicitly selects only the fields needed for the payload:

const payload: EditableMonitorConfig = isEditing
  ? {
      ...monitor,
      ...executionInfo,
      name: values.name,
      shared_config_id: values.shared_config_id,
      classify_params: classifyParams,
      stewards: values.stewards,
    }
  : { /* ... */ };

It doesn't spread ...values at all. For ConfigureWebsiteMonitorForm, you should either:

Option 1: Destructure out form-only fields

const { use_llm_classifier, llm_model_override, ...payloadValues } = values;
const payload: WebsiteMonitorConfig = {
  ...monitor,
  ...payloadValues,
  ...executionInfo,
  key: monitor?.key,
  classify_params: classifyParams,
  datasource_params: values.datasource_params || {},
  connection_config_key: integrationId,
};

Option 2: Explicitly select fields (preferred for consistency)
Follow the same pattern as the database monitor form and explicitly pick only the fields that belong in the payload.

Address Greptile feedback: destructure use_llm_classifier and
llm_model_override before spreading values into payload to avoid
sending form-internal fields to the API.
Contributor

@speaker-ender speaker-ender left a comment


Small suggestions to fix some of the linting + refinements but overall looks good

@speaker-ender speaker-ender force-pushed the ENG-2851-llm-monitor-config branch from d4d5f96 to 0b4a3fc Compare March 4, 2026 15:02
@coderabbitai

coderabbitai bot commented Mar 4, 2026

📝 Walkthrough

Walkthrough

Introduces LLM classification support for website monitor assets via a new alpha feature flag. Adds a reusable LlmModelSelector component for configuring LLM settings and integrates it into website monitor and privacy assessment configuration forms. Includes comprehensive tests and feature flag setup.

Changes

  • New LLM Model Selector Component (clients/admin-ui/src/features/common/form/LlmModelSelector.tsx, clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx): Introduces a reusable component for LLM classifier configuration with conditional rendering based on server capability and switch state. Comprehensive test suite covering skip behavior, server support variations, loading states, and use cases for website and database monitors.
  • Feature Flag Configuration (clients/admin-ui/src/flags.json): Adds the new feature flag alphaWebMonitorLlmClassification, enabled in development and disabled in production, controlling visibility of LLM classification in website monitors.
  • Website Monitor Form Integration (clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx): Extends form state with use_llm_classifier and llm_model_override fields; renders LlmModelSelector conditionally based on the feature flag; updates payload construction to handle classify_params with LLM settings.
  • Configure Monitor Base Form (clients/admin-ui/src/features/integrations/configure-monitor/ConfigureMonitorForm.tsx): Removes inlined LLM classifier UI logic by replacing it with the centralized LlmModelSelector component; simplifies server capability checking by delegating to the selector.
  • Privacy Assessments Modal (clients/admin-ui/src/features/privacy-assessments/AssessmentSettingsModal.tsx, clients/admin-ui/src/features/privacy-assessments/AssessmentSettingsModal.test.tsx): Replaces two inline input fields for assessment and chat model overrides with LlmModelSelector components; adds a configuration mock to enable LLM settings in tests.
  • Changelog (changelog/7527.yaml): Documents the feature addition for LLM classification support in website monitor assets behind the alpha feature flag.
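
Based on the summary above, the new entry in clients/admin-ui/src/flags.json presumably looks something like the following. The exact shape of the file is an assumption; only the flag name and the development/production values come from this PR:

```json
{
  "alphaWebMonitorLlmClassification": {
    "development": true,
    "production": false
  }
}
```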

Sequence Diagram

sequenceDiagram
    participant ConfigForm as ConfigureWebsiteMonitorForm
    participant Selector as LlmModelSelector
    participant Query as useGetConfigurationSettingsQuery
    participant FormContext as Form Context
    participant Server as Server Config

    ConfigForm->>ConfigForm: Check feature flag<br/>(alphaWebMonitorLlmClassification)
    ConfigForm->>Selector: Render with showSwitch=true<br/>and skip=!featureEnabled
    
    Selector->>Query: Fetch configuration settings
    Query->>Server: Query llm_classifier_enabled
    Server-->>Query: Return server support status
    Query-->>Selector: Provide config data
    
    Selector->>FormContext: Render Switch<br/>(enabled if server supports)
    FormContext-->>ConfigForm: User toggles switch
    
    Selector->>Selector: Check switch state & server support
    alt Server supports & Switch ON
        Selector->>FormContext: Show model override field
    else Server does not support OR Switch OFF
        Selector->>FormContext: Hide model override field
    end
    
    ConfigForm->>ConfigForm: On submit, construct classify_params<br/>with llm_model_override if enabled

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A hop through features, flags so fine,
LLM classifiers now align!
Website monitors gain a brain,
To sort through assets with might and main,
Wrapped in alpha, ready to play—
Selective selection saves the day! 🌟

🚥 Pre-merge checks: ✅ 3 passed
  • Title check: Passed. The title clearly and specifically describes the main change: adding an LLM model override setting to website monitor configuration, aligning with the PR's primary objective.
  • Description check: Passed. The description covers all required template sections with sufficient detail: ticket reference, code changes summary, confirmation steps, and completed pre-merge checklist items.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx (1)

265-346: Add a regression case for showSwitch={false} with server support disabled.

Current no-switch tests only validate the enabled-server path. Please add a companion case for llm_classifier_enabled: false to lock expected behavior for assessment/chat override flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx`
around lines 265 - 346, Add a regression test that mocks
mockUseGetConfigurationSettingsQuery to return
detection_discovery.llm_classifier_enabled: false and verifies LlmModelSelector
still behaves correctly when showSwitch={false}; specifically, create a new test
(or describe block) that renders <LlmModelSelector showSwitch={false} /> (and
one with custom props like modelOverrideName/modelOverrideTestId) and asserts
screen.queryByTestId("input-use_llm_classifier") is not present and that the
model override input (screen.getByTestId("input-llm_model_override") or custom
test id like "input-assessment-model") is rendered and has the expected
attributes (placeholder/enabled) to lock the assessment/chat override flow
behavior when server support is disabled.


📥 Commits

Reviewing files that changed from the base of the PR and between 789ec53 and 0b4a3fc.

📒 Files selected for processing (8)
  • changelog/7527.yaml
  • clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx
  • clients/admin-ui/src/features/common/form/LlmModelSelector.tsx
  • clients/admin-ui/src/features/integrations/configure-monitor/ConfigureMonitorForm.tsx
  • clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx
  • clients/admin-ui/src/features/privacy-assessments/AssessmentSettingsModal.test.tsx
  • clients/admin-ui/src/features/privacy-assessments/AssessmentSettingsModal.tsx
  • clients/admin-ui/src/flags.json

Comment on lines +72 to +75
const { data: appConfig, isLoading } = useGetConfigurationSettingsQuery(
  { api_set: false },
  { skip },
);

⚠️ Potential issue | 🟠 Major

showSwitch={false} still hard-gates model editing on classifier support.

Line 148 disables the model input whenever llm_classifier_enabled is false. That couples no-switch usages (e.g., assessment/chat model overrides) to monitor-classifier capability and can block valid edits.

💡 Proposed fix (gate server capability only when the switch flow is used)
 export const LlmModelSelector = ({
   skip = false,
   showSwitch = true,
@@
 }: LlmModelSelectorProps) => {
   const form = Form.useFormInstance();
+  const enforceServerCapability = showSwitch;
 
   // Fetch server configuration to check LLM capability
   const { data: appConfig, isLoading } = useGetConfigurationSettingsQuery(
     { api_set: false },
-    { skip },
+    { skip: skip || !enforceServerCapability },
   );
 
-  const serverSupportsLlmClassifier =
-    !!appConfig?.detection_discovery?.llm_classifier_enabled;
+  const serverSupportsLlmClassifier = enforceServerCapability
+    ? !!appConfig?.detection_discovery?.llm_classifier_enabled
+    : true;
@@
-    if (!skip && !isLoading && !serverSupportsLlmClassifier && showSwitch) {
+    if (!skip && !isLoading && !serverSupportsLlmClassifier && showSwitch) {
       form?.setFieldValue(switchName, false);
     }

Also applies to: 77-79, 102-104, 133-149

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@clients/admin-ui/src/features/common/form/LlmModelSelector.tsx` around lines
72 - 75, The model input and switch are currently disabled whenever
appConfig.llm_classifier_enabled is false, which incorrectly blocks edits even
when the switch UI is not used; update the gating logic in the LlmModelSelector
component so that server classifier capability is only enforced when the switch
flow is active (i.e., wrap checks of appConfig?.llm_classifier_enabled with the
showSwitch prop). Concretely, replace conditions like
!appConfig?.llm_classifier_enabled with (showSwitch &&
!appConfig?.llm_classifier_enabled) in the places that disable the model
input/switch (references: LlmModelSelector component, the disabled/readonly
checks for the model input and the switch rendering code that currently uses
appConfig.llm_classifier_enabled), leaving fetch usage of
useGetConfigurationSettingsQuery unchanged.

Comment on lines +117 to +119
// Check if monitor currently uses LLM classifier
const monitorUsesLlmClassifier =
  !!monitor?.classify_params?.llm_model_override;

⚠️ Potential issue | 🟠 Major

LLM-enabled state is inferred with truthiness and can flip off for empty-string overrides.

Line 119 uses !!llm_model_override, so a monitor saved with an empty override ("") is treated as disabled on reload. That desynchronizes the toggle from persisted classifier state.

✅ Minimal fix
-  const monitorUsesLlmClassifier =
-    !!monitor?.classify_params?.llm_model_override;
+  const monitorUsesLlmClassifier =
+    monitor?.classify_params?.llm_model_override != null;

Also applies to: 153-159

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx`
around lines 117 - 119, The current check for LLM-enabled uses
!!monitor?.classify_params?.llm_model_override which treats an empty string as
false and desyncs the toggle; change the logic to explicitly detect presence
(e.g., llm_model_override !== undefined && llm_model_override !== null) so an
empty-string override is considered "set". Update the monitorUsesLlmClassifier
assignment and the analogous checks in the other block (the code around the
classifier/toggle handling currently duplicated at the later section) to use the
explicit null/undefined check against
monitor?.classify_params?.llm_model_override.
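
To make the review point above concrete, a tiny standalone illustration (values are hypothetical, not from the PR):

```typescript
// Why the truthiness check and the explicit null check disagree
// for a monitor saved with an empty-string override.
const savedOverride: string | null | undefined = "";

const viaTruthiness = !!savedOverride; // empty string coerces to false
const viaNullCheck = savedOverride != null; // loose != null also excludes undefined
```

With truthiness, an empty-string override reads as "classifier off" on reload; the explicit null check keeps it counted as "set".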

@speaker-ender speaker-ender added this pull request to the merge queue Mar 4, 2026
Merged via the queue into main with commit a8084fc Mar 4, 2026
46 of 47 checks passed
@speaker-ender speaker-ender deleted the ENG-2851-llm-monitor-config branch March 4, 2026 16:06