ENG-2851: Add LLM model override setting to website monitor configuration#7527
speaker-ender merged 11 commits into main.
Conversation
…2851]
- Add alphaWebMonitorLlmClassification feature flag
- Add LLM model override field to ConfigureWebsiteMonitorForm
- Conditionally show field based on feature flag and server capability
- Field allows specifying custom LLM model for website asset classification

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create reusable LlmModelOverrideField component in common/form
- Add comprehensive tests for the shared component
- Update ConfigureWebsiteMonitorForm to use shared component
- Update ConfigureMonitorForm (database monitor) to use shared component
- Update AssessmentSettingsModal (privacy assessments) to use shared component

The component handles:
- Label with tooltip
- Disabled state with alternative tooltip message
- Placeholder for showing default model
- Consistent test IDs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Consolidate LLM classifier UI into single reusable component
- Component encapsulates server config query for LLM capability
- showSwitch prop controls whether toggle is shown (true for monitors, false for settings)
- Used by ConfigureMonitorForm, ConfigureWebsiteMonitorForm, and AssessmentSettingsModal
- Delete old LlmModelOverrideField in favor of unified component
- Add comprehensive tests covering all use cases

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
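The consolidated component's prop surface can be sketched from the names used in this thread. This is a hedged sketch, not the real implementation: the prop names come from the PR discussion and diffs, but the exact types are assumptions.

```typescript
// Hypothetical sketch of LlmModelSelector's props, inferred from the PR
// thread; types and optionality are assumptions.
interface LlmModelSelectorProps {
  // Skip rendering (and the server config query) entirely
  skip?: boolean;
  // true for monitors (show the enable toggle), false for settings modals
  showSwitch?: boolean;
  // Optional overrides for the model field's form name / test id / placeholder
  modelOverrideName?: string;
  modelOverrideTestId?: string;
  modelOverridePlaceholder?: string;
}

// Defaults matching the diff shown later in the review
// (skip = false, showSwitch = true)
const llmModelSelectorDefaults: LlmModelSelectorProps = {
  skip: false,
  showSwitch: true,
};
```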
The component now resets the form value to false when:
- Server config indicates LLM classifier is not supported
- This ensures we don't show a checked toggle for an unavailable feature
- Model override field is also hidden when the server doesn't support it

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
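The reset rule described in this commit can be sketched as a pure predicate. The helper name here is hypothetical; per the diffs later in the review, the real component evaluates the condition inline and calls `form.setFieldValue` when it holds.

```typescript
// Hypothetical helper capturing the reset condition described above:
// only force the toggle off once config has loaded, the toggle flow is in
// use, and the server reports no LLM classifier support.
function shouldForceSwitchOff(opts: {
  skip: boolean; // component rendering/query is skipped
  isLoading: boolean; // server config still loading
  serverSupportsLlmClassifier: boolean;
  showSwitch: boolean; // toggle flow in use (monitor forms)
}): boolean {
  const { skip, isLoading, serverSupportsLlmClassifier, showSwitch } = opts;
  return !skip && !isLoading && !serverSupportsLlmClassifier && showSwitch;
}
```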
Force-pushed from a142275 to 39b3312.
- LlmModelSelector now has a default placeholder showing an example model
- DB monitor explicitly overrides with empty string (preserving original behavior)
- Website monitor uses the default

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
```tsx
<LlmModelSelector
  skip={!showLLMOption}
  useLlmClassifier={!!form.getFieldValue("use_llm_classifier")}
  modelOverridePlaceholder=""
```
This preserves the existing behavior where this is an empty field, but we should consider providing an example here, like the other forms do, so users know what the format is.
Greptile Summary

This PR adds LLM model override configuration to website and database monitor forms by creating a well-designed shared component.

Key changes:
Code quality:
Confidence Score: 5/5
Important Files Changed
Last reviewed commit: d84346e
```diff
 const payload: WebsiteMonitorConfig = {
   ...monitor,
   ...values,
   ...executionInfo,
   key: monitor?.key,
-  classify_params: monitor?.classify_params || {},
+  classify_params: classifyParams,
   datasource_params: values.datasource_params || {},
   connection_config_key: integrationId,
 };
```
Spreading values includes form-only fields at the payload's top level. Consider destructuring out the LLM-related form fields before spreading, similar to how ConfigureMonitorForm.tsx explicitly selects payload fields.
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
classifyParams was already destructured. Added destructuring for values.
Not quite. While you correctly extracted classifyParams, the issue is that ...values still spreads form-only fields like use_llm_classifier and llm_model_override into the payload's top level.
Looking at ConfigureMonitorForm.tsx (lines 154-175), notice how it explicitly selects only the fields needed for the payload:
```tsx
const payload: EditableMonitorConfig = isEditing
  ? {
      ...monitor,
      ...executionInfo,
      name: values.name,
      shared_config_id: values.shared_config_id,
      classify_params: classifyParams,
      stewards: values.stewards,
    }
  : { /* ... */ };
```

It doesn't spread `...values` at all. For ConfigureWebsiteMonitorForm, you should either:
Option 1: Destructure out form-only fields
```tsx
const { use_llm_classifier, llm_model_override, ...payloadValues } = values;
const payload: WebsiteMonitorConfig = {
  ...monitor,
  ...payloadValues,
  ...executionInfo,
  key: monitor?.key,
  classify_params: classifyParams,
  datasource_params: values.datasource_params || {},
  connection_config_key: integrationId,
};
```

Option 2: Explicitly select fields (preferred for consistency)
Follow the same pattern as the database monitor form and explicitly pick only the fields that belong in the payload.
Address Greptile feedback: destructure use_llm_classifier and llm_model_override before spreading values into payload to avoid sending form-internal fields to the API.
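The destructuring described in that commit can be sketched as a standalone helper. This is a simplified illustration: the `FormValues` shape and helper name are assumptions; only the two stripped field names come from the thread.

```typescript
// Simplified, partly assumed form-values shape for illustration
type FormValues = {
  use_llm_classifier?: boolean;
  llm_model_override?: string;
  name?: string;
  datasource_params?: Record<string, unknown>;
};

// Hypothetical helper: strip the form-only LLM fields so they never reach
// the payload's top level when the remaining values are spread.
function stripLlmFormFields(values: FormValues) {
  const { use_llm_classifier, llm_model_override, ...payloadValues } = values;
  return payloadValues;
}
```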
speaker-ender left a comment:
Small suggestions to fix some of the linting + refinements but overall looks good
clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx
clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx
clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx
Force-pushed from d4d5f96 to 0b4a3fc.
📝 Walkthrough

Introduces LLM classification support for website monitor assets via a new alpha feature flag. Adds a reusable LlmModelSelector component for configuring LLM settings and integrates it into website monitor and privacy assessment configuration forms. Includes comprehensive tests and feature flag setup.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant ConfigForm as ConfigureWebsiteMonitorForm
    participant Selector as LlmModelSelector
    participant Query as useGetConfigurationSettingsQuery
    participant FormContext as Form Context
    participant Server as Server Config
    ConfigForm->>ConfigForm: Check feature flag<br/>(alphaWebMonitorLlmClassification)
    ConfigForm->>Selector: Render with showSwitch=true<br/>and skip=!featureEnabled
    Selector->>Query: Fetch configuration settings
    Query->>Server: Query llm_classifier_enabled
    Server-->>Query: Return server support status
    Query-->>Selector: Provide config data
    Selector->>FormContext: Render Switch<br/>(enabled if server supports)
    FormContext-->>ConfigForm: User toggles switch
    Selector->>Selector: Check switch state & server support
    alt Server supports & Switch ON
        Selector->>FormContext: Show model override field
    else Server does not support OR Switch OFF
        Selector->>FormContext: Hide model override field
    end
    ConfigForm->>ConfigForm: On submit, construct classify_params<br/>with llm_model_override if enabled
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx (1)
265-346: Add a regression case for `showSwitch={false}` with server support disabled.

Current no-switch tests only validate the enabled-server path. Please add a companion case for `llm_classifier_enabled: false` to lock expected behavior for assessment/chat override flows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx` around lines 265 - 346, Add a regression test that mocks mockUseGetConfigurationSettingsQuery to return detection_discovery.llm_classifier_enabled: false and verifies LlmModelSelector still behaves correctly when showSwitch={false}; specifically, create a new test (or describe block) that renders <LlmModelSelector showSwitch={false} /> (and one with custom props like modelOverrideName/modelOverrideTestId) and asserts screen.queryByTestId("input-use_llm_classifier") is not present and that the model override input (screen.getByTestId("input-llm_model_override") or custom test id like "input-assessment-model") is rendered and has the expected attributes (placeholder/enabled) to lock the assessment/chat override flow behavior when server support is disabled.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@clients/admin-ui/src/features/common/form/LlmModelSelector.tsx`:
- Around line 72-75: The model input and switch are currently disabled whenever
appConfig.llm_classifier_enabled is false, which incorrectly blocks edits even
when the switch UI is not used; update the gating logic in the LlmModelSelector
component so that server classifier capability is only enforced when the switch
flow is active (i.e., wrap checks of appConfig?.llm_classifier_enabled with the
showSwitch prop). Concretely, replace conditions like
!appConfig?.llm_classifier_enabled with (showSwitch &&
!appConfig?.llm_classifier_enabled) in the places that disable the model
input/switch (references: LlmModelSelector component, the disabled/readonly
checks for the model input and the switch rendering code that currently uses
appConfig.llm_classifier_enabled), leaving fetch usage of
useGetConfigurationSettingsQuery unchanged.
In
`@clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx`:
- Around line 117-119: The current check for LLM-enabled uses
!!monitor?.classify_params?.llm_model_override which treats an empty string as
false and desyncs the toggle; change the logic to explicitly detect presence
(e.g., llm_model_override !== undefined && llm_model_override !== null) so an
empty-string override is considered "set". Update the monitorUsesLlmClassifier
assignment and the analogous checks in the other block (the code around the
classifier/toggle handling currently duplicated at the later section) to use the
explicit null/undefined check against
monitor?.classify_params?.llm_model_override.
---
Nitpick comments:
In `@clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx`:
- Around line 265-346: Add a regression test that mocks
mockUseGetConfigurationSettingsQuery to return
detection_discovery.llm_classifier_enabled: false and verifies LlmModelSelector
still behaves correctly when showSwitch={false}; specifically, create a new test
(or describe block) that renders <LlmModelSelector showSwitch={false} /> (and
one with custom props like modelOverrideName/modelOverrideTestId) and asserts
screen.queryByTestId("input-use_llm_classifier") is not present and that the
model override input (screen.getByTestId("input-llm_model_override") or custom
test id like "input-assessment-model") is rendered and has the expected
attributes (placeholder/enabled) to lock the assessment/chat override flow
behavior when server support is disabled.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: de5f6c38-1418-4e78-bba4-895fecb8fa07
📒 Files selected for processing (8)
- changelog/7527.yaml
- clients/admin-ui/__tests__/features/common/form/LlmModelSelector.test.tsx
- clients/admin-ui/src/features/common/form/LlmModelSelector.tsx
- clients/admin-ui/src/features/integrations/configure-monitor/ConfigureMonitorForm.tsx
- clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx
- clients/admin-ui/src/features/privacy-assessments/AssessmentSettingsModal.test.tsx
- clients/admin-ui/src/features/privacy-assessments/AssessmentSettingsModal.tsx
- clients/admin-ui/src/flags.json
```tsx
const { data: appConfig, isLoading } = useGetConfigurationSettingsQuery(
  { api_set: false },
  { skip },
);
```
showSwitch={false} still hard-gates model editing on classifier support.
Line 148 disables the model input whenever llm_classifier_enabled is false. That couples no-switch usages (e.g., assessment/chat model overrides) to monitor-classifier capability and can block valid edits.
💡 Proposed fix (gate server capability only when the switch flow is used)
```diff
 export const LlmModelSelector = ({
   skip = false,
   showSwitch = true,
@@
 }: LlmModelSelectorProps) => {
   const form = Form.useFormInstance();
+  const enforceServerCapability = showSwitch;
   // Fetch server configuration to check LLM capability
   const { data: appConfig, isLoading } = useGetConfigurationSettingsQuery(
     { api_set: false },
-    { skip },
+    { skip: skip || !enforceServerCapability },
   );
-  const serverSupportsLlmClassifier =
-    !!appConfig?.detection_discovery?.llm_classifier_enabled;
+  const serverSupportsLlmClassifier = enforceServerCapability
+    ? !!appConfig?.detection_discovery?.llm_classifier_enabled
+    : true;
@@
-  if (!skip && !isLoading && !serverSupportsLlmClassifier && showSwitch) {
+  if (!skip && !isLoading && !serverSupportsLlmClassifier && showSwitch) {
     form?.setFieldValue(switchName, false);
   }
```

Also applies to: 77-79, 102-104, 133-149
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@clients/admin-ui/src/features/common/form/LlmModelSelector.tsx` around lines
72 - 75, The model input and switch are currently disabled whenever
appConfig.llm_classifier_enabled is false, which incorrectly blocks edits even
when the switch UI is not used; update the gating logic in the LlmModelSelector
component so that server classifier capability is only enforced when the switch
flow is active (i.e., wrap checks of appConfig?.llm_classifier_enabled with the
showSwitch prop). Concretely, replace conditions like
!appConfig?.llm_classifier_enabled with (showSwitch &&
!appConfig?.llm_classifier_enabled) in the places that disable the model
input/switch (references: LlmModelSelector component, the disabled/readonly
checks for the model input and the switch rendering code that currently uses
appConfig.llm_classifier_enabled), leaving fetch usage of
useGetConfigurationSettingsQuery unchanged.
```tsx
// Check if monitor currently uses LLM classifier
const monitorUsesLlmClassifier =
  !!monitor?.classify_params?.llm_model_override;
```
LLM-enabled state is inferred with truthiness and can flip off for empty-string overrides.
Line 119 uses !!llm_model_override, so a monitor saved with an empty override ("") is treated as disabled on reload. That desynchronizes the toggle from persisted classifier state.
✅ Minimal fix
```diff
-  const monitorUsesLlmClassifier =
-    !!monitor?.classify_params?.llm_model_override;
+  const monitorUsesLlmClassifier =
+    monitor?.classify_params?.llm_model_override != null;
```

Also applies to: 153-159
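A tiny illustration of the pitfall called out above, in plain TypeScript independent of the form code: an empty-string override is falsy under a truthiness check, but still counts as "set" under a null check.

```typescript
// An override persisted as an empty string
const emptyOverride: string | null | undefined = "";

// Truthiness check: empty string reads as "disabled" (toggle flips off)
const truthyCheck = !!emptyOverride;

// Presence check: empty string is still non-null, so it reads as "set"
const presenceCheck = emptyOverride != null;
```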
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@clients/admin-ui/src/features/integrations/configure-monitor/ConfigureWebsiteMonitorForm.tsx`
around lines 117 - 119, The current check for LLM-enabled uses
!!monitor?.classify_params?.llm_model_override which treats an empty string as
false and desyncs the toggle; change the logic to explicitly detect presence
(e.g., llm_model_override !== undefined && llm_model_override !== null) so an
empty-string override is considered "set". Update the monitorUsesLlmClassifier
assignment and the analogous checks in the other block (the code around the
classifier/toggle handling currently duplicated at the later section) to use the
explicit null/undefined check against
monitor?.classify_params?.llm_model_override.
Ticket ENG-2851
Description Of Changes
Adds LLM model override configuration to website and database monitor forms, with a shared `LlmModelSelector` component that encapsulates the server capability check and form fields.

Code Changes
- Create shared `LlmModelSelector` component that:
  - encapsulates the server config query for LLM capability
  - shows the enable toggle when `showSwitch=true`
- Add `alphaWebMonitorLlmClassification` feature flag for website monitors
- Update `ConfigureWebsiteMonitorForm` to use the shared component
- Update `ConfigureMonitorForm` (database monitors) to use the shared component
- Update `AssessmentSettingsModal` to use the shared component with `showSwitch=false`
- Add tests for `LlmModelSelector`

Steps to Confirm
- Enable the `alphaWebMonitorLlmClassification` feature flag

Pre-Merge Checklist
- `CHANGELOG.md` updated
- Rebased against `main`
- `downgrade()` migration is correct and works

Summary by CodeRabbit