
fix(ai): add optional reasoning support for GPT-5 compatible APIs#63

Merged
AmintaCCCP merged 2 commits into main from fix/ai-reasoning-support on Mar 25, 2026


Conversation

Owner

@AmintaCCCP AmintaCCCP commented Mar 25, 2026

Summary

  • add optional reasoningEffort support for OpenAI-compatible AI configs while keeping legacy behavior unchanged by default
  • send reasoning: { effort } for OpenAI Chat Completions and Responses only when configured
  • persist and sync the new field across frontend state, backend config APIs, import/export, and database schema migration
  • extend AI proxy timeout for responses/reasoning requests and relax connection test timeout for these cases

Why

Issue #62 reports that GPT-5 series models may require a reasoning parameter, which is not supported in v0.2.6.
PR #44 added Responses API support, but did not actually add reasoning configuration or request forwarding.

This PR fixes that gap while remaining backward compatible:

  • existing configs continue to work without sending reasoning
  • users who need GPT-5 style reasoning can opt in explicitly
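
The opt-in contract above can be sketched as a small request-body builder. This is an illustrative sketch, not the exact code from `aiService.ts` or the proxy; `buildChatBody` and its parameter names are invented for clarity:

```typescript
type ReasoningEffort = 'none' | 'low' | 'medium' | 'high' | 'xhigh';

interface ChatBody {
  model: string;
  messages: { role: string; content: string }[];
  reasoning?: { effort: ReasoningEffort };
}

// Attach the reasoning block only when an effort level is configured,
// so legacy configs keep sending exactly the same payload as before.
function buildChatBody(
  model: string,
  messages: ChatBody['messages'],
  reasoningEffort?: ReasoningEffort,
): ChatBody {
  const body: ChatBody = { model, messages };
  if (reasoningEffort) {
    body.reasoning = { effort: reasoningEffort };
  }
  return body;
}
```

Because the field is simply omitted when unset, existing configs are untouched and GPT-5 style endpoints receive `reasoning: { effort }` only on explicit opt-in.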

Validation

  • npm run build
  • server build could not be fully validated in this environment because the local better-sqlite3 install fails under node-gyp/python tooling; the changed server paths were instead reviewed for request/config/schema consistency

Closes #62

Summary by CodeRabbit

  • New Features
    • Added a "Reasoning Effort" option to AI configs with selectable levels (none, low, medium, high, xhigh); shown in the UI list and included in save/restore flows.
    • Proxy and AI request handling can include the reasoning block when configured and will use extended timeouts for reasoning-enabled requests.
  • Chores
    • Schema updated to persist the new reasoning effort field.


coderabbitai bot commented Mar 25, 2026

📝 Walkthrough

Adds persistent "reasoning effort" for AI configs and threads it through API routes, DB schema, frontend settings, service logic, and the proxy so OpenAI-compatible reasoning blocks can be conditionally injected and timeouts adjusted.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Database Schema**<br>`server/src/db/schema.ts` | Added `addColumnIfMissing` helper and introduced `reasoning_effort TEXT` on `ai_configs`, ensuring the column exists after schema init. |
| **API Routes**<br>`server/src/routes/configs.ts`, `server/src/routes/sync.ts`, `server/src/routes/proxy.ts` | Extended configs GET/POST/PUT to accept and return `reasoningEffort`; updated sync upsert to set `reasoning_effort`; proxy now optionally injects an OpenAI-compatible `reasoning: { effort: ... }` block into outbound bodies and selects longer timeouts when reasoning is set or `openai-responses` is used. |
| **Frontend Settings UI**<br>`src/components/SettingsPanel.tsx` | Added `reasoningEffort` to AI form state, initialized/reset paths, a new "Reasoning Effort" select (options: empty, none, low, medium, high, xhigh), save/restore mapping, and display in config summaries. |
| **Service Layer**<br>`src/services/aiService.ts` | Uses shared `AIApiType`; maps `reasoningEffort` (mapping `'minimal'` to `'low'`) to an OpenAI reasoning payload when appropriate; conditionally includes `reasoning` in OpenAI request bodies and extends testConnection/proxy timeouts when reasoning or `openai-responses` is used. |
| **Type Definitions**<br>`src/types/index.ts` | Added `AIApiType = 'openai' \| 'openai-responses' \| 'claude' \| 'gemini'` and `AIReasoningEffort = 'minimal' \| 'low' \| 'medium' \| 'high'` type aliases. |
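
The `addColumnIfMissing` migration step in the schema change can be approximated as follows. This is a hedged sketch: the real helper in `server/src/db/schema.ts` runs against better-sqlite3, whereas here the SQL decision is factored out as a pure function (`addColumnIfMissingSql` is an invented name) so the idempotency logic is visible on its own:

```typescript
// Sketch: decide whether an ALTER TABLE is needed, given the column names
// that `PRAGMA table_info(<table>)` reported for an existing table.
// Returning null makes the migration step idempotent across restarts.
function addColumnIfMissingSql(
  table: string,
  column: string,
  type: string,
  existingColumns: string[],
): string | null {
  if (existingColumns.includes(column)) return null;
  return `ALTER TABLE ${table} ADD COLUMN ${column} ${type}`;
}
```

A caller would feed this the pragma output and execute the returned statement only when it is non-null.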

Sequence Diagrams

```mermaid
sequenceDiagram
    participant User as User
    participant UI as SettingsPanel
    participant API as API Routes
    participant DB as Database
    participant Service as aiService
    participant Proxy as Proxy
    participant OpenAI as OpenAI API

    User->>UI: Select reasoning effort and save
    UI->>API: PUT /api/configs/ai/:id { reasoningEffort }
    API->>DB: UPDATE ai_configs SET reasoning_effort = ?
    DB-->>API: OK
    API-->>UI: Success

    User->>Service: Request AI response
    Service->>DB: SELECT ai_config WHERE id=...
    DB-->>Service: { apiType, reasoning_effort }
    Service->>Service: build request body (include reasoning if applicable)
    Service->>Proxy: POST with effective body, timeout based on reasoning/apiType
    Proxy->>OpenAI: Forward request (may include reasoning)
    OpenAI-->>Proxy: Response
    Proxy-->>Service: Response
    Service-->>User: Final result
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 A rabbit peeks into rows and code,
Adds a knob for how much thought to load,
From UI select to DB's quiet fight,
Requests now whisper "reason" into the night,
Hoppity hops — the AI ponders light.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title clearly describes the main change: adding optional reasoning support for GPT-5 compatible APIs, which aligns with the PR objective of supporting the reasoning parameter. |
| Linked Issues check | ✅ Passed | The PR addresses issue #62 by adding configurable reasoningEffort support across database, backend APIs, proxy forwarding, and frontend UI, enabling GPT-5 models to work with the reasoning parameter. |
| Out of Scope Changes check | ✅ Passed | All changes are directly related to adding reasoning parameter support: schema migration, config CRUD operations, proxy timeout adjustments, and UI controls for reasoning effort selection. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/types/index.ts`:
- Around line 70-71: Update the AIReasoningEffort union type to include the
missing OpenAI-supported values so it matches the API spec: modify the exported
type AIReasoningEffort (in src/types/index.ts) to add 'none' and 'xhigh'
alongside the existing 'minimal' | 'low' | 'medium' | 'high' entries.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9efb49c5-cb22-4458-8de5-1354b2162ec2

📥 Commits

Reviewing files that changed from the base of the PR and between 118afb6 and 74abfff.

📒 Files selected for processing (7)
  • server/src/db/schema.ts
  • server/src/routes/configs.ts
  • server/src/routes/proxy.ts
  • server/src/routes/sync.ts
  • src/components/SettingsPanel.tsx
  • src/services/aiService.ts
  • src/types/index.ts

Comment on lines +70 to +71
```typescript
export type AIApiType = 'openai' | 'openai-responses' | 'claude' | 'gemini';
export type AIReasoningEffort = 'minimal' | 'low' | 'medium' | 'high';
```


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

OpenAI API reasoning effort parameter values

💡 Result:

The OpenAI API reasoning effort parameter, named reasoning_effort in the Chat Completions API (/v1/chat/completions) and reasoning.effort in the Responses API (/v1/responses), controls the amount of computational effort reasoning models (e.g., o1 series, gpt-5 series) dedicate to internal reasoning before generating a response. Lower values prioritize speed and lower token usage; higher values enable more thorough reasoning at the cost of latency and tokens.

Supported values (model-dependent):

  • none: No reasoning (fastest, lowest tokens; default for gpt-5.1+ like gpt-5.4).
  • minimal: Minimal reasoning (some models).
  • low: Light reasoning.
  • medium: Balanced (default for pre-gpt-5.1 models).
  • high: Thorough reasoning.
  • xhigh: Maximum reasoning (later models).

Early o1 models supported low, medium, high. Newer models expand options. Use max_completion_tokens (not max_tokens) for reasoning models. Unsupported with some params like temperature when reasoning > none.

Example (Chat Completions): openai.chat.completions.create(model="o1", messages=[...], reasoning_effort="high")

Example (Responses): openai.responses.create(model="gpt-5.4", reasoning={"effort": "medium"}, input=[...])


Add missing reasoning effort values to match OpenAI API specification.

The AIReasoningEffort type is incomplete. The OpenAI API supports 'none' (default for newer models) and 'xhigh' (maximum reasoning for later models) in addition to the currently defined values. Update to:

```typescript
export type AIReasoningEffort = 'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types/index.ts` around lines 70 - 71, Update the AIReasoningEffort union
type to include the missing OpenAI-supported values so it matches the API spec:
modify the exported type AIReasoningEffort (in src/types/index.ts) to add 'none'
and 'xhigh' alongside the existing 'minimal' | 'low' | 'medium' | 'high'
entries.

@AmintaCCCP
Owner Author

Follow-up fix pushed.

The first version used minimal as a reasoning effort value, but the actual GPT-5 compatible endpoint here supports:

  • none
  • low
  • medium
  • high
  • xhigh

This update fixes that by:

  • changing the UI/type options to the supported set
  • normalizing the legacy saved value `minimal` -> `low`
  • applying the same normalization in the backend proxy, so existing stored configs do not keep failing before users re-save them

Frontend build re-checked with npm run build.
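
The normalization described here can be sketched as a single shared helper. Names are illustrative (`normalizeReasoningEffort` is not necessarily the identifier used in this PR, which may implement the mapping inline in the proxy and service):

```typescript
type AIReasoningEffort = 'none' | 'low' | 'medium' | 'high' | 'xhigh';

const ALLOWED_EFFORTS: ReadonlySet<string> = new Set(['none', 'low', 'medium', 'high', 'xhigh']);

// Map the legacy 'minimal' value to 'low' and drop anything unrecognized,
// so stale stored configs cannot keep producing upstream 4xx errors.
function normalizeReasoningEffort(value: unknown): AIReasoningEffort | undefined {
  if (typeof value !== 'string') return undefined;
  const v = value.trim().toLowerCase();
  if (v === 'minimal') return 'low';
  return ALLOWED_EFFORTS.has(v) ? (v as AIReasoningEffort) : undefined;
}
```

Applying the same helper in the frontend, backend, and proxy keeps stored configs usable without a manual re-save.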


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
src/components/SettingsPanel.tsx (1)

820-842: Consider gating Reasoning Effort by API type to avoid confusing no-op configs.

The control is editable for Claude/Gemini even though it only applies to OpenAI-compatible endpoints. Disabling or hiding it for non-OpenAI types would prevent misleading settings.

Suggested refactor

```diff
+  const isOpenAICompatible = aiForm.apiType === 'openai' || aiForm.apiType === 'openai-responses';
 ...
-              <div>
+              <div>
                 <label className="block text-sm font-medium text-gray-700 dark:text-gray-300 mb-1">
                   {t('推理强度', 'Reasoning Effort')}
                 </label>
                 <select
                   value={aiForm.reasoningEffort}
                   onChange={(e) => setAIForm(prev => ({ ...prev, reasoningEffort: e.target.value as '' | AIReasoningEffort }))}
+                  disabled={!isOpenAICompatible}
                   className="w-full px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-800 text-gray-900 dark:text-white"
                 >
```

Also clear stale value when switching away from OpenAI-compatible apiType.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/SettingsPanel.tsx` around lines 820 - 842, The Reasoning
Effort select (using aiForm.reasoningEffort and setAIForm) should be hidden or
disabled unless the current api type is OpenAI-compatible; update the component
rendering around the select to check the API type (e.g., apiType /
isOpenAICompatible flag) and only render or enable the control for
OpenAI-compatible endpoints, and when switching apiType away from
OpenAI-compatible clear the stale aiForm.reasoningEffort value via
setAIForm(prev => ({ ...prev, reasoningEffort: '' })) so non-applicable settings
are removed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/src/routes/proxy.ts`:
- Line 134: The timeout is currently set to 10 minutes whenever reasoningEffort
is truthy regardless of provider; change the conditional so the 10-minute path
is only used when apiType === 'openai-responses' AND reasoningEffort is truthy.
Update the expression that sets the timeout (the const timeout assignment using
apiType and reasoningEffort) to use a strict conjunction (apiType ===
'openai-responses' && !!reasoningEffort) ? 600000 : 60000 so non-OpenAI
providers never get the extended 10-minute hold.
- Around line 96-98: The current mapping only remaps 'minimal' and forwards any
other aiConfig.reasoning_effort which can cause upstream 4xxs; normalize
aiConfig.reasoning_effort (use the aiConfig.reasoning_effort symbol) against an
explicit allowed set (e.g., 'low','medium','high'), map 'minimal' to 'low', and
if the resulting value is not in that allowed set set reasoningEffort to
null/undefined so unknown values are dropped before being forwarded (update the
reasoningEffort assignment in server/src/routes/proxy.ts to perform this check).

In `@src/components/SettingsPanel.tsx`:
- Line 398: When merging restored backup settings, don't copy
cfg.reasoningEffort directly; instead normalize/validate it before assigning to
reasoningEffort so legacy or invalid values (e.g., 'minimal') are converted to
the current allowed enum. Replace the direct assignment of cfg.reasoningEffort
with a call to the same normalization routine used elsewhere (e.g.,
normalizeReasoningEffort(cfg.reasoningEffort)) or inline the same
validation/mapping logic so the restored value is coerced to a valid runtime
option.

---

Nitpick comments:
In `@src/components/SettingsPanel.tsx`:
- Around line 820-842: The Reasoning Effort select (using aiForm.reasoningEffort
and setAIForm) should be hidden or disabled unless the current api type is
OpenAI-compatible; update the component rendering around the select to check the
API type (e.g., apiType / isOpenAICompatible flag) and only render or enable the
control for OpenAI-compatible endpoints, and when switching apiType away from
OpenAI-compatible clear the stale aiForm.reasoningEffort value via
setAIForm(prev => ({ ...prev, reasoningEffort: '' })) so non-applicable settings
are removed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7c104611-a7d7-47ae-94a3-065cfc7baeb0

📥 Commits

Reviewing files that changed from the base of the PR and between 74abfff and 6fbcc01.

📒 Files selected for processing (4)
  • server/src/routes/proxy.ts
  • src/components/SettingsPanel.tsx
  • src/services/aiService.ts
  • src/types/index.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/types/index.ts
  • src/services/aiService.ts

Comment on lines +96 to +98

```typescript
    const reasoningEffort = aiConfig.reasoning_effort === 'minimal'
      ? 'low'
      : aiConfig.reasoning_effort as string | null | undefined;
```


⚠️ Potential issue | 🟠 Major

Normalize reasoningEffort against an allowed set before use.

At Line 96, only 'minimal' is remapped. Other unexpected persisted values are still forwarded and can cause upstream 4xx failures. Normalize to allowed values and drop unknowns.

Suggested fix

```diff
-    const reasoningEffort = aiConfig.reasoning_effort === 'minimal'
-      ? 'low'
-      : aiConfig.reasoning_effort as string | null | undefined;
+    const rawReasoningEffort = typeof aiConfig.reasoning_effort === 'string'
+      ? aiConfig.reasoning_effort.trim().toLowerCase()
+      : undefined;
+    const allowedReasoningEfforts = new Set(['none', 'low', 'medium', 'high', 'xhigh']);
+    const reasoningEffort = rawReasoningEffort === 'minimal'
+      ? 'low'
+      : (rawReasoningEffort && allowedReasoningEfforts.has(rawReasoningEffort) ? rawReasoningEffort : undefined);
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-    const reasoningEffort = aiConfig.reasoning_effort === 'minimal'
-      ? 'low'
-      : aiConfig.reasoning_effort as string | null | undefined;
+    const rawReasoningEffort = typeof aiConfig.reasoning_effort === 'string'
+      ? aiConfig.reasoning_effort.trim().toLowerCase()
+      : undefined;
+    const allowedReasoningEfforts = new Set(['none', 'low', 'medium', 'high', 'xhigh']);
+    const reasoningEffort = rawReasoningEffort === 'minimal'
+      ? 'low'
+      : (rawReasoningEffort && allowedReasoningEfforts.has(rawReasoningEffort) ? rawReasoningEffort : undefined);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/proxy.ts` around lines 96 - 98, The current mapping only
remaps 'minimal' and forwards any other aiConfig.reasoning_effort which can
cause upstream 4xxs; normalize aiConfig.reasoning_effort (use the
aiConfig.reasoning_effort symbol) against an explicit allowed set (e.g.,
'low','medium','high'), map 'minimal' to 'low', and if the resulting value is
not in that allowed set set reasoningEffort to null/undefined so unknown values
are dropped before being forwarded (update the reasoningEffort assignment in
server/src/routes/proxy.ts to perform this check).

```typescript
      ? { ...requestBody, reasoning: { effort: reasoningEffort } }
      : requestBody;

    const timeout = apiType === 'openai-responses' || !!reasoningEffort ? 600000 : 60000;
```


⚠️ Potential issue | 🟠 Major

Scope the 10-minute timeout to OpenAI-compatible reasoning paths only.

Line 134 currently extends timeout whenever reasoningEffort is truthy, even for non-OpenAI providers where no reasoning payload is sent. This can unnecessarily hold proxy resources.

Suggested fix

```diff
-    const timeout = apiType === 'openai-responses' || !!reasoningEffort ? 600000 : 60000;
+    const isOpenAICompatible = apiType === 'openai' || apiType === 'openai-responses';
+    const usesReasoning = isOpenAICompatible && !!reasoningEffort;
+    const timeout = apiType === 'openai-responses' || usesReasoning ? 600000 : 60000;
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-    const timeout = apiType === 'openai-responses' || !!reasoningEffort ? 600000 : 60000;
+    const isOpenAICompatible = apiType === 'openai' || apiType === 'openai-responses';
+    const usesReasoning = isOpenAICompatible && !!reasoningEffort;
+    const timeout = apiType === 'openai-responses' || usesReasoning ? 600000 : 60000;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/proxy.ts` at line 134, The timeout is currently set to 10
minutes whenever reasoningEffort is truthy regardless of provider; change the
conditional so the 10-minute path is only used when apiType ===
'openai-responses' AND reasoningEffort is truthy. Update the expression that
sets the timeout (the const timeout assignment using apiType and
reasoningEffort) to use a strict conjunction (apiType === 'openai-responses' &&
!!reasoningEffort) ? 600000 : 60000 so non-OpenAI providers never get the
extended 10-minute hold.

```typescript
                  customPrompt: cfg.customPrompt,
                  useCustomPrompt: cfg.useCustomPrompt,
                  concurrency: cfg.concurrency,
                  reasoningEffort: cfg.reasoningEffort,
```


⚠️ Potential issue | 🟡 Minor

Normalize restored reasoningEffort during backup merge.

At Line 398, restored values are copied as-is. Legacy backups containing 'minimal' (or invalid values) will remain in state and UI. Normalize here as well to keep behavior consistent without requiring manual edit/re-save.

Suggested fix

```diff
-                  reasoningEffort: cfg.reasoningEffort,
+                  reasoningEffort:
+                    cfg.reasoningEffort === 'minimal'
+                      ? 'low'
+                      : (['none', 'low', 'medium', 'high', 'xhigh'].includes(cfg.reasoningEffort || '')
+                          ? cfg.reasoningEffort
+                          : undefined),
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-                  reasoningEffort: cfg.reasoningEffort,
+                  reasoningEffort:
+                    cfg.reasoningEffort === 'minimal'
+                      ? 'low'
+                      : (['none', 'low', 'medium', 'high', 'xhigh'].includes(cfg.reasoningEffort || '')
+                          ? cfg.reasoningEffort
+                          : undefined),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/SettingsPanel.tsx` at line 398, When merging restored backup
settings, don't copy cfg.reasoningEffort directly; instead normalize/validate it
before assigning to reasoningEffort so legacy or invalid values (e.g.,
'minimal') are converted to the current allowed enum. Replace the direct
assignment of cfg.reasoningEffort with a call to the same normalization routine
used elsewhere (e.g., normalizeReasoningEffort(cfg.reasoningEffort)) or inline
the same validation/mapping logic so the restored value is coerced to a valid
runtime option.

@AmintaCCCP AmintaCCCP merged commit 8741474 into main Mar 25, 2026
5 checks passed
@AmintaCCCP AmintaCCCP deleted the fix/ai-reasoning-support branch March 25, 2026 08:32


Development

Successfully merging this pull request may close these issues.

AI服务配置 (AI service configuration)

1 participant