Fix agent chat CORS and hydration mismatches #9
tianheil3 wants to merge 1 commit into msgbyte:main from
Conversation
📝 Walkthrough

This PR introduces a new API route for proxying chat completions requests, extracts floating particle data into a reusable module, refactors navigation to conditionally render external links, improves theme toggle hydration safety, and adds utility functions for external URL detection and LLM chat URL generation.
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant APIRoute as API Route<br/>/chat
    participant LLMClient as URL Generator
    participant Upstream as Upstream<br/>Endpoint
    Client->>APIRoute: POST with baseUrl param
    APIRoute->>LLMClient: getChatCompletionsUrl(baseUrl)
    LLMClient->>LLMClient: normalize & check if relative
    LLMClient-->>APIRoute: proxy path with encoded baseUrl
    APIRoute->>Upstream: POST /chat/completions<br/>(with forwarded auth)
    alt Has Response Body
        Upstream-->>APIRoute: stream response
        APIRoute-->>Client: stream with headers<br/>(keep-alive, no-cache)
    else No Body
        Upstream-->>APIRoute: response without body
        APIRoute-->>Client: return text + status
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 4
🧹 Nitpick comments (1)
apps/web/src/components/landing/hero-particles.ts (1)
10-21: Protect particle data from accidental mutation.
getFloatingParticles() currently exposes shared mutable module state. Any caller can mutate the returned array/items and affect future renders.

♻️ Proposed hardening
```diff
-const FLOATING_PARTICLES: FloatingParticle[] = [
+const FLOATING_PARTICLES: ReadonlyArray<Readonly<FloatingParticle>> = [
   { id: 0, size: 3.1, x: 14, y: 18, duration: 24, delay: -3 },
   { id: 1, size: 4.2, x: 27, y: 62, duration: 31, delay: -11 },
   { id: 2, size: 2.8, x: 43, y: 26, duration: 20, delay: -7 },
   { id: 3, size: 4.6, x: 61, y: 74, duration: 28, delay: -15 },
   { id: 4, size: 3.4, x: 78, y: 33, duration: 22, delay: -5 },
   { id: 5, size: 2.5, x: 89, y: 57, duration: 26, delay: -18 },
 ];

-export function getFloatingParticles() {
-  return FLOATING_PARTICLES;
+export function getFloatingParticles(): FloatingParticle[] {
+  return FLOATING_PARTICLES.map((p) => ({ ...p }));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/components/landing/hero-particles.ts` around lines 10 - 21, The module currently returns the shared mutable array FLOATING_PARTICLES from getFloatingParticles(), allowing callers to mutate module state; change getFloatingParticles to return an immutable copy instead (e.g., map over FLOATING_PARTICLES and return new objects or a deep-cloned array, or Object.freeze each returned object) so callers cannot modify the original data, and ensure FLOATING_PARTICLES remains unmodified in all uses.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/web/src/app/api/ai/agent/chat/route.ts`:
- Around line 21-28: The fetch to getUpstreamUrl in route.ts has no timeout and
can hang; wrap the call with an AbortController, pass controller.signal into the
fetch options for the upstreamRequest that produces upstreamResponse, and set a
setTimeout (e.g., 30s) to call controller.abort(); clear the timeout after fetch
completes; also handle the abort error path around the code that processes
upstreamResponse (catch AbortError and return a 504 or similar) so aborted
requests free resources and return a proper error.
- Around line 8-28: The route currently reads an unvalidated baseUrl and
server-fetches it (baseUrl, getUpstreamUrl, fetch), enabling SSRF/proxying; fix
by validating baseUrl before using it: ensure it uses an allowed protocol (e.g.,
http or https), parse it with URL and reject non-absolute or non-http(s) values,
then enforce an allowlist (hostname or host pattern) and/or block private IP
ranges (RFC1918/localhost/169.254/::1) before calling getUpstreamUrl or fetch;
if validation fails, return a 400/403 JSON response and do not perform the
upstream fetch.
In `@apps/web/src/components/theme-toggle.tsx`:
- Around line 35-37: The ThemeToggle click handler can run before mount because
currentTheme is undefined; update the ThemeToggle component to track mount state
(e.g., isMounted via useEffect/useRef) and guard interactions: in the onClick
handler for the toggle (and/or set the control disabled or aria-disabled when
not mounted) return early if !isMounted or currentTheme == null/undefined so you
don't flip to "dark" by default; ensure you also skip calling onToggle when
gated so setTheme and onToggle are only invoked after mount.
In `@apps/web/src/lib/ai/agent/llm-client.ts`:
- Around line 48-51: The code builds a proxy query including an
attacker-controlled absolute baseUrl (normalizedBaseUrl) and returns
`/api/ai/agent/chat?${params}`, enabling SSRF/open-proxy; change this to never
forward arbitrary absolute URLs from the client and instead send either a
validated hostname token or a canonical key: validate normalizedBaseUrl against
a server-side whitelist of allowed origins (or convert it to a pre-approved
lookup key) before adding it to URLSearchParams, or drop absolute URLs entirely
and only send relative/known identifiers; ensure the server route
`/api/ai/agent/chat` enforces the same whitelist and rejects any requests with
unrecognized normalizedBaseUrl values so upstream targets cannot be
attacker-controlled.
---
Nitpick comments:
In `@apps/web/src/components/landing/hero-particles.ts`:
- Around line 10-21: The module currently returns the shared mutable array
FLOATING_PARTICLES from getFloatingParticles(), allowing callers to mutate
module state; change getFloatingParticles to return an immutable copy instead
(e.g., map over FLOATING_PARTICLES and return new objects or a deep-cloned
array, or Object.freeze each returned object) so callers cannot modify the
original data, and ensure FLOATING_PARTICLES remains unmodified in all uses.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 412d1a30-df46-49ff-9089-fe9633f7d8e3
📒 Files selected for processing (11)
- apps/web/src/app/api/ai/agent/chat/route.ts
- apps/web/src/components/landing/__tests__/hero-particles.test.ts
- apps/web/src/components/landing/hero-particles.ts
- apps/web/src/components/landing/hero.tsx
- apps/web/src/components/theme-toggle.tsx
- apps/web/src/lib/__tests__/navigation-utils.test.ts
- apps/web/src/lib/ai/agent/__tests__/llm-client.test.ts
- apps/web/src/lib/ai/agent/llm-client.ts
- apps/web/src/lib/navigation-utils.ts
- apps/web/src/lib/navigation.ts
- apps/web/src/lib/navigation.tsx
💤 Files with no reviewable changes (1)
- apps/web/src/lib/navigation.ts
```ts
const baseUrl = request.nextUrl.searchParams.get("baseUrl");

if (!baseUrl) {
  return NextResponse.json(
    { error: "Missing baseUrl query parameter" },
    { status: 400 },
  );
}

try {
  const body = await request.text();
  const authorization = request.headers.get("authorization");

  const upstreamResponse = await fetch(getUpstreamUrl(baseUrl), {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(authorization ? { Authorization: authorization } : {}),
    },
    body,
  });
```
Block SSRF/open-proxy behavior from untrusted baseUrl.
Line 8 accepts arbitrary baseUrl and Line 21 fetches it server-side. This allows proxying to internal/private endpoints unless constrained.
🔐 Proposed mitigation (protocol + allowlist validation)
+const ALLOWED_AGENT_CHAT_ORIGINS = new Set(
+ (process.env.AGENT_CHAT_ALLOWED_ORIGINS ?? "")
+ .split(",")
+ .map((v) => v.trim())
+ .filter(Boolean),
+);
+
function getUpstreamUrl(baseUrl: string) {
- return `${baseUrl.replace(/\/+$/, "")}/chat/completions`;
+ const parsed = new URL(baseUrl);
+ if (parsed.protocol !== "https:" && parsed.protocol !== "http:") {
+ throw new Error("Invalid protocol");
+ }
+ if (
+ ALLOWED_AGENT_CHAT_ORIGINS.size > 0 &&
+ !ALLOWED_AGENT_CHAT_ORIGINS.has(parsed.origin)
+ ) {
+ throw new Error("Origin not allowed");
+ }
+ parsed.pathname = `${parsed.pathname.replace(/\/+$/, "")}/chat/completions`;
+ parsed.search = "";
+ parsed.hash = "";
+ return parsed.toString();
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/src/app/api/ai/agent/chat/route.ts` around lines 8 - 28, The route
currently reads an unvalidated baseUrl and server-fetches it (baseUrl,
getUpstreamUrl, fetch), enabling SSRF/proxying; fix by validating baseUrl before
using it: ensure it uses an allowed protocol (e.g., http or https), parse it
with URL and reject non-absolute or non-http(s) values, then enforce an
allowlist (hostname or host pattern) and/or block private IP ranges
(RFC1918/localhost/169.254/::1) before calling getUpstreamUrl or fetch; if
validation fails, return a 400/403 JSON response and do not perform the upstream
fetch.
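The prompt above also asks for blocking private IP ranges, which neither proposed diff covers. A minimal sketch of such a check (the helper name is illustrative, not part of the PR, and it only handles literal addresses — hostnames that resolve to private IPs would still need a resolution-time check):

```typescript
// Illustrative sketch: reject loopback, link-local, and RFC1918 literals
// before performing the server-side fetch.
function isBlockedHost(hostname: string): boolean {
  const host = hostname.toLowerCase();
  if (host === "localhost" || host === "::1" || host === "[::1]") return true;
  // IPv4 literals: 10/8, 127/8, 172.16/12, 192.168/16, and 169.254/16.
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  if (a === 10 || a === 127) return true;
  if (a === 172 && b >= 16 && b <= 31) return true;
  if (a === 192 && b === 168) return true;
  if (a === 169 && b === 254) return true;
  return false;
}
```

Combined with the allowlist diff above, the route would call this on `parsed.hostname` and return a 403 before any upstream fetch.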
```ts
const upstreamResponse = await fetch(getUpstreamUrl(baseUrl), {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    ...(authorization ? { Authorization: authorization } : {}),
  },
  body,
});
```
Add an upstream timeout to prevent hung proxy requests.
The proxy call currently has no timeout; stalled upstream connections can hold server resources indefinitely.
⏱️ Proposed timeout guard
- const upstreamResponse = await fetch(getUpstreamUrl(baseUrl), {
- method: "POST",
- headers: {
- "Content-Type": "application/json",
- ...(authorization ? { Authorization: authorization } : {}),
- },
- body,
- });
+ const controller = new AbortController();
+ const timeout = setTimeout(() => controller.abort(), 15_000);
+ const upstreamResponse = await fetch(getUpstreamUrl(baseUrl), {
+ method: "POST",
+ headers: {
+ "Content-Type": "application/json",
+ ...(authorization ? { Authorization: authorization } : {}),
+ },
+ body,
+ signal: controller.signal,
+ });
+  clearTimeout(timeout);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 15_000);
const upstreamResponse = await fetch(getUpstreamUrl(baseUrl), {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    ...(authorization ? { Authorization: authorization } : {}),
  },
  body,
  signal: controller.signal,
});
clearTimeout(timeout);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/src/app/api/ai/agent/chat/route.ts` around lines 21 - 28, The fetch
to getUpstreamUrl in route.ts has no timeout and can hang; wrap the call with an
AbortController, pass controller.signal into the fetch options for the
upstreamRequest that produces upstreamResponse, and set a setTimeout (e.g., 30s)
to call controller.abort(); clear the timeout after fetch completes; also handle
the abort error path around the code that processes upstreamResponse (catch
AbortError and return a 504 or similar) so aborted requests free resources and
return a proper error.
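The prompt asks for a 504 on abort, which the committable suggestion above does not include. One way to sketch that error mapping (the helper name is illustrative; `fetch` rejects aborted requests with an error named `AbortError`, or `TimeoutError` when `AbortSignal.timeout` is used):

```typescript
// Illustrative sketch: translate a failed upstream fetch into an HTTP status
// for the proxy response.
function upstreamErrorStatus(err: unknown): number {
  if (
    err instanceof Error &&
    (err.name === "AbortError" || err.name === "TimeoutError")
  ) {
    return 504; // gateway timeout: the upstream exceeded the deadline
  }
  return 502; // bad gateway: upstream unreachable or failed outright
}
```

The route's `catch` block would use this to pick the status for its JSON error response.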
```tsx
onClick={(e) => {
  setTheme(currentTheme === "dark" ? "light" : "dark");
  onToggle?.(e);
```
Guard pre-mount clicks to avoid incorrect first toggle behavior.
On Line 36, currentTheme is undefined before mount, so an early click always sets "dark". Consider disabling interaction until mounted.
💡 Suggested tweak
<Button
size="icon"
variant="ghost"
className={cn("size-8", className)}
+ disabled={!mounted}
onClick={(e) => {
setTheme(currentTheme === "dark" ? "light" : "dark");
onToggle?.(e);
}}
>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/src/components/theme-toggle.tsx` around lines 35 - 37, The
ThemeToggle click handler can run before mount because currentTheme is
undefined; update the ThemeToggle component to track mount state (e.g.,
isMounted via useEffect/useRef) and guard interactions: in the onClick handler
for the toggle (and/or set the control disabled or aria-disabled when not
mounted) return early if !isMounted or currentTheme == null/undefined so you
don't flip to "dark" by default; ensure you also skip calling onToggle when
gated so setTheme and onToggle are only invoked after mount.
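The guard this prompt describes can be isolated as a pure helper so the early return is easy to test (the name and shape are illustrative, not taken from the PR):

```typescript
// Illustrative sketch: compute the next theme only once the component is
// mounted and the resolved theme is known. Returning null signals "ignore
// this click", so the handler never defaults to "dark" pre-mount.
function resolveNextTheme(
  isMounted: boolean,
  currentTheme: string | undefined,
): "light" | "dark" | null {
  if (!isMounted || currentTheme == null) return null;
  return currentTheme === "dark" ? "light" : "dark";
}
```

The `onClick` handler would early-return when this yields `null`, so neither `setTheme` nor `onToggle` fires before mount.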
```ts
const params = new URLSearchParams({
  baseUrl: normalizedBaseUrl,
});
return `/api/ai/agent/chat?${params.toString()}`;
```
Critical: arbitrary upstream proxy target enables SSRF/open-proxy abuse.
On Line 48–Line 51, any absolute baseUrl is forwarded to the proxy query. With the current proxy route behavior (server-side fetch using that query param), this allows attacker-controlled upstream targets unless strict validation is enforced server-side.
🔒 Hardening direction
# In apps/web/src/app/api/ai/agent/chat/route.ts (server-side validation)
+const ALLOWED_HOSTS = new Set(["api.openai.com"]);
+
+function parseAndValidateBaseUrl(raw: string): URL | null {
+ try {
+ const u = new URL(raw);
+ if (u.protocol !== "https:") return null;
+ if (!ALLOWED_HOSTS.has(u.hostname)) return null;
+ return u;
+ } catch {
+ return null;
+ }
+}
+
-const baseUrl = request.nextUrl.searchParams.get("baseUrl");
+const rawBaseUrl = request.nextUrl.searchParams.get("baseUrl");
+if (!rawBaseUrl) {
+ return NextResponse.json({ error: "Missing baseUrl query parameter" }, { status: 400 });
+}
+const validated = parseAndValidateBaseUrl(rawBaseUrl);
+if (!validated) {
+ return NextResponse.json({ error: "Invalid baseUrl" }, { status: 400 });
+}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/src/lib/ai/agent/llm-client.ts` around lines 48 - 51, The code
builds a proxy query including an attacker-controlled absolute baseUrl
(normalizedBaseUrl) and returns `/api/ai/agent/chat?${params}`, enabling
SSRF/open-proxy; change this to never forward arbitrary absolute URLs from the
client and instead send either a validated hostname token or a canonical key:
validate normalizedBaseUrl against a server-side whitelist of allowed origins
(or convert it to a pre-approved lookup key) before adding it to
URLSearchParams, or drop absolute URLs entirely and only send relative/known
identifiers; ensure the server route `/api/ai/agent/chat` enforces the same
whitelist and rejects any requests with unrecognized normalizedBaseUrl values so
upstream targets cannot be attacker-controlled.
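The "canonical key" alternative mentioned in this prompt could look like the following on the client side (provider keys and origins are purely illustrative): the client sends a known identifier instead of a raw URL, and the server resolves it against the same table, so no attacker-controlled absolute URL ever reaches `fetch`.

```typescript
// Illustrative sketch of the lookup-key approach. The server route would
// share (or mirror) this table and reject unknown keys.
const PROVIDER_ORIGINS: Record<string, string> = {
  openai: "https://api.openai.com/v1",
  // ...additional pre-approved providers
};

function getChatCompletionsProxyPath(providerKey: string): string | null {
  // Reject unknown providers client-side too; the server remains the
  // authoritative check.
  if (!(providerKey in PROVIDER_ORIGINS)) return null;
  const params = new URLSearchParams({ provider: providerKey });
  return `/api/ai/agent/chat?${params.toString()}`;
}
```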
Summary
Testing
- `cd apps/web && bun test src/lib/ai/agent/__tests__/llm-client.test.ts`
- `cd apps/web && bun test src/lib/__tests__/navigation-utils.test.ts src/components/landing/__tests__/hero-particles.test.ts`
- `NODE_ENV=production NEXT_PUBLIC_SITE_URL=http://localhost:4100 UPSTASH_REDIS_REST_URL=http://localhost:8079 UPSTASH_REDIS_REST_TOKEN=cutia_redis_token bun run build:web`

Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Tests