
Conversation

@dingyi222666
Member

Summary

This PR updates multiple adapters with the latest models and adds new features, including image input support, reasoning effort model variants, and expanded model lists.

New Features

  • Wenxin Adapter: Added image input support for vision-enabled models with async image content processing
  • Shared Adapter: New expandReasoningEffortModelVariants utility for consistent reasoning effort model handling
  • Doubao Adapter: Automatic reasoning effort model variant expansion (non-thinking, minimal, low, medium, high, xhigh)

Model Updates

  • Zhipu: Added GLM-4.7-FlashX model (200k tokens)
  • Wenxin: Added ERNIE 5.0 thinking models and ERNIE 4.5 variants with vision support
  • Spark: Updated spark-x1 to spark-x1.5 (128k tokens)
  • Claude: Updated to latest models (3.7 sonnet thinking, sonnet-4.5, opus-4.5, haiku-4.5)
  • Qwen: Added qwen3-max thinking variants (262k tokens)
  • Doubao: Added latest models (seed-1-8, deepseek-v3-2, glm-4-7, kimi-k2-thinking)

Other Changes

  • Extended image input model detection to include kimi-k2.5
  • Refactored reasoning effort parsing with the standardized parseOpenAIModelNameWithReasoningEffort utility (sketched below)
  • Improved thinking model detection for models with a 'thinking' suffix
  • Removed deprecated Claude 3 models
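
The PR doesn't show the body of parseOpenAIModelNameWithReasoningEffort, but a minimal sketch of suffix-based parsing might look like the following. The suffix set comes from the Doubao variant list above; the hyphen separator, function name, and return shape are assumptions rather than the shared adapter's actual API.

// Hypothetical sketch only: suffix set from the Doubao variant list in this
// PR; the separator and return shape are assumptions.
const reasoningEffortSuffixes = [
    'non-thinking',
    'minimal',
    'low',
    'medium',
    'high',
    'xhigh'
] as const

type ReasoningEffort = (typeof reasoningEffortSuffixes)[number]

function parseModelNameWithReasoningEffort(model: string): {
    model: string
    reasoningEffort?: ReasoningEffort
} {
    for (const suffix of reasoningEffortSuffixes) {
        if (model.endsWith(`-${suffix}`)) {
            // Strip '-<suffix>' to recover the base model name.
            return {
                model: model.slice(0, -(suffix.length + 1)),
                reasoningEffort: suffix
            }
        }
    }
    return { model }
}

// parseModelNameWithReasoningEffort('doubao-seed-1-8-251228-high')
// -> { model: 'doubao-seed-1-8-251228', reasoningEffort: 'high' }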

Add GLM-4.7-FlashX model with 200k token context window to the list of supported models in the Zhipu adapter.
Add support for vision-enabled models with image input capability. Expand model list to include new ERNIE 5.0 thinking models and ERNIE 4.5 variants. Update message types to support both text and image content formats.

- Add ModelCapabilities.ImageInput support for vision models
- Update langchainMessageToWenXinMessage to handle image content asynchronously (sketched below)
- Support both string and mixed text/image message content
- Add thinking model detection for ERNIE 5.0 thinking models
- Include new ERNIE model variants (5.0 thinking, 4.5 turbo VL, etc.)
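
A minimal sketch of the async image-content conversion described above, using hypothetical names (toWenxinContent, fetchImageAsDataUrl) and simplified content-block shapes; the real adapter routes this through langchainMessageToWenXinMessage with fetchImageUrl and ChatLunaPlugin:

// Hypothetical sketch: names and shapes here are assumptions, not the
// adapter's real API.
type WenxinContentPart =
    | { type: 'text'; text: string }
    | { type: 'image_url'; image_url: { url: string } }

type InputBlock = {
    type: string
    text?: string
    image_url?: { url: string }
}

async function toWenxinContent(
    content: string | InputBlock[],
    modelSupportsImages: boolean,
    fetchImageAsDataUrl: (url: string) => Promise<string>
): Promise<string | WenxinContentPart[]> {
    // Plain string content passes through unchanged.
    if (typeof content === 'string') return content

    const parts: WenxinContentPart[] = []
    for (const block of content) {
        if (block.type === 'text' && typeof block.text === 'string') {
            parts.push({ type: 'text', text: block.text })
        } else if (
            block.type === 'image_url' &&
            block.image_url &&
            modelSupportsImages
        ) {
            // Resolve the image asynchronously, e.g. inlining it as a data URL.
            const url = await fetchImageAsDataUrl(block.image_url.url)
            parts.push({ type: 'image_url', image_url: { url } })
        }
        // Image blocks are dropped for models without ImageInput capability.
    }
    return parts
}
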
…variants

Refactor reasoning effort model handling with centralized variant expansion. Update Doubao adapter with latest models and improve thinking model detection using standardized parsing.

- Add expandReasoningEffortModelVariants utility for consistent model variant generation (sketched after this list)
- Update parseOpenAIModelNameWithReasoningEffort usage in Doubao requester
- Add latest Doubao models (seed-1-8, deepseek-v3-2, glm-4-7, kimi-k2-thinking)
- Expand reasoning effort model list with new variants (non-thinking, minimal, low, medium, high, xhigh)
- Add reasoning effort models configuration for automatic variant expansion
- Extend image input support to doubao-seed-1-8 models
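
A minimal sketch of what such variant expansion could look like; the real expandReasoningEffortModelVariants lives in shared-adapter, and the signature and model-info shape here are assumptions:

// Hypothetical sketch; the real function's signature may differ.
interface ModelInfo {
    name: string
    maxTokens: number
}

const reasoningEffortModelSuffixes = [
    'non-thinking',
    'minimal',
    'low',
    'medium',
    'high',
    'xhigh'
]

function expandReasoningEffortVariants(base: ModelInfo): ModelInfo[] {
    // Keep the base entry and add one entry per reasoning-effort suffix,
    // inheriting the rest of the base model's metadata.
    return [
        base,
        ...reasoningEffortModelSuffixes.map((suffix) => ({
            ...base,
            name: `${base.name}-${suffix}`
        }))
    ]
}

Expanding variants at one central point keeps each adapter's model list declaration short while still exposing every effort level to users.
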
Replace deprecated spark-x1 model with spark-x1.5, maintaining 128k token context and tool call capabilities.
…i-k2.5 image support

Update Claude model list to latest versions and add kimi-k2.5 to vision model matchers.

- Remove deprecated Claude 3 models (sonnet-20240620, opus-20240229, sonnet-20240229, haiku-20240307)
- Add latest Claude models (3.7 sonnet thinking, sonnet-4.5, opus-4.5, opus-4-1)
- Replace claude-3-5-haiku with claude-haiku-4-5-20251001
- Add kimi-k2.5 to image input model matchers
Add qwen3-max thinking and non-thinking variants with 262k token context window support.
@coderabbitai
Contributor

coderabbitai bot commented Feb 3, 2026

Warning

Rate limit exceeded

@dingyi222666 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 15 minutes and 26 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

Overview

This PR updates model lists and model-handling logic across multiple AI adapter packages, introduces a unified expansion mechanism for reasoning effort model variants, extends image input support, and updates message type definitions to support richer content structures.

Changes

Cohort / Files | Summary
Claude adapter model updates
packages/adapter-claude/src/client.ts
Removes three older Claude models (claude-3-5-sonnet-20240620, claude-3-opus-20240229, claude-3-sonnet-20240229) and adds two replacements (claude-haiku-4-5-20251001, claude-3-5-haiku-20241022).
Doubao adapter models and reasoning effort expansion
packages/adapter-doubao/src/client.ts, packages/adapter-doubao/src/requester.ts
Adds the doubao-seed-1-8-251228 model, expands reasoning effort model variants via expandReasoningEffortModelVariants, and replaces the model-name parsing logic with the dedicated parseOpenAIModelNameWithReasoningEffort parser.
Qwen and Spark adapter model updates
packages/adapter-qwen/src/client.ts, packages/adapter-spark/src/client.ts
Qwen adds two thinking model variants; Spark replaces spark-x1 with spark-x1.5 and updates the ToolCall capability filter.
Wenxin adapter expansion
packages/adapter-wenxin/src/client.ts, packages/adapter-wenxin/src/types.ts, packages/adapter-wenxin/src/utils.ts, packages/adapter-wenxin/src/requester.ts
Adds multiple ERNIE model variants, extends the WenxinMessageRole union type, introduces a WenxinMessageContent type for rich content, adds a tool_call_id field, and makes langchainMessageToWenXinMessage asynchronous to support image processing.
Zhipu adapter model update
packages/adapter-zhipu/src/client.ts
Adds a GLM-4.7-FlashX model entry.
Shared adapter reasoning effort support
packages/shared-adapter/src/client.ts, packages/shared-adapter/src/requester.ts
Adds the reasoningEffortModelSuffixes constant and the expandReasoningEffortModelVariants function for unified model variant handling, and extends imageModelMatchers to support kimi-k2.5.

Estimated code review effort

🎯 3 (moderate complexity) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐰 The model list grows anew, with variants unfolding into view,
Reasoning effort joins the play, the window to images opens today,
Messages carry richer content through, the async stream flows elegant too~

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The title accurately summarizes the PR's main change: updating multiple adapters to support the latest models and features.
Description check | ✅ Passed | The description clearly and thoroughly lists all major changes, including new features and per-adapter model updates, and is highly relevant to the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/adapter-updates

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist
Contributor

Summary of Changes

Hello @dingyi222666, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the platform's AI model integration by bringing various adapters up-to-date with the latest models and introducing new capabilities. The changes aim to broaden the range of supported models, improve multimodal interaction through image input, and standardize the handling of model variants, ultimately providing users with access to more powerful and flexible AI functionalities.

Highlights

  • Multi-Adapter Model Updates: Numerous AI model adapters (Claude, Doubao, Qwen, Spark, Wenxin, Zhipu) have been updated to include their latest available models and variants, ensuring access to the most current and capable LLMs.
  • Image Input Support: The Wenxin Adapter now supports image input for vision-enabled models, including asynchronous image content processing, enhancing multimodal interaction capabilities.
  • Reasoning Effort Model Handling: A new utility expandReasoningEffortModelVariants has been introduced in the shared adapter for consistent handling and expansion of reasoning effort model variants across different platforms. The Doubao adapter now leverages this for automatic expansion of its reasoning effort models.
  • Deprecated Model Removal: Outdated Claude 3 models have been removed from the Claude adapter to streamline the available model list and ensure users are directed to the latest versions.
  • Enhanced Model Detection: Image input model detection has been extended to include kimi-k2.5, and the logic for detecting 'thinking' models has been improved and standardized using parseOpenAIModelNameWithReasoningEffort.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Changelog
  • packages/adapter-claude/src/client.ts
    • Removed deprecated Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Sonnet models.
    • Added new Claude models: claude-3-5-sonnet-20241022, claude-3-7-sonnet-20250219, claude-3-7-sonnet-thinking-20250219, claude-haiku-4-5-20251001.
  • packages/adapter-doubao/src/client.ts
    • Imported expandReasoningEffortModelVariants for dynamic model variant generation.
    • Added new Doubao models: doubao-seed-1-8-251228, doubao-seed-1-6-251015, doubao-seed-1-6-lite-251015, doubao-seed-1-6-flash-250828, deepseek-v3-2-251201, glm-4-7-251222, kimi-k2-thinking-251104.
    • Updated imageInputSupportModels to include doubao-seed-1-8.
    • Implemented automatic expansion of reasoning effort model variants for specific Doubao models using the new utility.
  • packages/adapter-doubao/src/requester.ts
    • Imported parseOpenAIModelNameWithReasoningEffort.
    • Refactored the logic for detecting and handling 'thinking' models to use the standardized parseOpenAIModelNameWithReasoningEffort utility.
  • packages/adapter-qwen/src/client.ts
    • Added qwen3-max thinking and non-thinking variants: qwen3-max-2026-01-23-thinking and qwen3-max-2026-01-23-non-thinking.
  • packages/adapter-spark/src/client.ts
    • Updated spark-x1 to spark-x1.5 in the model list.
    • Updated model capabilities to reflect the spark-x1.5 model.
  • packages/adapter-wenxin/src/client.ts
    • Imported ModelCapabilities and supportImageInput.
    • Added numerous new ERNIE 5.0 thinking models and ERNIE 4.5 variants, including vision-enabled ones.
    • Enabled ModelCapabilities.ImageInput for models that support image input using the supportImageInput utility.
    • Extended thinking model detection to include models with a 'thinking' suffix.
  • packages/adapter-wenxin/src/requester.ts
    • Modified langchainMessageToWenXinMessage function signature to accept plugin and model parameters.
    • Updated the call to langchainMessageToWenXinMessage to pass the new parameters, enabling asynchronous image content processing.
  • packages/adapter-wenxin/src/types.ts
    • Introduced WenxinMessageContent type to allow message content to be either a string or an array of text/image_url objects, supporting multimodal input (see the type sketch after this changelog).
    • Updated WenxinMessage interface to use WenxinMessageContent for its content property.
  • packages/adapter-wenxin/src/utils.ts
    • Imported MessageContentImageUrl, fetchImageUrl, supportImageInput, ChatLunaPlugin, and isMessageContentImageUrl.
    • Transformed langchainMessageToWenXinMessage into an async function.
    • Implemented logic to process image URLs within messages, fetching them asynchronously and converting them into the appropriate image_url format for Wenxin models, conditional on model image input support.
  • packages/adapter-zhipu/src/client.ts
    • Added the GLM-4.7-FlashX model with 200,000 tokens.
  • packages/shared-adapter/src/client.ts
    • Defined reasoningEffortModelSuffixes for various thinking model variants.
    • Introduced expandReasoningEffortModelVariants utility function to generate all reasoning effort variants for a given model name.
    • Extended imageModelMatchers to include kimi-k2.5 for image input detection.
  • packages/shared-adapter/src/requester.ts
    • Imported expandReasoningEffortModelVariants.
    • Refactored the internal getModels function to use expandReasoningEffortModelVariants for dynamically adding all reasoning effort model variants.
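
As referenced in the types.ts entry above, here is a hedged sketch of the WenxinMessageContent union; the field names follow this changelog, but the exact shape is an assumption:

// Hypothetical sketch of the message types described in the changelog.
type WenxinMessageContent =
    | string
    | Array<
          | { type: 'text'; text: string }
          | { type: 'image_url'; image_url: { url: string } }
      >

interface WenxinMessage {
    // Role union widened per the changelog; the exact members are assumptions.
    role: 'user' | 'assistant' | 'system' | 'tool'
    content: WenxinMessageContent
    tool_call_id?: string // present on tool-result messages
}
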
Activity
  • No specific activity (comments, reviews, or progress updates) has been provided for this pull request beyond its initial creation.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request updates multiple adapters by adding new models, removing deprecated ones, and implementing new features such as image input support for the Wenxin adapter and reasoning effort model variants. The refactoring to centralize model variant generation improves maintainability. A security review found no high-severity or critical vulnerabilities. I suggest improving error logging in the new image handling logic.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/adapter-qwen/src/client.ts (1)

175-179: ⚠️ Potential issue | 🟠 Major

Fix non-thinking models being misclassified as thinking models.
The newly added *-non-thinking names contain the substring 'thinking', so the current check treats them as thinking models and takes the wrong behavior branch. Explicitly exclude non-thinking:

🔧 Suggested change
-                isThinkModel:
-                    model.includes('reasoner') ||
-                    model.includes('r1') ||
-                    model.includes('thinking') ||
-                    model.includes('qwq')
+                isThinkModel:
+                    model.includes('reasoner') ||
+                    model.includes('r1') ||
+                    (model.includes('thinking') &&
+                        !model.includes('non-thinking')) ||
+                    model.includes('qwq')
🤖 Fix all issues with AI agents
In `@packages/adapter-wenxin/src/utils.ts`:
- Around lines 72-128: The code assumes images is an array and rawMessage.content is a string; add defensive normalization. Verify images with Array.isArray(images) before calling images.map, and ignore non-array/non-null images. When building msg.content, normalize rawMessage.content to a string: use it as-is if it is a string, and if it is an array (ContentBlock[]) join or extract the text parts into a safe string before pushing it into msg.content. Keep the existing supportImageInput, fetchImageUrl, and isMessageContentImageUrl logic, but guard all map calls and casts (images, rawMessage.content, msg.content) with explicit type checks and fallbacks to avoid runtime errors.

…es handling

Add type-safe normalization helpers to prevent runtime errors when processing message content and images (a hedged sketch follows this list):

- Add normalizeContentToString() to safely convert content from string or ContentBlock[] to string
- Add extractImages() to safely validate and extract string[] from images with type guards
- Use Array.isArray() to verify images before mapping, treating invalid types as empty array
- Handle content normalization for all code paths (initial setup and array processing)
- Add proper fallbacks for unknown types to prevent crashes
- Maintain existing supportImageInput, fetchImageUrl, and isMessageContentImageUrl logic

This guards against runtime errors from unexpected type casting and array operations.
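
A hedged sketch of the two helpers this commit names; the exact signatures in adapter-wenxin/src/utils.ts may differ:

// Hypothetical sketches of the normalization helpers named in the commit
// message above.
function normalizeContentToString(content: unknown): string {
    if (typeof content === 'string') return content
    if (Array.isArray(content)) {
        // Keep only the text parts of a ContentBlock[]-style payload.
        return content
            .map((block) =>
                block && typeof block.text === 'string' ? block.text : ''
            )
            .join('')
    }
    // Unknown shapes fall back to an empty string rather than crashing.
    return ''
}

function extractImages(images: unknown): string[] {
    // Treat anything that is not an array as "no images".
    if (!Array.isArray(images)) return []
    // Keep only string entries and drop everything else.
    return images.filter((item): item is string => typeof item === 'string')
}
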
Format long conditional checks and function calls for better readability across multiple lines.
@dingyi222666 dingyi222666 merged commit 673705d into v1-dev Feb 3, 2026
4 of 5 checks passed
@dingyi222666 dingyi222666 deleted the feat/adapter-updates branch February 3, 2026 13:35