feat: adapter updates for latest models and features #708
Conversation
Add GLM-4.7-FlashX model with 200k token context window to the list of supported models in the Zhipu adapter.
Add support for vision-enabled models with image input capability. Expand the model list to include new ERNIE 5.0 thinking models and ERNIE 4.5 variants. Update message types to support both text and image content formats.
- Add ModelCapabilities.ImageInput support for vision models
- Update langchainMessageToWenXinMessage to handle image content asynchronously
- Support both string and mixed text/image message content
- Add thinking-model detection for ERNIE 5.0 thinking models
- Include new ERNIE model variants (5.0 thinking, 4.5 turbo VL, etc.)
…variants Refactor reasoning effort model handling with centralized variant expansion. Update the Doubao adapter with the latest models and improve thinking-model detection using standardized parsing. (A sketch of the variant-expansion utility follows the commit list below.)
- Add expandReasoningEffortModelVariants utility for consistent model variant generation
- Update parseOpenAIModelNameWithReasoningEffort usage in the Doubao requester
- Add the latest Doubao models (seed-1-8, deepseek-v3-2, glm-4-7, kimi-k2-thinking)
- Expand the reasoning effort model list with new variants (non-thinking, minimal, low, medium, high, xhigh)
- Add a reasoning effort models configuration for automatic variant expansion
- Extend image input support to doubao-seed-1-8 models
Replace deprecated spark-x1 model with spark-x1.5, maintaining 128k token context and tool call capabilities.
…i-k2.5 image support Update the Claude model list to the latest versions and add kimi-k2.5 to the vision model matchers.
- Remove deprecated Claude 3 models (sonnet-20240620, opus-20240229, sonnet-20240229, haiku-20240307)
- Add the latest Claude models (3.7 sonnet thinking, sonnet-4.5, opus-4.5, opus-4-1)
- Replace claude-3-5-haiku with claude-haiku-4-5-20251001
- Add kimi-k2.5 to the image input model matchers
Add qwen3-max thinking and non-thinking variants with 262k token context window support.
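The reasoning-effort commit above replaces hand-maintained per-adapter variant lists with a single expansion utility. Below is a minimal sketch of what such a utility could look like; the function name and the six effort levels come from the commit message, while the signature, the ModelInfo shape, and the suffix-based naming convention are assumptions for illustration.

```typescript
// Illustrative only: the real signature and model-info shape are not shown
// in this PR page. Effort levels are taken from the commit message.
const REASONING_EFFORTS = [
    'non-thinking',
    'minimal',
    'low',
    'medium',
    'high',
    'xhigh'
] as const

interface ModelInfo {
    name: string
    maxTokens: number
}

function expandReasoningEffortModelVariants(base: ModelInfo): ModelInfo[] {
    // Keep the base entry and append one variant per reasoning-effort suffix,
    // e.g. 'glm-4-7' -> 'glm-4-7-high'.
    return [
        base,
        ...REASONING_EFFORTS.map((effort) => ({
            ...base,
            name: `${base.name}-${effort}`
        }))
    ]
}

// expandReasoningEffortModelVariants({ name: 'glm-4-7', maxTokens: 200_000 })
// yields glm-4-7 plus glm-4-7-non-thinking, glm-4-7-minimal, ..., glm-4-7-xhigh
```

Centralizing the expansion means each adapter registers a base model once and all effort variants stay consistent across adapters.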
Overview

This PR updates model lists and model-handling logic across multiple AI adapter packages, introduces a unified expansion mechanism for reasoning-effort model variants, extends image input support, and updates message type definitions to support richer content structures.
Summary of Changes

This pull request enhances the platform's AI model integration by bringing various adapters up to date with the latest models and introducing new capabilities. The changes broaden the range of supported models, improve multimodal interaction through image input, and standardize the handling of model variants, giving users access to more powerful and flexible AI functionality.
Code Review
This pull request updates multiple adapters by adding new models, removing deprecated ones, and implementing new features such as image input support for the Wenxin adapter and reasoning effort model variants. The refactoring to centralize model variant generation improves maintainability. A security review found no high-severity or critical vulnerabilities. I suggest improving error logging in the new image handling logic.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/adapter-qwen/src/client.ts (1)
175-179: ⚠️ Potential issue | 🟠 Major. Fix non-thinking models being misclassified as thinking.
The newly added `*-non-thinking` names contain the substring `thinking`, so the current check treats them as thinking models and takes the wrong behavior branch. Explicitly exclude `non-thinking`. 🔧 Suggested change
```diff
-        isThinkModel:
-            model.includes('reasoner') ||
-            model.includes('r1') ||
-            model.includes('thinking') ||
-            model.includes('qwq')
+        isThinkModel:
+            model.includes('reasoner') ||
+            model.includes('r1') ||
+            (model.includes('thinking') &&
+                !model.includes('non-thinking')) ||
+            model.includes('qwq')
```
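To make the effect concrete, here is the corrected check extracted as a standalone predicate; `isThink` is a hypothetical name for illustration, and the logic mirrors the suggested diff above.

```typescript
// Hypothetical standalone version of the corrected check.
const isThink = (model: string): boolean =>
    model.includes('reasoner') ||
    model.includes('r1') ||
    (model.includes('thinking') && !model.includes('non-thinking')) ||
    model.includes('qwq')

console.log(isThink('qwen3-max-thinking')) // true
console.log(isThink('qwen3-max-non-thinking')) // false (was true before the fix)
```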
🤖 Fix all issues with AI agents
In `@packages/adapter-wenxin/src/utils.ts`:
- Around lines 72-128: The code assumes `images` is an array and `rawMessage.content` is a string; add defensive normalization. Verify `images` with `Array.isArray(images)` before calling `images.map`, and ignore non-array/non-null values. When building `msg.content`, normalize `rawMessage.content` to a string: use it directly if it is already a string; if it is an array (`ContentBlock[]`), join or extract the text parts to produce a safe string before pushing into `msg.content`. Keep the existing `supportImageInput`, `fetchImageUrl`, and `isMessageContentImageUrl` logic, but guard all `map` calls and casts (`images`, `rawMessage.content`, `msg.content`) with explicit type checks and fallbacks to avoid runtime errors.
…es handling Add type-safe normalization helpers to prevent runtime errors when processing message content and images (a sketch of the two helpers follows below):
- Add normalizeContentToString() to safely convert content from string or ContentBlock[] to string
- Add extractImages() to safely validate and extract string[] from images with type guards
- Use Array.isArray() to verify images before mapping, treating invalid types as an empty array
- Handle content normalization on all code paths (initial setup and array processing)
- Add proper fallbacks for unknown types to prevent crashes
- Maintain the existing supportImageInput, fetchImageUrl, and isMessageContentImageUrl logic

This guards against runtime errors from unexpected type casts and array operations.
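The helper names and behavior come from the commit message above; the sketch below shows one way they could be written. The simplified ContentBlock shape is an assumption, so treat this as illustrative rather than the actual implementation.

```typescript
// Simplified ContentBlock shape assumed for illustration; the real message
// content types are richer.
type ContentBlock = { type: string; text?: string; image_url?: { url: string } }

function normalizeContentToString(content: unknown): string {
    if (typeof content === 'string') return content
    if (Array.isArray(content)) {
        // Join the text parts of a ContentBlock[] into a single safe string.
        return (content as ContentBlock[])
            .filter((block) => block.type === 'text' && typeof block.text === 'string')
            .map((block) => block.text ?? '')
            .join('\n')
    }
    // Unknown shape: fall back to an empty string instead of crashing.
    return ''
}

function extractImages(images: unknown): string[] {
    // Non-array (including null/undefined) inputs are ignored entirely.
    if (!Array.isArray(images)) return []
    return images.filter((item): item is string => typeof item === 'string')
}
```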
Format long conditional checks and function calls for better readability across multiple lines.
Summary
This PR updates multiple adapters with the latest models and adds new features, including image input support, reasoning effort model variants, and model list expansions.
New Features
- `expandReasoningEffortModelVariants` utility for consistent reasoning effort model handling

Model Updates
Other Changes
- Updated `parseOpenAIModelNameWithReasoningEffort` usage for standardized thinking-model parsing
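The `parseOpenAIModelNameWithReasoningEffort` utility is only referenced in this PR page, not shown. As a rough illustration of what suffix-based effort parsing could look like, here is a hypothetical sketch; `parseModelNameWithReasoningEffort` is an illustrative stand-in, and the signature and return shape are assumptions.

```typescript
type ReasoningEffort =
    | 'non-thinking'
    | 'minimal'
    | 'low'
    | 'medium'
    | 'high'
    | 'xhigh'

const EFFORT_SUFFIXES: ReasoningEffort[] = [
    'non-thinking',
    'minimal',
    'low',
    'medium',
    'high',
    'xhigh'
]

// Hypothetical: splits a variant name like 'kimi-k2-thinking-high' into the
// base model and its reasoning effort. None of the suffixes is a suffix of
// another, so scan order does not matter.
function parseModelNameWithReasoningEffort(name: string): {
    model: string
    reasoningEffort?: ReasoningEffort
} {
    for (const effort of EFFORT_SUFFIXES) {
        const suffix = `-${effort}`
        if (name.endsWith(suffix)) {
            return { model: name.slice(0, -suffix.length), reasoningEffort: effort }
        }
    }
    return { model: name }
}

// parseModelNameWithReasoningEffort('kimi-k2-thinking-high')
//   -> { model: 'kimi-k2-thinking', reasoningEffort: 'high' }
```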