
Conversation

@lwangverizon
Contributor

Please ensure you have read the contribution guide before creating a pull request.

Link to Issue or Description of Change

1. Link to an existing issue (if applicable):

Allow google search tool to set different model #4135

2. Or, if no issue exists, describe the change:


Problem:
Currently, the Google Search tool inherits the LLM model configured on the parent agent for processing and summarizing search results. This is limiting for users who want to decouple the agent's reasoning model from the model used for search summarization (e.g., for cost optimization, or to use a lighter-weight model for simple summarization tasks).

Solution:
I have updated the Google Search tool to accept an optional LLM model parameter.

  • Custom Model: Users can now explicitly specify which model should be used for processing search results.
  • Default Behavior: If no model is specified, the tool defaults to the parent agent's model, ensuring backward compatibility.

    # If a custom model is specified, use it instead of the original model
    if self.model is not None:
      llm_request.model = self.model
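
For context, here is a minimal usage sketch. The Agent import path and the model names are illustrative assumptions based on the adk-python layout, not taken from this PR:

    from google.adk.agents import Agent
    from google.adk.tools.google_search_tool import GoogleSearchTool

    # Hypothetical setup: the agent reasons with a stronger model while the
    # search tool processes and summarizes results with a cheaper one.
    agent = Agent(
        name='research_agent',
        model='gemini-2.5-pro',  # the agent's reasoning model (assumed name)
        tools=[GoogleSearchTool(model='gemini-2.5-flash')],
    )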

Testing Plan

Added a new test case test_process_llm_request_with_custom_model in test_google_search_tool.py that verifies (a hedged sketch follows the list):

  • When a custom model parameter is provided to GoogleSearchTool, it overrides the model from the incoming llm_request during process_llm_request.
  • The tool correctly uses the custom model for LLM calls while preserving the other request parameters.
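
A sketch of what the new test might look like; the import paths and the tool_context fixture are assumptions based on the adk-python layout, not copied from the PR:

    import pytest

    from google.adk.models.llm_request import LlmRequest
    from google.adk.tools.google_search_tool import GoogleSearchTool

    @pytest.mark.asyncio
    async def test_process_llm_request_with_custom_model(tool_context):
      # The tool carries its own model; the incoming request carries the
      # parent agent's model.
      tool = GoogleSearchTool(model='gemini-2.5-flash')
      llm_request = LlmRequest(model='gemini-2.5-pro')

      await tool.process_llm_request(
          tool_context=tool_context, llm_request=llm_request
      )

      # The custom model replaces the agent's model on the request.
      assert llm_request.model == 'gemini-2.5-flash'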

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

(base) wanglu2:adk-python/ (feature/allow-google-search-tool-set-different-llm✗) $ uv run pytest ./tests/unittests/tools/test_google_search_tool.py [22:07:32]
======================================================================== test session starts ========================================================================
platform darwin -- Python 3.13.1, pytest-9.0.2, pluggy-1.6.0
rootdir: /Users/wanglu2/Documents/Git/adk-python
configfile: pyproject.toml
plugins: mock-3.15.1, anyio-4.12.0, xdist-3.8.0, asyncio-1.3.0, langsmith-0.6.0
asyncio: mode=Mode.AUTO, debug=False, asyncio_default_fixture_loop_scope=function, asyncio_default_test_loop_scope=function
collected 21 items

tests/unittests/tools/test_google_search_tool.py ..................... [100%]

======================================================================== 21 passed in 7.91s =========================================================================

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context

@gemini-code-assist
Contributor

Summary of Changes

Hello @lwangverizon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the GoogleSearchTool by introducing an optional parameter that lets users specify a dedicated large language model for processing search results. This gives users greater control over LLM usage, enabling scenarios where a different model is preferred for search summarization, such as for cost efficiency or specific task requirements. Existing functionality remains unchanged: if no custom model is provided, the tool defaults to the agent's model.

Highlights

  • Custom LLM Model for Google Search Tool: The GoogleSearchTool now accepts an optional model parameter during initialization, allowing users to specify a distinct LLM for processing and summarizing search results, independent of the main agent's model.
  • Decoupling LLM Usage: This change provides flexibility for users to optimize costs or use lightweight models for search summarization by decoupling the LLM used for search from the primary agent's reasoning model.
  • Backward Compatibility: The tool maintains backward compatibility; if no custom model is explicitly provided, it defaults to using the parent agent's LLM model as before.
  • New Unit Tests: New unit tests have been added to verify that the custom model parameter correctly overrides the LLM request model and that the default behavior is preserved when no custom model is specified.


@adk-bot added the tools [Component] label on Jan 13, 2026.

@gemini-code-assist (bot) left a comment:


Code Review

This pull request effectively adds the capability to specify a custom model for the GoogleSearchTool, decoupling it from the parent agent's model. The implementation is straightforward and correct, and the new functionality is well-covered by unit tests. My review includes a couple of suggestions for improvement: one regarding a potentially unused parameter in the GoogleSearchTool's initializer, and another to refactor the new tests to reduce code duplication and improve maintainability.

    def __init__(
        self,
        *,
        bypass_multi_tools_limit: bool = False,

Severity: medium

The bypass_multi_tools_limit parameter appears to be unused within the GoogleSearchTool class. The logic in process_llm_request for Gemini 1.x models unconditionally raises a ValueError if other tools are present, and this check does not consult bypass_multi_tools_limit. For Gemini 2.x+ models, multiple tools are supported by default, making the flag seem redundant there as well.

If this parameter is obsolete, consider removing it and the corresponding instance attribute self.bypass_multi_tools_limit to improve code clarity. If it has a purpose that is not immediately apparent, adding a more detailed explanation in the docstring would be helpful.
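
For illustration, a hedged paraphrase of the check this comment describes; the names and the error message here are assumptions, not the actual adk-python source:

    from google.genai import types

    if llm_request.model and llm_request.model.startswith('gemini-1'):
      if llm_request.config.tools:
        # This branch never consults self.bypass_multi_tools_limit, which is
        # why the flag appears to have no effect.
        raise ValueError(
            'Google search tool cannot be used with other tools in Gemini 1.x.'
        )
      llm_request.config.tools.append(
          types.Tool(google_search_retrieval=types.GoogleSearchRetrieval())
      )
    else:
      # Gemini 2.x+ models support multiple tools by default, so the flag is
      # redundant on this path as well.
      llm_request.config.tools.append(
          types.Tool(google_search=types.GoogleSearch())
      )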

@lwangverizon (Contributor, Author) replied:

I will let the ADK team decide if this can be removed.

@lwangverizon marked this pull request as ready for review on January 13, 2026, 16:20.
        llm_request: LlmRequest,
    ) -> None:
      # If a custom model is specified, use it instead of the original model
      if self.model is not None:
Collaborator:

This will actually override the model that the Agent talks to.
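
In other words, process_llm_request mutates the LlmRequest that the agent itself is about to send, so the override applies to the whole turn rather than only to search-result summarization. A hedged fragment to illustrate (model names assumed; tool and ctx stand for a configured GoogleSearchTool and a ToolContext):

    llm_request = LlmRequest(model='gemini-2.5-pro')  # the agent's own request

    # The tool's preprocessing rewrites the shared request in place, so the
    # entire agent turn is now served by the tool's model.
    await tool.process_llm_request(tool_context=ctx, llm_request=llm_request)
    assert llm_request.model == 'gemini-2.5-flash'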

@ryanaiagent self-assigned this on Jan 14, 2026.
@ryanaiagent added the request clarification [Status] label on Jan 14, 2026.

Labels

  • request clarification [Status]: the maintainer needs clarification or more information from the author
  • tools [Component]: this issue is related to tools
