
Conversation


@stonyme stonyme commented Jan 5, 2026

Fixes LiteLLMEmbeddingEngine not passing EMBEDDING_DIMENSIONS to litellm.

Changes:

  • Added dimensions=self.dimensions parameter to the litellm.aembedding() call in the embed_text() method of LiteLLMEmbeddingEngine.py.

Why?
This ensures that custom embedding dimensions configured in cognee (e.g., 512 instead of default 3072) are respected by LiteLLM, preventing dimension mismatches in downstream vector databases like Qdrant or Pinecone.

Impact:

  • Actual embeddings now match the configured self.dimensions.
  • Only affects production mode; mock mode remains unchanged.
  • Tested with OpenAI models that support the dimensions parameter.

Closes #1961.
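
For reference, here is a minimal sketch of the change described above, reconstructed from the diff hunks CodeRabbit quotes later in this thread. The class scaffolding, the 3072 mock fallback, and names other than `dimensions` are illustrative assumptions, not the actual cognee source; only the kwargs-based forwarding mirrors the quoted hunks.

```python
# Minimal, self-contained sketch of the described change (assumptions noted above).
from typing import List, Optional

import litellm


class EmbeddingEngineSketch:
    def __init__(self, model: str, dimensions: Optional[int] = None, mock: bool = False):
        if dimensions is not None:
            if not isinstance(dimensions, int) or dimensions <= 0:
                raise ValueError("dimensions must be a positive integer")
        self.model = model
        self.dimensions = dimensions
        self.mock = mock

    async def embed_text(self, text: List[str]) -> List[List[float]]:
        if self.mock:
            # Mock mode: zero vectors sized to the configured dimensions.
            return [[0.0] * (self.dimensions or 3072) for _ in text]

        kwargs = {}
        if self.dimensions is not None:
            # Forward the configured size so supporting providers (e.g. OpenAI
            # text-embedding-3 models) return vectors of matching length.
            kwargs["dimensions"] = self.dimensions

        response = await litellm.aembedding(model=self.model, input=text, **kwargs)
        # EmbeddingResponse.data holds dicts with an "embedding" key.
        return [item["embedding"] for item in response.data]
```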

Summary by CodeRabbit

  • Bug Fixes
    • Enhanced validation for dimensions parameter in the embedding engine to ensure only positive integer values are accepted, raising an error for invalid inputs.
    • Improved parameter handling to properly forward dimensions configuration to embedding operations when provided.


Commits:

  • Fixes topoteretes#1961. (…ons-passing-issue-#1961)
  • Add dimensions parameter to embedding request: remove dimensions initialization from constructor and add validation for dimensions if provided. (…teLLMEmbeddingEngine)
  • Refactor dimensions handling in LiteLLMEmbeddingEngine

pull-checklist bot commented Jan 5, 2026

Please make sure all the checkboxes are checked:

  • I have tested these changes locally.
  • I have reviewed the code changes.
  • I have added end-to-end and unit tests (if applicable).
  • I have updated the documentation and README.md file (if necessary).
  • I have removed unnecessary code and debug statements.
  • PR title is clear and follows the convention.
  • I have tagged reviewers or team members for feedback.

Contributor

coderabbitai bot commented Jan 5, 2026

Walkthrough

The PR enhances LiteLLMEmbeddingEngine by adding validation for the dimensions parameter in the constructor and conditionally forwarding it to litellm.aembedding during embedding operations. When provided, dimensions must be a positive integer.

Changes

Cohort: Embedding Engine Enhancement
File(s): cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py
Summary: Added constructor validation for the dimensions parameter (must be a positive integer if provided); replaced the unconditional assignment with a guarded assignment; modified embed_text to conditionally include dimensions in the kwargs passed to the litellm.aembedding call.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes


Suggested reviewers

  • Vasilije1990

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title clearly and specifically describes the main fix, passing EMBEDDING_DIMENSIONS to litellm so that custom embedding dimensions take effect.
  • Description check ✅ Passed: the description covers the core changes and reasoning, but does not fully complete the template's Acceptance Criteria, Type of Change checkbox, and Pre-submission Checklist sections.


@github-actions github-actions bot left a comment


Hello @stonyme, thank you for submitting a PR! We will respond as soon as possible.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b339529 and bd88f77.

📒 Files selected for processing (1)
  • cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py:

  • Use 4-space indentation in Python code
  • Use snake_case for Python module and function names
  • Use PascalCase for Python class names
  • Use ruff format before committing Python code
  • Use ruff check for import hygiene and style enforcement, with line-length 100 configured in pyproject.toml
  • Prefer explicit, structured error handling in Python code

Files:

  • cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py

⚙️ CodeRabbit configuration file

**/*.py: When reviewing Python code for this project:

  1. Prioritize portability over clarity, especially when dealing with cross-Python compatibility. However, with the priority in mind, do still consider improvements to clarity when relevant.
  2. As a general guideline, consider the code style advocated in the PEP 8 standard (excluding the use of spaces for indentation) and evaluate suggested changes for code style compliance.
  3. As a style convention, consider the code style advocated in CEP-8 and evaluate suggested changes for code style compliance.
  4. As a general guideline, try to provide any relevant, official, and supporting documentation links to any tool's suggestions in review comments. This guideline is important for posterity.
  5. As a general rule, undocumented function definitions and class definitions in the project's Python code are assumed incomplete. Please consider suggesting a short summary of the code for any of these incomplete definitions as docstrings when reviewing.

Files:

  • cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py
cognee/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Use shared logging utilities from cognee.shared.logging_utils in Python code

Files:

  • cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py
cognee/{modules,infrastructure,tasks}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Co-locate feature-specific helpers under their respective package (modules/, infrastructure/, or tasks/)

Files:

  • cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py

Comment on lines +81 to +84
if dimensions is not None:
    if not isinstance(dimensions, int) or dimensions <= 0:
        raise ValueError("dimensions must be a positive integer")
self.dimensions = dimensions
Contributor


⚠️ Potential issue | 🟡 Minor

Handle the dimensions=None case for mock mode.

The validation allows dimensions=None to pass through (the check on line 81 only validates when dimensions is not None). However, if dimensions=None is explicitly passed, line 114 in mock mode will fail with TypeError when attempting [0.0] * None.

Consider either rejecting None values in validation or ensuring mock mode handles None gracefully (e.g., falling back to a default dimension).

🔎 Suggested fix: reject None or use default in mock mode

Option 1: Reject None in validation

-        if dimensions is not None:
-            if not isinstance(dimensions, int) or dimensions <= 0:
-                raise ValueError("dimensions must be a positive integer")
+        if dimensions is None:
+            dimensions = 3072  # Use default
+        if not isinstance(dimensions, int) or dimensions <= 0:
+            raise ValueError("dimensions must be a positive integer")
         self.dimensions = dimensions

Option 2: Handle None in mock mode

         if self.mock:
-            response = {"data": [{"embedding": [0.0] * self.dimensions} for _ in text]}
+            dim = self.dimensions if self.dimensions is not None else 3072
+            response = {"data": [{"embedding": [0.0] * dim} for _ in text]}
             return [data["embedding"] for data in response["data"]]

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py
around lines 81-84, The constructor validation in LiteLLMEmbeddingEngine allows
dimensions to be None which later causes a TypeError in mock mode when building
embeddings (e.g., the list replication at the mock path around the code using
self.dimensions); fix by rejecting None: in the __init__ of
LiteLLMEmbeddingEngine enforce that dimensions is an int > 0 (raise ValueError
if dimensions is None or not a positive int) so mock-mode code that uses
self.dimensions never receives None.

Comment on lines +118 to +120
kwargs = {}
if self.dimensions is not None:
    kwargs['dimensions'] = self.dimensions
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, locate and read the specific file
find . -name "LiteLLMEmbeddingEngine.py" -type f

Repository: topoteretes/cognee

Length of output: 139


🏁 Script executed:

# Read the file to understand the context around lines 118-120 and 127
cat -n cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py | head -150 | tail -50

Repository: topoteretes/cognee

Length of output: 2488


🌐 Web query:

litellm aembedding dimensions parameter model support

💡 Result:

Short answer: LiteLLM provides an async embedding function aembedding(model=..., input=...) and accepts an optional dimensions integer parameter to request a specific output size — but that parameter is only supported for certain providers/models (not all). In the official docs: dimensions is supported for OpenAI/Azure text-embedding-3 models (and generally only when the upstream provider/model supports it). [1][2]

Sources:

  • LiteLLM docs — async embedding and optional dimensions (OpenAI/Azure text-embedding-3 support). [1]
  • LlamaIndex integration notes — LitellmEmbedding exposes a dimensions arg and notes support for text-embedding-3 models. [2]

🌐 Web query:

litellm error handling unsupported parameters

💡 Result:

LiteLLM rejects parameters a target provider/model doesn’t support and raises UnsupportedParamsError by default. Workarounds:

  • Globally drop unsupported params: set litellm.drop_params = True (or litellm_settings: drop_params: true for proxies). This causes LiteLLM to silently remove unsupported params instead of raising. [1]
  • Whitelist specific OpenAI-style params per request: include allowed_openai_params = ["param_name"] in the payload to let those pass through. (Used in practice when a provider actually supports the param but LiteLLM blocks it.) [2]
  • As a last resort, strip unsupported keys yourself before calling litellm.completion (or monkey-patch) if you need custom behavior. Community issues show examples and bugs where params like tool_choice, seed, temperature, n, etc. were flagged and these workarounds were used. [2][3]

References:

  • LiteLLM docs — Drop unsupported params. [1]
  • GitHub issues demonstrating UnsupportedParamsError and the allowed_openai_params workaround. [2][3]

Add error handling for unsupported dimensions parameter with non-OpenAI models.

The dimensions parameter is only supported by OpenAI/Azure text-embedding-3 models. When passed to unsupported models (Gemini, Mistral, HuggingFace, etc.), litellm raises UnsupportedParamsError, which is not currently caught. The code only catches ContextWindowExceededError (line 132).

To fix this, either:

  1. Add exception handling for litellm.exceptions.UnsupportedParamsError, or
  2. Set litellm.drop_params = True to silently remove unsupported parameters, or
  3. Add model compatibility checks before passing dimensions and validate against a whitelist of supported models
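
Below is a minimal sketch of options 1 and 2, assuming litellm.exceptions.UnsupportedParamsError and litellm.drop_params behave as described in the web query results above; the helper name and error message are illustrative, not part of the PR.

```python
import litellm

# Option 2 (global): let litellm silently drop parameters the target model
# does not support, instead of raising UnsupportedParamsError.
# litellm.drop_params = True


async def embed_with_optional_dimensions(model, text, dimensions=None):
    """Call litellm.aembedding, surfacing a clear error if `dimensions` is rejected."""
    kwargs = {"model": model, "input": text}
    if dimensions is not None:
        kwargs["dimensions"] = dimensions
    try:
        return await litellm.aembedding(**kwargs)
    except litellm.exceptions.UnsupportedParamsError as error:
        # Option 1: catch the provider-specific rejection and re-raise with context.
        raise ValueError(
            f"Model {model!r} rejected the `dimensions` parameter; unset "
            "EMBEDDING_DIMENSIONS or use an embedding model that supports it."
        ) from error
```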
🤖 Prompt for AI Agents
In @cognee/infrastructure/databases/vector/embeddings/LiteLLMEmbeddingEngine.py
around lines 118-120, The code in LiteLLMEmbeddingEngine where kwargs is built
and litellm is called must handle unsupported `dimensions` to avoid uncaught
litellm.exceptions.UnsupportedParamsError: either add a try/except around the
litellm embedding call to catch litellm.exceptions.UnsupportedParamsError (in
addition to ContextWindowExceededError) and log/raise a clear message, or before
adding `dimensions` check model compatibility inside LiteLLMEmbeddingEngine (use
a whitelist of OpenAI/Azure text-embedding-3 model names) and only include
`kwargs['dimensions']` for those models, or set `litellm.drop_params = True`
prior to calling the API to silently remove unsupported params; implement one of
these strategies in the method that constructs kwargs and invokes litellm so
UnsupportedParamsError is avoided or handled.

@stonyme stonyme closed this Jan 5, 2026