Support Filter Chains on type: model listeners (LLM provider proxy) #729

@M4n5ter

Description

Hi, thanks for the project.
Right now it looks like a `filter_chain` can only be attached to agents under `type: agent` listeners. Is it possible (or planned) to support Filter Chains on `type: model` listeners as well?

Our use case is to run input guardrails (and optionally non-streaming output checks) as a transparent gateway for OpenAI-compatible requests (Chat Completions first, later Responses and Anthropic Messages), without forcing users to route through an “agent” layer.

If this isn’t supported, what’s the recommended pattern for “model proxy + guardrails” without introducing an extra passthrough agent hop?

Maybe like this?

```yaml
version: v0.3.0
filters:
  - id: content_guard
    url: http://localhost:10500
    type: http
  - id: audit_logger
    url: http://localhost:10501
    type: mcp
model_providers:
  - model: openai/gpt-5.2
    access_key: $OPENAI_API_KEY
    filter_chain:
      - content_guard
      - audit_logger
listeners:
  - type: model
    name: llm_gateway
    port: 12000
```
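For context, the `content_guard` filter above would be an HTTP service that inspects the incoming request and returns an allow/deny decision. I don't know the project's actual filter wire protocol, so this is just a hypothetical sketch (made-up schema, stdlib only) of the kind of input-guardrail check we'd run against an OpenAI-style Chat Completions payload:

```python
import json

# Hypothetical guardrail logic. The decision schema ({"allow": ..., "reason": ...})
# is illustrative only, not the project's real filter protocol.
BLOCKED_TERMS = {"ssn", "credit card"}

def check_request(body: bytes) -> dict:
    """Inspect a Chat Completions request body; return an allow/deny decision."""
    payload = json.loads(body)
    for message in payload.get("messages", []):
        text = str(message.get("content", "")).lower()
        for term in BLOCKED_TERMS:
            if term in text:
                return {"allow": False, "reason": f"blocked term: {term}"}
    return {"allow": True, "reason": None}
```

The gateway would call this before forwarding to the upstream provider and return an error to the client on deny, which is why we'd prefer the chain to hang off the `type: model` listener rather than an extra agent hop.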
