Hi, thanks for the project.
Right now it looks like `filter_chain` can only be attached to agents under `type: agent` listeners. Is it possible (or planned) to support filter chains on `type: model` listeners as well?
Our use case is to run input guardrails (and optionally non-streaming output checks) as a transparent gateway for OpenAI-compatible requests (Chat Completions first, later Responses and Anthropic Messages), without forcing users to route through an “agent” layer.
If this isn’t supported, what’s the recommended pattern for “model proxy + guardrails” without introducing an extra passthrough agent hop?
Maybe like this?
```yaml
version: v0.3.0

filters:
  - id: content_guard
    url: http://localhost:10500
    type: http
  - id: audit_logger
    url: http://localhost:10501
    type: mcp

model_providers:
  - model: openai/gpt-5.2
    access_key: $OPENAI_API_KEY
    filter_chain:
      - content_guard
      - audit_logger

listeners:
  - type: model
    name: llm_gateway
    port: 12000
```
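For context on what `content_guard` would do, here is a minimal sketch of that HTTP filter service. It assumes a simple contract I'm making up for illustration: the gateway POSTs the Chat Completions request body to the filter and treats HTTP 200 as "allow" and 403 as "block". The real filter protocol (request shape, response codes, mutation support) may well differ, so this is just to show the intent:

```python
# Hypothetical content_guard filter service (assumed contract, see above).
# Listens on the port referenced by the `content_guard` filter entry.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_TERMS = {"ssn", "credit card"}  # toy denylist, for illustration only


def violates_policy(body: dict) -> bool:
    """Return True if any message's text content contains a blocked term."""
    for msg in body.get("messages", []):
        content = msg.get("content")
        if isinstance(content, str) and any(
            term in content.lower() for term in BLOCKED_TERMS
        ):
            return True
    return False


class GuardHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            body = json.loads(self.rfile.read(length) or b"{}")
        except json.JSONDecodeError:
            body = {}
        # Assumed convention: 200 = pass the request through, 403 = reject.
        self.send_response(403 if violates_policy(body) else 200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("localhost", 10500), GuardHandler).serve_forever()
```

The point is that the guardrail stays a sidecar the gateway calls per request, rather than an agent the client has to route through.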