Consolidated model evaluation framework for LLM benchmarking with Ollama (see the minimal usage sketch after this list)
Factor(UT): Controlling Untrusted AI Decomposers — AAAI 2026 workshop paper on monitoring untrusted decomposition in code generation workflows.
Automated prompt optimization for Inspect AI via structured failure analysis (a failure-filtering sketch follows this list)
Run inspect_ai evals via Claude Code CLI — use your Claude subscription instead of per-token API billing
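The first entry above pairs Inspect AI with locally served Ollama models. As a minimal sketch of what such a benchmark run can look like (the task name, the single arithmetic sample, and the llama3.1 model tag are illustrative assumptions, not taken from that repo):

```python
# Minimal sketch: benchmarking a local Ollama model with Inspect AI.
# Assumes Ollama is running locally and a model tagged "llama3.1" has
# been pulled; the task and sample below are illustrative only.
from inspect_ai import Task, eval, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def arithmetic_smoke_test():
    # One-sample task: the exact() scorer checks that the model's
    # answer matches the target string exactly.
    return Task(
        dataset=[
            Sample(
                input="What is 12 * 8? Answer with the number only.",
                target="96",
            )
        ],
        solver=[generate()],
        scorer=exact(),
    )

if __name__ == "__main__":
    # The "ollama/<model>" prefix routes requests to a local Ollama server.
    eval(arithmetic_smoke_test(), model="ollama/llama3.1")
```

The same task can also be run from the command line with `inspect eval <file> --model ollama/llama3.1`, which writes an eval log that tools like the one below can analyze.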
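For the prompt-optimization entry, one basic building block of structured failure analysis is filtering a completed eval log down to its failing samples so their inputs can feed an optimization loop. A minimal sketch, assuming a scorer that records CORRECT/INCORRECT values (the log path is hypothetical, and this is not the linked repo's actual pipeline):

```python
# Minimal sketch: collect failing samples from an Inspect AI eval log
# for downstream failure analysis. The log path is hypothetical.
from inspect_ai.log import read_eval_log
from inspect_ai.scorer import CORRECT

log = read_eval_log("logs/2026-04-02_arithmetic_smoke_test.eval")

failures = []
for sample in log.samples or []:
    # Each sample may carry one score per scorer; treat anything other
    # than the CORRECT ("C") value as a failure worth inspecting.
    for name, score in (sample.scores or {}).items():
        if score.value != CORRECT:
            failures.append(
                {
                    "scorer": name,
                    "input": sample.input,
                    "target": sample.target,
                    "answer": score.answer,
                    "explanation": score.explanation,
                }
            )

print(f"{len(failures)} failing samples to analyze")
```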