## Supported Versions

| Version | Supported |
|---|---|
| 0.x.x | ✅ |
## Reporting a Vulnerability

We take security vulnerabilities in Augustus seriously. If you discover a security issue, please report it responsibly.
Do NOT open a public GitHub issue for security vulnerabilities.
Instead, please report security vulnerabilities by emailing:
Include the following information in your report:
- Description: A clear description of the vulnerability
- Impact: What an attacker could achieve by exploiting this vulnerability
- Reproduction Steps: Step-by-step instructions to reproduce the issue
- Affected Versions: Which versions of Augustus are affected
- Suggested Fix: If you have a suggested fix, please include it
## What to Expect

- Acknowledgment: We will acknowledge receipt of your report within 48 hours
- Assessment: We will assess the vulnerability and determine its severity within 7 days
- Updates: We will keep you informed of our progress toward a fix
- Credit: We will credit you in the release notes (unless you prefer to remain anonymous)
## Disclosure Policy

- We follow a 90-day coordinated disclosure timeline
- We will work with you to understand and resolve the issue
- We will not take legal action against researchers who follow this policy
## Security Best Practices

When using Augustus for security testing:

### API Key Management
- Never commit API keys to version control
- Use environment variables or secure secret management
- Rotate API keys regularly
- Use separate keys for development and production testing
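If you wrap Augustus in your own automation, the same fail-fast pattern can be applied in Python. This is a hypothetical helper for your scripts, not part of the Augustus API; the variable name `OPENAI_API_KEY` is the one the OpenAI SDK conventionally reads.

```python
import os

def require_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment, failing loudly if absent."""
    key = os.environ.get(var_name)
    if not key:
        # Fail fast rather than falling back to a hardcoded key
        raise RuntimeError(
            f"{var_name} is not set; export it instead of hardcoding the key"
        )
    return key
```

Failing fast here keeps a missing key from silently degrading into a hardcoded fallback later in the script.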
```bash
# Good: Environment variables
export OPENAI_API_KEY="sk-..."
augustus scan openai.OpenAI --probe dan.Dan

# Bad: Hardcoded in command or config
augustus scan openai.OpenAI --config '{"api_key":"sk-..."}'  # Don't do this
```

### Authorized Testing

- Only test LLM systems you own or have explicit permission to test
- Obtain written authorization before testing third-party systems
- Document your testing scope and methodology
### Handling Scan Results

- Scan results may contain sensitive information
- Store output files securely
- Do not share results publicly without redacting sensitive data
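Before sharing scan output, one option is a simple redaction pass. This is a minimal sketch, not an Augustus feature: the `sk-` pattern matches OpenAI-style keys only, so extend it for other providers you test against.

```python
import re

# Matches OpenAI-style keys (sk- followed by 8+ key characters).
API_KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{8,}")

def redact(text: str) -> str:
    """Replace anything that looks like an API key with a placeholder."""
    return API_KEY_PATTERN.sub("sk-[REDACTED]", text)
```

Run redaction on output files before attaching them to issues or reports; it does not catch keys split across lines or provider formats outside the pattern.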
### Adversarial Content

Augustus intentionally generates adversarial prompts to test LLM security. This means:
- Responses from tested LLMs may contain harmful, offensive, or inappropriate content
- This is expected behavior when testing for vulnerabilities
- Review output in a secure, isolated environment
### Network Access

- Augustus makes network requests to LLM provider APIs
- Ensure your network policies allow outbound HTTPS connections
- Consider using a dedicated testing environment
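A quick preflight check can confirm your environment allows the outbound HTTPS connections Augustus needs. This sketch is an assumption-laden example: `api.openai.com` stands in for whichever provider endpoint you are testing.

```python
import socket
import ssl

def can_reach(host: str = "api.openai.com", port: int = 443,
              timeout: float = 5.0) -> bool:
    """Return True if an outbound TLS connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            # Completing the TLS handshake confirms HTTPS egress, not just TCP
            with ssl.create_default_context().wrap_socket(
                sock, server_hostname=host
            ):
                return True
    except OSError:
        # Covers DNS failures, timeouts, refused connections, and TLS errors
        return False
```

Running this from the dedicated testing environment before a long scan avoids discovering a blocked egress policy halfway through.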
## Security Updates

Security updates will be released as patch versions. Subscribe to GitHub releases to be notified of updates.
## Questions

For security-related questions that are not vulnerability reports, you can:
- Open a GitHub Discussion
- Email security@praetorian.com
Thank you for helping keep Augustus and its users secure.