Target Component
External Integrations (LLM/Search APIs)
Enhancement Description
Hi maintainers,
First of all, thank you for building PentAGI. It is a very powerful framework for autonomous security testing.
Currently, PentAGI requires users to configure LLM providers with API keys (OpenAI, Gemini, Anthropic, etc.). While this works well, many developers already have LLM access through subscription-based web accounts, such as:
- ChatGPT (OpenAI subscription)
- Google Gemini (Google account / Gemini Advanced)
In those cases, users cannot easily reuse their existing access because PentAGI expects API keys.
Some newer agent frameworks such as OpenClaw support OAuth-based authentication, allowing users to log in with their account and then route requests through a local gateway.
Technical Details
I implemented a local project that authenticates to ChatGPT via OAuth and calls Codex models using the internal ChatGPT backend endpoint.
The request endpoint used is:
https://chatgpt.com/backend-api/codex/responses
Authentication is handled using an OAuth access token stored locally (for example in ~/.codex/auth.json).
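As a rough illustration of the token-loading step, here is a minimal sketch in Python. The exact JSON layout of `~/.codex/auth.json` is an assumption on my side (a `tokens.access_token` field); the key lookup would need to match the real file format.

```python
import json
from pathlib import Path


def load_access_token(auth_path: Path = Path.home() / ".codex" / "auth.json") -> str:
    """Load the locally stored OAuth access token.

    NOTE: the JSON layout is assumed here ({"tokens": {"access_token": ...}});
    adjust the key lookup if the real auth.json differs.
    """
    data = json.loads(auth_path.read_text())
    return data["tokens"]["access_token"]
```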
However, the request format used by this endpoint is different from the OpenAI API format currently expected by PentAGI.
Example payload structure used by the Codex backend:
```json
{
  "model": "...",
  "instructions": "...",
  "input": "...",
  "store": true,
  "stream": true,
  "temperature": 0.2
}
```
This differs from the standard OpenAI-compatible request shape such as:
```
POST /v1/chat/completions
{
  "model": "...",
  "messages": [...]
}
```
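To make the gap concrete, here is a sketch of how an OpenAI-style chat request could be mapped onto the Codex payload shape. The field mapping (system message → `instructions`, remaining messages → `input`) is an assumption based on the observed payload, not a documented contract.

```python
def chat_to_codex(chat_request: dict) -> dict:
    """Translate an OpenAI-style /v1/chat/completions request body into the
    Codex backend payload shape (assumed field mapping, not documented)."""
    messages = chat_request.get("messages", [])
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    other_parts = [m["content"] for m in messages if m["role"] != "system"]
    return {
        "model": chat_request["model"],
        "instructions": "\n".join(system_parts),
        "input": "\n".join(other_parts),
        "store": True,
        "stream": True,
        "temperature": chat_request.get("temperature", 0.2),
    }
```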
Because PentAGI currently assumes OpenAI-compatible providers, this endpoint cannot be used directly even though it exposes powerful Codex models through OAuth authentication.
Designs and Mockups
One possible solution would be to introduce a pluggable LLM provider adapter layer.
Architecture example:
```
PentAGI
  ↓
LLM provider adapter
  ↓
Custom endpoint (Codex backend / OAuth providers)
  ↓
Model response
```
This would allow PentAGI to support providers that are not strictly OpenAI-compatible while still keeping the existing OpenAI provider implementation unchanged.
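The adapter layer could look roughly like the interface below. This is only a sketch of the contract I have in mind (names and method signature are hypothetical, not existing PentAGI code): PentAGI talks to one interface, and each provider implementation handles its own endpoint, payload shape, and authentication.

```python
from abc import ABC, abstractmethod


class LLMProviderAdapter(ABC):
    """Hypothetical adapter contract: each provider owns its own
    endpoint, payload mapping, and authentication scheme."""

    @abstractmethod
    def complete(self, model: str, system: str, prompt: str) -> str:
        """Return the model's text response for one request."""


class OpenAICompatibleAdapter(LLMProviderAdapter):
    def complete(self, model: str, system: str, prompt: str) -> str:
        # Existing /v1/chat/completions behavior stays behind the interface.
        ...


class CodexBackendAdapter(LLMProviderAdapter):
    def complete(self, model: str, system: str, prompt: str) -> str:
        # Build the Codex "instructions"/"input" payload and attach the
        # OAuth bearer token before sending.
        ...
```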
Alternatively, PentAGI could allow defining:
- custom request endpoint
- custom request payload mapping
- custom authentication headers
This would enable integration with endpoints such as:
https://chatgpt.com/backend-api/codex/responses
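Those three knobs could be expressed as a declarative provider definition. Everything below is illustrative (no such config schema exists in PentAGI today); it just shows that endpoint, auth header, and payload mapping are enough to assemble a request.

```python
# Hypothetical provider definition; all field names are illustrative.
CODEX_PROVIDER = {
    "endpoint": "https://chatgpt.com/backend-api/codex/responses",
    "auth_header": lambda token: {"Authorization": f"Bearer {token}"},
    "payload": lambda model, system, prompt: {
        "model": model,
        "instructions": system,
        "input": prompt,
        "store": True,
        "stream": True,
        "temperature": 0.2,
    },
}


def build_request(provider: dict, token: str, model: str, system: str, prompt: str):
    """Assemble (url, headers, body) from a declarative provider definition."""
    return (
        provider["endpoint"],
        provider["auth_header"](token),
        provider["payload"](model, system, prompt),
    )
```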
Alternative Solutions
Another approach would be to use an adapter/proxy service.
```
PentAGI
  ↓
OpenAI-compatible proxy
  ↓
OAuth-authenticated provider
  ↓
Codex / Gemini / other models
```
The proxy would translate PentAGI's OpenAI-style requests into the Codex backend request format.
This approach keeps PentAGI unchanged while enabling additional provider integrations.
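On the response side, the proxy would also need to wrap the provider's answer back into the `chat.completion` shape PentAGI already understands. A minimal sketch (the exact Codex response format is an assumption; only the text content is carried over):

```python
def to_chat_completion(codex_text: str, model: str) -> dict:
    """Wrap a Codex backend response back into an OpenAI-style
    chat.completion body (only the text content is preserved)."""
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [
            {
                "index": 0,
                "finish_reason": "stop",
                "message": {"role": "assistant", "content": codex_text},
            }
        ],
    }
```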
Verification