
[Enhancement]: Support OAuth-based LLM authentication (ChatGPT / Google Gemini) or external OAuth gateways #208

@CharanRayudu

Description


Target Component

External Integrations (LLM/Search APIs)

Enhancement Description

Hi maintainers,

First of all, thank you for building PentAGI. It is a very powerful framework for autonomous security testing.

Currently, PentAGI requires users to configure LLM providers with API keys (OpenAI, Gemini, Anthropic, etc.). While this works well, many developers already have LLM access through subscription-based web accounts, such as:

ChatGPT (OpenAI subscription)

Google Gemini (Google account / Gemini Advanced)

In those cases, users cannot easily reuse their existing access because PentAGI expects API keys.

Some newer agent frameworks such as OpenClaw support OAuth-based authentication, allowing users to log in with their account and then route requests through a local gateway.

Technical Details

I implemented a local project that authenticates to ChatGPT via OAuth and calls Codex models using the internal ChatGPT backend endpoint.

The request endpoint used is:

https://chatgpt.com/backend-api/codex/responses

Authentication is handled using an OAuth access token stored locally (for example in ~/.codex/auth.json).
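As a rough illustration, reading that locally stored token could look like the sketch below. The exact layout of auth.json is an assumption here; the real file written by the Codex tooling may use different keys.

```python
import json
from pathlib import Path


def load_access_token(auth_path: str = "~/.codex/auth.json") -> str:
    """Read the locally stored OAuth access token.

    The {"tokens": {"access_token": ...}} layout is assumed for
    illustration; adjust to match the actual auth.json structure.
    """
    data = json.loads(Path(auth_path).expanduser().read_text())
    return data["tokens"]["access_token"]
```

The returned token would then be sent as an `Authorization: Bearer ...` header on requests to the Codex backend endpoint.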

However, the request format used by this endpoint is different from the OpenAI API format currently expected by PentAGI.

Example payload structure used by the Codex backend:

```json
{
  "model": "...",
  "instructions": "...",
  "input": "...",
  "store": true,
  "stream": true,
  "temperature": 0.2
}
```

This differs from the standard OpenAI-compatible request shape such as:

POST /v1/chat/completions

```json
{
  "model": "...",
  "messages": [...]
}
```

Because PentAGI currently assumes OpenAI-compatible providers, this endpoint cannot be used directly even though it exposes powerful Codex models through OAuth authentication.

Designs and Mockups

One possible solution would be to introduce a pluggable LLM provider adapter layer.

Architecture example:

PentAGI → LLM provider adapter → Custom endpoint (Codex backend / OAuth providers) → Model response

This would allow PentAGI to support providers that are not strictly OpenAI-compatible while still keeping the existing OpenAI provider implementation unchanged.
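To make the adapter idea concrete, here is a minimal sketch of what such a layer might look like. All class and method names are hypothetical and not part of the current PentAGI codebase; the point is that the existing OpenAI-compatible behaviour becomes one adapter among many.

```python
from abc import ABC, abstractmethod


class LLMProviderAdapter(ABC):
    """Hypothetical adapter interface for non-OpenAI-compatible providers."""

    @abstractmethod
    def build_request(self, model: str, messages: list[dict]) -> dict:
        """Map PentAGI's internal chat history to the provider payload."""

    @abstractmethod
    def parse_response(self, raw: dict) -> str:
        """Extract the completion text from the provider response."""


class OpenAICompatibleAdapter(LLMProviderAdapter):
    """The existing OpenAI-style behaviour, expressed as one adapter."""

    def build_request(self, model, messages):
        return {"model": model, "messages": messages}

    def parse_response(self, raw):
        return raw["choices"][0]["message"]["content"]


# New providers (Codex backend, OAuth gateways, ...) would register here.
ADAPTERS = {"openai": OpenAICompatibleAdapter()}
```

A Codex backend adapter would implement the same interface but emit the `instructions`/`input` payload shown above instead of a `messages` array.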

Alternatively, PentAGI could allow defining:

  • custom request endpoint
  • custom request payload mapping
  • custom authentication headers

This would enable integration with endpoints such as:

https://chatgpt.com/backend-api/codex/responses
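A declarative version of those three knobs might look like the following config fragment. Every key name here is hypothetical, purely to illustrate the shape such a feature could take:

```yaml
# Hypothetical provider configuration; key names are illustrative only.
providers:
  codex_oauth:
    endpoint: https://chatgpt.com/backend-api/codex/responses
    auth:
      type: oauth_token_file
      token_file: ~/.codex/auth.json
    request_mapping:
      instructions_from: system_messages
      input_from: user_messages
      static:
        store: true
        stream: true
```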

Alternative Solutions

Another approach would be to use an adapter/proxy service.

PentAGI → OpenAI-compatible proxy → OAuth-authenticated provider → Codex / Gemini / other models

The proxy would translate PentAGI's OpenAI-style requests into the Codex backend request format.

This approach keeps PentAGI unchanged while enabling additional provider integrations.
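The core of such a proxy is the request translation. Below is a stdlib-only sketch: it accepts OpenAI-style `/v1/chat/completions` bodies and rewrites them into the Codex payload shape shown earlier. The handler only echoes the translated payload; a real proxy would forward it to the Codex endpoint with the OAuth bearer token and stream the reply back. Field names on the Codex side beyond the example payload are assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler


def translate_openai_to_codex(body: dict) -> dict:
    """Rewrite an OpenAI chat-completions body into the Codex shape."""
    messages = body.get("messages", [])
    system = "\n".join(m["content"] for m in messages if m["role"] == "system")
    rest = "\n".join(m["content"] for m in messages if m["role"] != "system")
    return {
        "model": body.get("model", ""),
        "instructions": system,   # system prompts -> "instructions"
        "input": rest,            # remaining turns -> "input"
        "store": True,
        "stream": bool(body.get("stream", False)),
        "temperature": body.get("temperature", 0.2),
    }


class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        codex_payload = translate_openai_to_codex(body)
        # A real proxy would POST codex_payload to the Codex backend
        # here; this sketch just returns the translated payload.
        out = json.dumps(codex_payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)
```

Serving it locally would only need `http.server.HTTPServer(("127.0.0.1", 8001), ProxyHandler).serve_forever()`, after which PentAGI could be pointed at the proxy as a regular OpenAI-compatible base URL.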

Verification

  • I have checked that this enhancement hasn't been already proposed
  • This enhancement aligns with PentAGI's goal of autonomous penetration testing
  • I have considered the security implications of this enhancement
  • I have provided clear use cases and benefits

Metadata

Labels: enhancement (New feature or request)