A lightweight desktop chat application that integrates with OpenAI models and provides a simple GUI for chatting, importing documents for context, exporting chat history, and generating images. The UI is implemented in Python using Tkinter and the OpenAI Python SDK.
- Default assistant name: `Jeeves` (configurable in `config.py`)
- Default model: `gpt-5-mini` (configurable in `config.py`)
- Chat with an OpenAI model using a desktop GUI (Tkinter).
- Streaming assistant responses in the chat window for a responsive feel.
- Threaded networking so the UI remains responsive.
- Simple animated "Thinking..." indicator while requests are in flight.
- Export chat history to a timestamped `.txt` file.
- Import files for context: `.txt`, `.md`, `.docx`, `.pdf`, `.xlsx`, and any text-based file (a preview is shown and the file is appended to the conversation). For unsupported file types, the program will attempt to read them as text and shows an error if unsuccessful.
- Import images: `.png`, `.jpg`, `.jpeg`, `.webp`, `.gif`, `.bmp` (other formats may be partially supported but may fail compression). Images are compressed on upload.
- For unsupported image formats, the program will attempt to read them, but compression may fail.
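Image compression targets `MAX_UPLOADED_IMAGE_DIMENSION` (see Configuring the Program). A minimal sketch of the dimension math, assuming the app scales the longest side down to that limit while preserving aspect ratio (the helper name is hypothetical):

```python
def scaled_dimensions(width: int, height: int, max_dim: int = 500) -> tuple:
    """Compute the size an image would be scaled to so that its
    longest side does not exceed max_dim (aspect ratio preserved)."""
    longest = max(width, height)
    if longest <= max_dim:
        return width, height  # already small enough; no compression needed
    scale = max_dim / longest
    return round(width * scale), round(height * scale)
```

For example, a 1000x500 upload would be scaled to 500x250 with the default limit of 500.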
- Generate images from prompts (256x256, 512x512, 1024x1024); images saved to disk and displayed inline.
- Model selection menu: switch which chat model you are talking to at runtime without losing conversation context.
- Switching models preserves the conversation context: the new model receives the full conversation history for subsequent requests.
- Simple configuration via `config.py`.
- In-app management of the API key through the Key menu: set, remove, or test the API key.
- Python 3.8+ (3.10+ recommended)
- Network access to the OpenAI API and an API key (see OpenAI Docs)
- Tkinter (usually included with Python; see OS-specific section below if missing)
- See `requirements.txt` for Python dependencies and testing tools.
- Clone the repo:

  ```
  git clone https://github.com/DavidMiles1925/openai_chat
  cd openai_chat
  ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- (Optional) If you'd like to change any settings, configure the app using the information in Configuring the Program.
- Run the program using the commands in Running the App.
- IMPORTANT!! Set your API key using the instructions in the section below titled Set the API Key.
In the console:

```
python ai_gui.py
```

The console will open and there will be a brief pause as the program loads. The GUI will then open. Be sure to set your API key, then type messages in the bottom input area and press "Send".
NOTE: You will need an active OpenAI key. See the OpenAI Quickstart Page for more information.
You can provide the app with an OpenAI API key in two ways:
Option 1. In-app (RECOMMENDED: Within the app, click the Key Menu and select Set API Key)
Option 2. (ONLY IF you need to conceal the key from the user.) Environment variable (OPENAI_KEY). See Advanced API Key Management
config.py is used to control basic settings in the app.
NOTE: All model names must match the OpenAI documentation EXACTLY!
| Constant | Description | Default Value |
|---|---|---|
| DEFAULT_MODEL_VERSION | The model that will be selected for use when the program is opened. | gpt-5-mini |
| AVAILABLE_MODELS | The models that will appear in the list of chat models to choose from. | gpt-5, gpt-5-mini, gpt-5-nano, gpt-4o-mini |
| VISION_CAPABLE_MODELS | Models that can accept images as uploads. | gpt-5, gpt-5-mini, gpt-5-nano, gpt-4o-mini |
| IMAGE_MODEL_VERSION | The model that will be used for image generation. | gpt-image-1 |
| MAX_UPLOADED_IMAGE_DIMENSION | The maximum dimension for uploaded images. Images will be compressed to this size. | 500 |
| ASSISTANT_NAME | This will be what the agent refers to itself as. | Jeeves |
| OPENING_PROMPT | This is how the model is told to behave. This is how you can tailor the tone and feel of the conversation. | You are chatting with a user. Respond helpfully. Your name is {ASSISTANT_NAME} |
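For reference, a `config.py` matching the table above might look like this (a sketch built from the default values listed; the exact file in the repo may differ):

```python
# config.py -- sketch of the settings described in the table above.
DEFAULT_MODEL_VERSION = "gpt-5-mini"           # model selected at startup
AVAILABLE_MODELS = ["gpt-5", "gpt-5-mini", "gpt-5-nano", "gpt-4o-mini"]
VISION_CAPABLE_MODELS = ["gpt-5", "gpt-5-mini", "gpt-5-nano", "gpt-4o-mini"]
IMAGE_MODEL_VERSION = "gpt-image-1"            # used only for image generation
MAX_UPLOADED_IMAGE_DIMENSION = 500             # uploads compressed to this size
ASSISTANT_NAME = "Jeeves"
OPENING_PROMPT = (
    "You are chatting with a user. Respond helpfully. "
    f"Your name is {ASSISTANT_NAME}"
)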
- Chat area: top pane shows the conversation (read-only). Assistant messages stream into this area as they arrive.
- Input box: type your message in the lower text box and click "Send".
- Send button: disabled while a request is in progress.
- Status label: shows "Idle" or "Thinking..." with animated dots.
- Menu → File:
  - Export Chat: saves a timestamped `.txt` file containing the chat contents in the directory the program was run from.
  - Import File(s): choose file(s) (`.txt`, `.md`, `.docx`, or `.pdf`). The content is appended to the conversation as a user message, and a truncated preview is shown in the chat.
  - Generate Image...: opens an image prompt dialog to generate images (saved to a folder and inserted into the chat).
  - Attach Image(s): choose image(s) (`.png`, `.jpg`, `.jpeg`, `.webp`, `.gif`, `.bmp`; other formats may be partially supported but may fail compression).
  - Clear Attached Images: clears any images that have not yet been sent from memory.
- Menu → Model:
- Select which chat model to use for subsequent messages.
- Available models (menu items): gpt-5, gpt-5-mini, gpt-5-nano, gpt-4o-mini.
- The selected model is used for requests when you press "Send". The app keeps the conversation intact when switching models; the next model receives the full conversation history.
- Menu → Key:
- Set API Key: sets the API key, with the option to use it for the current session only or to persist it.
- Remove API Key: clears the API key.
- Test API Key: sends a small API call to validate the key.
- The model can be switched via the `Model` menu. Changing the selected model from the menu does not clear or alter the conversation history; it only changes which model will receive the next request.
- The GUI's Model menu uses the list of available model names. See the Model Guide.
- On startup, the app prefers `DEFAULT_MODEL_VERSION` from `config.py` as the initial selection if it matches one of the available menu options.
- Image generation still uses `IMAGE_MODEL_VERSION` from `config.py` (unchanged).
- The app warns if a file exceeds an arbitrary threshold (default ~200,000 characters in code). Models have token limits; consider summarizing or chunking large files before sending.
- Imported documents are appended as a user message so the model can use them as context for subsequent messages.
- The preview for imported files is truncated beyond a limit (configurable in code).
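Chunking an oversized document before sending can be as simple as splitting on a character budget. A hypothetical helper (not part of the app) using the ~200,000-character threshold mentioned above:

```python
def chunk_text(text: str, chunk_size: int = 200_000) -> list:
    """Split a document into pieces no longer than chunk_size characters,
    so each piece stays under the app's large-file warning threshold."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

Each chunk can then be sent as a separate user message, or summarized first. Note that characters are only a rough proxy for tokens.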
File Import Methodology
- Text files: `.txt`, `.md` — read as UTF-8 (with replacement on encoding errors).
- Word documents: `.docx` — read using `python-docx` (paragraphs joined with newlines).
- PDF: read using `pypdf`; pages joined with double newlines.
- Excel: `.xlsx` — read using `pandas` and `openpyxl`.
- Unknown extensions: attempt to read as text and display an error if not readable.
- The app uses the Images API (the config references model `gpt-image-1`).
- Sizes: 256x256, 512x512, 1024x1024.
- The app attempts to handle responses containing either base64 (`b64_json`) or a URL, and saves images to the selected folder as `image-YYYYmmdd-HHMMSS.png`.
- Images are displayed inline; the app keeps references to Tk PhotoImage objects to prevent garbage collection.
- Image generation is not affected by the Model menu and continues to use `IMAGE_MODEL_VERSION` from `config.py`.
**gpt-5 (full)**

When it shines
- Multi-file reasoning (e.g., “Here’s my repo, fix X” and it remembers relationships between files).
- Deep debugging of subtle logic or performance issues.
- Refactoring big chunks of code while keeping style consistent.
- Explaining advanced algorithms in detail before coding them.
Downsides
- More expensive per call.
- Slower than mini/nano, so quick “one-liner” requests feel overkill.
**gpt-5-mini**

When it shines
- Day-to-day Python work: writing functions, fixing bugs, writing tests.
- Understanding short/medium snippets and producing clean solutions.
- Good balance of cost vs correctness — often 90–95% as good as full gpt-5 for typical coding tasks.
Limits
- Can stumble on very tricky multi-step logic or nuanced cross-file dependencies.
- More likely than full gpt-5 to “hallucinate” library methods that don’t exist (but still way better than nano).
**gpt-5-nano**

When it shines
- Super quick “generate boilerplate” tasks:
- Create a FastAPI endpoint skeleton.
- Make a pandas dataframe example.
- Write a regex for X.
- Very cheap for iterative trial-and-error stuff.
Limits
- Reasoning depth noticeably lower — more manual correction required.
- Not great at debugging tricky problems or explaining advanced concepts.
- Shorter context means you can’t dump in a whole module.
**gpt-4o-mini**

- Better than gpt-5-nano at multi-step reasoning and edge cases.
- Comparable to gpt-5-mini for many everyday Python tasks — especially short-to-medium ones.
- Behind gpt-5-mini and gpt-5 in:
- Handling very large contexts.
- Understanding complex, intertwined logic across files.
- Producing fully correct answers on first try for tricky bugs.
- GPT-4o-mini is very fast — close to gpt-5-nano speeds.
- GPT-5-mini is slightly slower but still quick enough for most dev work.
- GPT-5 is noticeably slower for small requests.
Current OpenAI pricing at the time of writing (rounded per 1M tokens):
- GPT-5 (full): ~$1.25 input / ~$10.00 output
- GPT-5-mini: ~$0.25 input / ~$2.00 output
- GPT-4o-mini: ~$0.15 input / ~$0.60 output
- GPT-5-nano: ~$0.05 input / ~$0.40 output
Default to gpt-5-mini for most coding help — best cost/accuracy trade-off.
Switch to gpt-5 only when:
- You’re stuck debugging something subtle and already tried mini.
- You’re doing big multi-file refactors.
- You need deeper conceptual explanation.
Use gpt-5-nano for:
- Tiny, low-risk generation tasks.
- Rapid back-and-forth where you’ll manually review anyway.
The environment variable that the program will look for (after checking the keyring) is called `OPENAI_KEY`.
Set API Key

```
setx OPENAI_KEY "sk-..."
```

Remove API Key

```
[Environment]::SetEnvironmentVariable("OPENAI_KEY", $null, "User")
```

- Set your API key in shell:

```
export OPENAI_KEY="sk-..."
```

Set your API key:

```
export OPENAI_KEY="sk-..."
```

Install tkinter:

- Debian / Ubuntu:

  ```
  sudo apt update
  sudo apt install python3-tk
  ```

- Fedora:

  ```
  sudo dnf install python3-tkinter
  ```

- Arch:

  ```
  sudo pacman -S tk
  ```

Remove API Key
- To remove a persistent export in your shell startup files (`~/.bashrc`, `~/.profile`, `~/.bash_profile`, `~/.zshrc`):
  - Manually open the file and remove the line like: `export OPENAI_KEY="…"`
  - Or remove it with sed (make a backup first):

    ```
    cp ~/.bashrc ~/.bashrc.bak
    sed -i '/^export OPENAI_KEY=/d' ~/.bashrc
    ```

    Then log out/in or source the file:

    ```
    source ~/.bashrc
    ```
- If it was set in `/etc/environment` or `~/.pam_environment`:

  ```
  sudo cp /etc/environment /etc/environment.bak
  sudo sed -i '/^OPENAI_KEY=/d' /etc/environment
  ```

  Then log out/in for the changes to take effect.
- Sources the app will use for an API key (in priority order):
  1. Key stored in the OS keyring (via the `keyring` Python package).
  2. Session key, if the user set a key for the current session via the Set API Key dialog (and did not choose to save it).
  3. Environment variable `OPENAI_KEY` (if present).
- When you Set API Key in the app and choose "Remember", the key is saved into the OS keychain via `keyring`. If you elect not to remember it, the key is used for the current session only (kept in memory) and is cleared on app restart.
- The app validates keys before saving (it makes a tiny, cheap API call).
- The Send and Generate Image controls are only enabled when a usable key is present (session/keyring or environment variable).
- If you use the Forget API Key menu action, the app will:
  - Delete the key stored in the OS keychain (if any).
  - Clear any in-memory session key and re-initialize the API client.
- However, if you have a persistent `OPENAI_KEY` environment variable set in your OS (for example in your shell profile or system environment), the SDK will continue to see and use it even after you forget the key in the keychain. In other words: Forget API Key removes the key from the keychain and clears the session, but it cannot and will not remove a persistent `OPENAI_KEY` environment variable.
- To completely stop the app from using an API key that came from an environment variable, remove that variable from your system (see Troubleshooting below).
- Authentication / invalid API key:
  - Ensure `OPENAI_KEY` is set and valid (no stray quotes, no trailing spaces).
  - Ensure your API key has funds available.
- Forget API Key appears ineffective after restart:
- If you previously set a persistent environment variable OPENAI_KEY, the app will still pick that up after you forget the key in the keychain. Deleting the keychain entry does not delete environment variables. Remove the OPENAI_KEY environment variable from your system to fully stop the app from seeing that key.
- To clear the environment variable:
- Windows (PowerShell session only): $env:OPENAI_KEY = $null
- Windows (remove persistent user variable, PowerShell): [Environment]::SetEnvironmentVariable("OPENAI_KEY", $null, "User") (You may need to sign out / sign in or restart to fully remove persistent variables from new sessions.)
- macOS / Linux (bash/zsh): Remove or edit any lines in ~/.bashrc, ~/.bash_profile or ~/.zshrc that export OPENAI_KEY; then restart the terminal or source the file.
- Keyring backend issues on Linux:
- On some minimal Linux desktop setups no secret service may be installed; keyring can fall back to an insecure local file backend. If you rely on secure storage, install Secret Service / GNOME Keyring / KWallet support. Optionally install the secretstorage package to improve Secret Service integration (may require dbus).
- `ModuleNotFoundError: No module named 'tkinter'`:
  - Follow the OS-specific instructions above to install Tk support.
- PDF imports produce no text:
- The PDF may be scanned images. Use OCR or obtain a text-based PDF.
- Streaming doesn't show output:
- Check network connectivity and the installed OpenAI SDK version. Run from a terminal to view stack traces (easier to debug).
- Image generation returns no image data:
- Inspect console logs for the raw API response. Response fields vary across SDK/API versions.
- Model switching oddities:
- Different models have different capabilities and token limits. Switching to a model with a smaller context window may cause long conversation histories to exceed that model's limit; consider clearing or saving/loading conversations if you need to use a model with a much smaller context window.
- If a model is unavailable or unsupported by your account, API requests will fail — check the error printed to the terminal.
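If a long history exceeds a smaller model's context window, one workable approach is to keep only the most recent messages within a budget. A hypothetical sketch (the app itself does not do this; a character count is only a rough proxy for tokens):

```python
def trim_history(messages: list, max_chars: int = 8000) -> list:
    """Keep the most recent messages whose combined content length
    fits within max_chars. Always keeps at least one message."""
    kept, total = [], 0
    for msg in reversed(messages):
        total += len(msg["content"])
        if total > max_chars and kept:
            break
        kept.append(msg)
    return list(reversed(kept))  # restore chronological order
```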
- UI built with Tkinter and `ScrolledText` for chat display.
- Conversation is a list of message dicts with keys `role` and `content`.
- Background threads handle network requests to keep the UI responsive.
- A `threading.Lock` (`conversation_lock`) protects access to the shared conversation list.
- Streaming is implemented via `stream=True` from the OpenAI client, appending partial "delta" content to the chat widget as it arrives.
- The Model menu uses Tk radio menu entries bound to a Tkinter StringVar. The selected model's value is captured when sending so the intended model is used for that request.
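The worker-thread-plus-lock pattern above can be sketched without Tk (in the real app the consumer is Tk's event loop and the chunks come from the streaming API response; here a fake stream stands in):

```python
import queue
import threading

conversation_lock = threading.Lock()
conversation = [{"role": "user", "content": "hello"}]
chunks: "queue.Queue[str | None]" = queue.Queue()

def worker():
    # Stand-in for iterating a stream=True response; real code would
    # put each delta's text onto the queue as it arrives.
    for piece in ["Hel", "lo ", "there"]:
        chunks.put(piece)
    chunks.put(None)  # sentinel: stream finished

threading.Thread(target=worker, daemon=True).start()

# Consumer side: in the app this runs on the Tk main loop, inserting
# each piece into the ScrolledText widget as it arrives.
parts = []
while True:
    piece = chunks.get()
    if piece is None:
        break
    parts.append(piece)

# Append the completed assistant message under the lock, since other
# threads also read/write the shared conversation list.
with conversation_lock:
    conversation.append({"role": "assistant", "content": "".join(parts)})
```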
- Treat the OpenAI API key as sensitive; do not commit it to source control.
- Imported file contents are sent to OpenAI as user messages. Do not import confidential documents if you do not want them transmitted.
- If distributing the app, add user-facing notices and consider adding a local-only mode if/when you support local models.
Follow the guidance in CONTRIBUTING.md (included). Summary:
- Fork the repository.
- Create a feature branch.
- Implement and test changes.
- Open a Pull Request with a description.
- Save/load conversation sessions.
- Prompt templates & reply-style presets.
- Configurable model parameters (temperature, max_tokens).
- Automatic chunking & summarization for large files.
- Drag-and-drop file import.
- Theming / Dark mode.
- Local/offline model support.
- Unit tests, CI with OpenAI client mocked, and packaged releases.
- Built using the OpenAI API.
- GUI: Tkinter
- DOCX: python-docx
- PDF extraction: pypdf
- Image handling: Pillow
This project is distributed under the MIT License — see the LICENSE file in the repository root.