feat(app): adding Preferences/Wingman for OpenAI and ollama (vibe coded) #1558
dvorka wants to merge 1 commit into enh-1539/ollama-llm-choice from
Conversation
🤖 Augment PR Summary: Adds a new "Wingman2" Preferences tab for managing multiple LLM provider profiles (OpenAI/ollama).
```cpp
#include <QtWidgets>

#include "../../lib/src/config/configuration.h"
```
The relative include ../../lib/src/config/configuration.h resolves to app/src/lib/... from this header, which doesn’t exist in the repo, so these new dialogs likely won’t compile. Same issue also appears in the OpenAI/ollama config dialog headers.
Severity: high
Other locations:
- app/src/qt/dialogs/openai_config_dialog.h:25
- app/src/qt/dialogs/openai_config_dialog.h:26
- app/src/qt/dialogs/ollama_config_dialog.h:25
- app/src/qt/dialogs/ollama_config_dialog.h:26
```cpp
QPushButton* addProviderButton;

QGroupBox* providerDetailsGroup;
QLabel* providerTypeLabel;
```
Wingman2Tab declares providerTypeLabel/modelLabel/statusLabel members, but the .cpp constructs local QLabel*s instead; these members remain uninitialized. This is easy to accidentally dereference later and crash.
Severity: low
```cpp
: QWidget(parent),
  config(Configuration::getInstance())
{
    helpLabel = new QLabel(
```
The help text says Wingman2 config is “to be used by Wingman”, but the current runtime initialization path still appears to read only the legacy wingmanProvider/wingmanOpenAi*/wingmanOllama* fields. This may make the new tab misleading if Wingman doesn’t actually consume llmProviders yet.
Severity: medium
```cpp
// populate providers combo
llmProvidersCombo->clear();

vector<LlmProviderConfig>& providers = config.getLlmProviders();
```
Wingman2Tab::refresh() populates from config.getLlmProviders() directly; since migrateFromLegacyWingmanConfig() isn’t invoked here, existing Wingman users may see an empty provider list even if legacy config is set.
Severity: medium
```cpp
if (providerType == WINGMAN_PROVIDER_OPENAI) {
    OpenAiConfigDialog configDialog(this);
    if (configDialog.exec() == QDialog::Accepted) {
        config.addLlmProvider(configDialog.getProviderConfig());
```
Providers added via config.addLlmProvider() look to be stored only in-memory; MarkdownConfigurationRepresentation currently persists only the legacy Wingman fields. That means Wingman2 providers/activeLlmProviderId may be lost after restart.
Severity: high
```cpp
// temporarily set API key to fetch models
string originalKey = config.getWingmanOpenAiApiKey();
if (!apiKey.empty()) {
```
If the API key is provided only via MINDFORGER_OPENAI_API_KEY, handleRefresh() doesn’t inject it into config before constructing OpenAiWingman, so the refresh call likely sends an empty Authorization header and fails.
Severity: medium
```cpp
modelValue->setText(QString::fromStdString(provider->llmModel));

if (provider->isValid) {
    statusValue->setText(tr("Configured ✓"));
```
Pull request overview
Adds a new “Wingman2” preferences workflow to manage multiple LLM provider configurations (OpenAI + ollama), alongside groundwork in Configuration for storing providers, plus improvements to model discovery and a small ollama bug fix.
Changes:
- Introduces Wingman2 UI (tab + dialogs) to add/configure LLM providers and select an active provider.
- Extends `Configuration` with a `LlmProviderConfig` model and basic CRUD/probe/migration APIs.
- Enhances OpenAI model listing via `/v1/models` and fixes an ollama model-list parsing bug.
Reviewed changes
Copilot reviewed 19 out of 19 changed files in this pull request and generated 22 comments.
Show a summary per file
| File | Description |
|---|---|
| vibe/designs/wingman2-llm-configuration.md | Design spec for Wingman2 provider configuration UX, storage, and probing. |
| lib/src/mind/ai/llm/openai_wingman.h | Adds declaration for HTTP model listing helper. |
| lib/src/mind/ai/llm/openai_wingman.cpp | Implements OpenAI /v1/models model fetching and fallback defaults. |
| lib/src/mind/ai/llm/ollama_wingman.cpp | Fixes model name push bug in model listing. |
| lib/src/config/configuration.h | Adds Wingman2 provider config struct, defaults, and new public APIs. |
| lib/src/config/configuration.cpp | Implements provider CRUD, active selection, probe stubs, and migration helper. |
| app/src/qt/dialogs/configuration_dialog.h | Adds Wingman2Tab declaration and members. |
| app/src/qt/dialogs/configuration_dialog.cpp | Adds Wingman2 tab UI/handlers and wires it into Preferences show/save. |
| app/src/qt/dialogs/openai_config_dialog.{h,cpp} | New OpenAI provider dialog (API key + model + refresh/probe/add). |
| app/src/qt/dialogs/ollama_config_dialog.{h,cpp} | New ollama provider dialog (URL + model + refresh/probe/add). |
| app/src/qt/dialogs/add_llm_provider_dialog.{h,cpp} | New dialog to choose provider type before configuring. |
| app/app.pro | Registers the new dialog sources/headers with qmake. |
| lib/src/app_info.h | Updates legal copyright year range. |
| build/ubuntu/debian/copyright | Updates Debian packaging copyright year range. |
| build/debian/debian/copyright | Updates Debian packaging copyright year range. |
| .github/copilot-instructions.md | Adds repository-specific Copilot guidance. |
```cpp
OpenAiWingman::OpenAiWingman()
    : Wingman(WingmanLlmProviders::WINGMAN_PROVIDER_OPENAI),
      config(Configuration::getInstance()),
      llmModels{},
      defaultLlmModel{LLM_GPT_35_TURBO}
{
    // API key to be always read from config as it might be reconfigured
    // during MindForger run
    MF_DEBUG(
        "OpenAiWingman::OpenAiWingman() apiKey: '"
        << config.getWingmanOpenAiApiKey() << "'" << endl);

    listModels();
}

OpenAiWingman::~OpenAiWingman()
{
}

std::vector<std::string>& OpenAiWingman::listModels()
{
    llmModels.clear();

    // TODO list models using OpenAI API - will many models be confusing for user?
    llmModels.push_back(LLM_GPT_35_TURBO);
    llmModels.push_back(LLM_GPT_4);
    // try to fetch models from OpenAI API
    try {
        listModelsHttpGet();
    } catch (...) {
        MF_DEBUG("OpenAiWingman::listModels() failed to fetch from API, using defaults" << endl);
    }

    // if API call failed or returned no models, use defaults
    if (llmModels.empty()) {
        llmModels.push_back(LLM_GPT_35_TURBO);
        llmModels.push_back(LLM_GPT_4);
    }
```
OpenAiWingman::listModels() now performs a blocking network call (via listModelsHttpGet()) and is invoked from the constructor. This can stall UI flows that instantiate OpenAiWingman (e.g., model refresh dialog) and makes behavior depend on network latency/availability. Consider avoiding network I/O in the constructor and adding explicit timeouts (Qt + cURL) so failures return quickly.
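One possible shape for that fix is a lazy, once-only fetch outside the constructor. This is a sketch, not the PR's code: `fetchModelsFromApi()` is a hypothetical stand-in for `listModelsHttpGet()` (where the Qt/cURL timeout would be set), and the default model names mirror the ones in the diff.

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical stand-in for the real HTTP fetch (listModelsHttpGet());
// a short timeout would be configured on the underlying request.
static std::vector<std::string> fetchModelsFromApi();

class ModelCatalog {
public:
    // Constructor does no I/O; the network fetch is deferred until first use.
    ModelCatalog() = default;

    const std::vector<std::string>& listModels() {
        if (!fetched) {
            fetched = true;  // attempt the fetch at most once per instance
            try {
                models = fetchModelsFromApi();
            } catch (...) {
                // network failure: fall through to the defaults below
            }
            if (models.empty()) {
                models = {"gpt-3.5-turbo", "gpt-4"};  // fallback defaults
            }
        }
        return models;
    }

private:
    bool fetched = false;
    std::vector<std::string> models;
};

static std::vector<std::string> fetchModelsFromApi() {
    throw std::runtime_error("network unavailable");  // simulate offline host
}
```

With this shape the dialog constructs instantly and the first call to `listModels()` pays the (bounded) network cost.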
```cpp
MF_DEBUG(
    "OpenAiWingman::listModelsHttpGet() parsed response:" << endl
    << ">>>"
    << httpResponseJson.dump(4)
    << "<<<"
    << endl);
```
listModelsHttpGet() logs the full parsed JSON response (httpResponseJson.dump(4)). The /v1/models payload can be large, which may bloat logs and add noticeable overhead. Consider logging only a summary (e.g., model count / first N IDs) or gating the full dump behind a more verbose debug flag.
Suggested change:

```cpp
std::size_t modelsCount = 0;
if (httpResponseJson.contains("data") && httpResponseJson["data"].is_array()) {
    modelsCount = httpResponseJson["data"].size();
}
MF_DEBUG(
    "OpenAiWingman::listModelsHttpGet() parsed response: "
    << "data array size = " << modelsCount << endl);
```
```cpp
AddLlmProviderDialog(const AddLlmProviderDialog&&) = delete;
AddLlmProviderDialog& operator=(const AddLlmProviderDialog&) = delete;
AddLlmProviderDialog& operator=(const AddLlmProviderDialog&&) = delete;
```
AddLlmProviderDialog also declares deleted move operations as const AddLlmProviderDialog&& / operator=(const AddLlmProviderDialog&&). This isn’t the normal move signature and can be misleading; use AddLlmProviderDialog&& (or omit move deletions if not needed).
Suggested change:

```cpp
AddLlmProviderDialog(AddLlmProviderDialog&&) = delete;
AddLlmProviderDialog& operator=(const AddLlmProviderDialog&) = delete;
AddLlmProviderDialog& operator=(AddLlmProviderDialog&&) = delete;
```
```cpp
// Wingman2: collection of configured LLM providers
std::vector<LlmProviderConfig> llmProviders;
std::string activeLlmProviderId;
```
New Wingman2 state (llmProviders, activeLlmProviderId) is added, but Configuration::clear() currently doesn't reset these fields. This can lead to stale providers persisting across clear() calls (tests, reloads, etc.). Ensure clear() clears the providers vector and active ID to keep configuration resets complete.
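A minimal sketch of the missing reset, assuming the field names from the diff (the stand-in `ConfigurationSketch` type is hypothetical; the real `Configuration::clear()` resets many other fields too):

```cpp
#include <string>
#include <vector>

// Stand-in for the real config struct, just enough to show the pattern.
struct LlmProviderConfig { std::string id; };

struct ConfigurationSketch {
    std::vector<LlmProviderConfig> llmProviders;
    std::string activeLlmProviderId;

    void clear() {
        // ... resets of the pre-existing fields would go here ...
        llmProviders.clear();         // forget all Wingman2 providers
        activeLlmProviderId.clear();  // and the active selection
    }
};
```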
```cpp
string errorMessage;
bool success = false;

if (provider->providerType == WINGMAN_PROVIDER_OPENAI) {
    success = config.probeOpenAiProvider(
        provider->apiKey, provider->llmModel, errorMessage);
} else if (provider->providerType == WINGMAN_PROVIDER_OLLAMA) {
    success = config.probeOllamaProvider(
        provider->url, provider->llmModel, errorMessage);
}

if (success) {
    QMessageBox::information(
        this,
        tr("Connection Test"),
        tr("Provider configuration is valid."));
} else {
    QMessageBox::critical(
        this,
        tr("Connection Test"),
        tr("Provider configuration test failed: %1")
            .arg(QString::fromStdString(errorMessage)));
}
```
handleTestConnection() reports success/failure but never updates provider->isValid, so the status in the “Selected Provider Details” panel won’t reflect the latest test result. Consider setting provider->isValid = success (and persisting via updateLlmProvider if needed) and then refreshing the UI.
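The record-and-refresh pattern can be sketched without Qt (the types here are hypothetical slices of the real ones; in the actual handler the comment lines would be `config.updateLlmProvider(*it)` followed by a UI refresh):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical slice of the real provider struct.
struct LlmProviderConfig {
    std::string id;
    bool isValid = false;
};

// After a connection test, record the outcome on the provider so the
// "Selected Provider Details" panel reflects the latest result.
void recordTestResult(std::vector<LlmProviderConfig>& providers,
                      const std::string& id, bool success)
{
    auto it = std::find_if(providers.begin(), providers.end(),
        [&](const LlmProviderConfig& p) { return p.id == id; });
    if (it != providers.end()) {
        it->isValid = success;
        // in the real code: config.updateLlmProvider(*it); then refresh the UI
    }
}
```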
```cpp
void OpenAiConfigDialog::show()
{
    // load current config if any
    apiKeyEdit->setText(QString::fromStdString(config.getWingmanOpenAiApiKey()));
    llmModelCombo->setCurrentText(LLM_MODEL_GPT35_TURBO);
    configValid = false;

    QDialog::show();
}
```
OpenAiConfigDialog::show() is used to populate fields from Configuration, but the dialog is shown via exec() in Wingman2Tab, and exec() does not call your overridden show() implementation. As a result, the API key/model initialization logic won’t run. Move initialization into the constructor, showEvent(), or call an explicit init method before exec().
```markdown
- Library dependencies are stored in `deps/` - each have its own build style.
- Library code lives under `lib/`.
- Qt application code lives under `app/`.
- Tests code lives under `lib/tests/`.
```
The Copilot project instructions claim tests live under lib/tests/, but this repo uses lib/test/ (and the design doc also references lib/test/src/...). Update this path to avoid steering contributors/tools to a non-existent directory.
Suggested change:

```markdown
- Tests code lives under `lib/test/`.
```
```cpp
// generate unique ID using timestamp
auto now = chrono::system_clock::now();
auto timestamp = chrono::duration_cast<chrono::seconds>(now.time_since_epoch()).count();

// extract host from URL for display name
string host = url;
size_t pos = url.find("://");
if (pos != string::npos) {
    host = url.substr(pos + 3);
}

providerConfig.id = "ollama-" + to_string(timestamp);
providerConfig.displayName = "ollama " + model + " @ " + host;
providerConfig.providerType = WINGMAN_PROVIDER_OLLAMA;
providerConfig.url = url;
providerConfig.llmModel = model;
providerConfig.isValid = configValid;
```
Provider IDs are generated using seconds-resolution timestamps (e.g., ollama-<seconds>), which can collide if the user adds providers quickly. Since IDs are used as stable keys for update/remove/active selection, consider generating IDs with higher entropy (ms + random/UUID) or checking for collisions and regenerating.
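One sketch of a higher-entropy generator (a suggestion, not the PR's code: `makeProviderId` is a hypothetical helper combining a millisecond timestamp with a random suffix, so back-to-back additions collide only with negligible probability):

```cpp
#include <chrono>
#include <random>
#include <sstream>
#include <string>

// Millisecond resolution plus a 31-bit random suffix: two providers added
// within the same second (or even the same millisecond) still get distinct IDs.
std::string makeProviderId(const std::string& prefix) {
    using namespace std::chrono;
    auto ms = duration_cast<milliseconds>(
        system_clock::now().time_since_epoch()).count();

    static std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<long> dist(0, 0x7FFFFFFF);

    std::ostringstream id;
    id << prefix << "-" << ms << "-" << std::hex << dist(rng);
    return id.str();
}
```

Alternatively, `addLlmProvider()` could check for an existing ID and regenerate on collision, which also covers IDs loaded from persisted configuration.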
```cpp
    return getLlmProviderById(activeLlmProviderId);
}

void Configuration::addLlmProvider(const LlmProviderConfig& provider) {
```
addLlmProvider() blindly appends the provider without enforcing unique IDs (despite id being documented as unique). This can create duplicate IDs (e.g., two providers added within the same second using timestamp IDs) and then getLlmProviderById()/setActiveLlmProvider() become ambiguous. Consider rejecting duplicates (and surfacing an error) or auto-renaming/regenerating the ID on collision.
Suggested change:

```cpp
void Configuration::addLlmProvider(const LlmProviderConfig& provider) {
    // Enforce unique provider IDs to avoid ambiguous lookups.
    if (!provider.id.empty() && getLlmProviderById(provider.id) != nullptr) {
        MF_DEBUG("Configuration::addLlmProvider() duplicate ID, provider not added: " << provider.id << endl);
        return;
    }
```
```cpp
    MF_DEBUG("Configuration::removeLlmProvider() removed: " << id << endl);
}

void Configuration::setActiveLlmProvider(const string& id) {
```
setActiveLlmProvider() sets activeLlmProviderId without verifying the ID exists in llmProviders. This can leave the config pointing at a non-existent provider (e.g., after update/remove sequences or malformed persisted data). Consider validating the ID (no-op with error/log, or clear active selection) to keep getActiveLlmProvider() consistent.
Suggested change:

```cpp
void Configuration::setActiveLlmProvider(const string& id) {
    // Only set active provider if the ID is known; keep configuration consistent.
    if (!id.empty() && getLlmProviderById(id) == nullptr) {
        MF_DEBUG("Configuration::setActiveLlmProvider() attempted to set unknown provider id: " << id << endl);
        return;
    }
```
No description provided.