Validations
Problem
Agent mode is currently hard-disabled for models without tool support metadata, which blocks usage of private or fine-tuned models that could work via prompt engineering. There should be an override to force agent mode, along with docs showing the expected function call format so users can experiment.
Solution
Issue:
When trying to use Gemma-3, I received the following error:
registry.ollama.ai/library/gemma3:latest does not support tools
This hardcoded block is overly restrictive. Many models—even those without built-in tool use—can be made to work using prompt engineering, if we just know the format the model is expected to output.
Worse, if a user is running a privately fine-tuned model, there may be no metadata available at all for Continue to detect capabilities. In these cases, there’s no good reason to block agent mode outright. The current behavior actively prevents experimentation and development with custom LLMs.
Feature Suggestions:
- Allow Manual Agent Mode Override:
  Users should have a way to force-enable agent mode, even if the system doesn't recognize tool support. Add a flag or config setting to allow it, with a warning if needed.
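As a minimal sketch of what the override gating could look like (the `force_override` setting is hypothetical, not an existing Continue option):

```python
def agent_mode_enabled(model_reports_tools: bool, force_override: bool) -> bool:
    """Decide whether agent mode is available for a model.

    `force_override` stands in for a hypothetical user-facing flag
    (e.g. a setting in the model config) that bypasses the capability
    check at the user's own risk.
    """
    if force_override:
        # User explicitly opted in; a warning could be surfaced here
        # instead of hard-blocking agent mode.
        return True
    return model_reports_tools

# A private fine-tune with no tool metadata, but the user opts in:
print(agent_mode_enabled(model_reports_tools=False, force_override=True))  # True
```

The point is that the capability check becomes a default, not a hard wall.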
- Document Tool Call Format & Prompts:
  Please add documentation that includes:
  - The required format for function/tool call outputs.
  - Example prompts that trigger tool use.
  - A breakdown of the system prompts that manage agent behavior.
  This will allow developers to adapt their prompts for compatibility, especially when using nonstandard or private models.
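For illustration, many tool-calling APIs have the model emit a structured JSON call, and a model without native tool support can be instructed via the system prompt to produce the same shape, which the client then parses. The format and field names below are one common convention, not necessarily what Continue itself expects:

```python
import json

# A system prompt instructing a non-tool-native model to emit structured calls.
SYSTEM_PROMPT = """You have access to these tools:
- read_file(path: string): return the contents of a file
When you want to use a tool, reply with ONLY a JSON object of the form:
{"tool": "<name>", "arguments": {...}}"""

def parse_tool_call(model_output: str):
    """Interpret the model's reply as a tool call; return None for plain text."""
    try:
        call = json.loads(model_output.strip())
        return call["tool"], call["arguments"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None

# Example reply from a prompt-engineered model:
reply = '{"tool": "read_file", "arguments": {"path": "README.md"}}'
print(parse_tool_call(reply))  # ('read_file', {'path': 'README.md'})
```

If the docs spelled out the exact format Continue parses, users could write an equivalent system prompt for any model.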
- Improve Error Messages:
  Instead of just saying the model doesn't support tools, explain:
  - What the system expects from the model.
  - What capabilities are missing or undetected.
  - How the user can experiment or override the limitation.
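An error along these lines (wording and the helper name are hypothetical) would turn the dead end into a starting point:

```python
def tool_support_error(model_name: str, missing: list[str]) -> str:
    """Build an actionable error instead of a bare 'does not support tools'."""
    return (
        f"{model_name} does not advertise tool support.\n"
        f"Agent mode expects the model to emit structured function calls.\n"
        f"Missing or undetected capabilities: {', '.join(missing)}.\n"
        "If you believe the model can handle tool calls (e.g. a private "
        "fine-tune), you can force agent mode via an override setting."
    )

print(tool_support_error("gemma3:latest", ["tool_use"]))
```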
Why This Matters:
Open-source tooling should empower developers, not gatekeep based on assumptions. Just because a model doesn’t advertise tool support doesn’t mean it can’t work. With the right prompt engineering, many models—including private fine-tunes—can handle tool calling just fine. The only thing in the way is an artificial, enforced limitation.
Thanks for building an awesome tool—let’s make it even more hackable and developer-friendly.