
Remote llama-swap multi-model configuration example #154

Open
Riyavesuwala wants to merge 2 commits into ggml-org:master from
Riyavesuwala:code-example-llama-swap

Conversation

@Riyavesuwala

This PR adds a practical example and documentation showing how to use
multiple models hosted on a remote llama-swap server from the VS Code plugin.

The example demonstrates:

  • Dynamic model selection by name
  • Switching models without restarting services
  • CPU-friendly workflows using remote inference

This addresses the question raised in the issue about best practices
for configuring VS Code with a remote llama-swap setup.
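For reference, a minimal llama-swap configuration of the kind this example covers might look like the sketch below. The model names and file paths are illustrative only; `models`, `cmd`, `ttl`, and the `${PORT}` macro are llama-swap configuration features, but check the example in this PR for the exact setup it documents:

```yaml
# llama-swap config.yaml (illustrative sketch)
healthCheckTimeout: 120

models:
  # Each entry maps a model name to the command llama-swap runs on demand.
  # llama-swap substitutes ${PORT} with the port it proxies to.
  "qwen2.5-coder":
    cmd: llama-server --port ${PORT} -m /models/qwen2.5-coder-7b-q4_k_m.gguf
    ttl: 300   # stop the server after 300s of inactivity
  "llama-3.2":
    cmd: llama-server --port ${PORT} -m /models/llama-3.2-3b-q4_k_m.gguf
    ttl: 300
```

With a config like this, the VS Code plugin only needs to point its endpoint at the llama-swap host; requests select a model by name via the `model` field of the OpenAI-compatible API, and llama-swap starts or swaps the backing llama-server process automatically, with no service restart required.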

