Commit 73e20a1: Update README.md

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

1 parent: 164d6aa


README.md

Lines changed: 1 addition & 2 deletions
```diff
@@ -202,7 +202,7 @@ __Note__: Support for individual models is dependent on the specific inference p
 
 | Provider       | Model                                                                        | Tool Calling | provider_type    | Example                                                                    |
 |----------------|------------------------------------------------------------------------------|--------------|------------------|----------------------------------------------------------------------------|
-| OpenAI         | gpt-5, gpt-4o, gpt4-turbo, gpt-4.1, o1, o3, o4                               | Yes          | remote::openai   | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml) |
+| OpenAI         | gpt-5, gpt-4o, gpt-4-turbo, gpt-4.1, o1, o3, o4                              | Yes          | remote::openai   | [1](examples/openai-faiss-run.yaml) [2](examples/openai-pgvector-run.yaml) |
 | OpenAI         | gpt-3.5-turbo, gpt-4                                                         | No           | remote::openai   |                                                                            |
 | RHOAI (vLLM)   | meta-llama/Llama-3.2-1B-Instruct                                             | Yes          | remote::vllm     | [1](tests/e2e-prow/rhoai/configs/run.yaml)                                 |
 | RHAIIS (vLLM)  | meta-llama/Llama-3.1-8B-Instruct                                             | Yes          | remote::vllm     | [1](tests/e2e/configs/run-rhaiis.yaml)                                     |
@@ -212,7 +212,6 @@ __Note__: Support for individual models is dependent on the specific inference p
 | VertexAI       | google/gemini-2.0-flash, google/gemini-2.5-flash, google/gemini-2.5-pro [^1] | Yes          | remote::vertexai | [1](examples/vertexai-run.yaml)                                            |
 | WatsonX        | meta-llama/llama-3-3-70b-instruct                                            | Yes          | remote::watsonx  | [1](examples/watsonx-run.yaml)                                             |
 
-
 [^1]: List of models is limited by design in llama-stack, future versions will probably allow to use more models (see [here](https://github.com/llamastack/llama-stack/blob/release-0.3.x/llama_stack/providers/remote/inference/vertexai/vertexai.py#L54))
 
 The "provider_type" is used in the llama stack configuration file when referring to the provider.
```
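The provider_type values from the table map onto the `providers` section of a llama-stack run configuration. A minimal sketch of an inference provider entry, modeled on the linked run.yaml examples; the `provider_id` name and the `api_key` config field are illustrative and may vary between llama-stack versions:

```yaml
# Hypothetical run.yaml fragment for the remote::openai row above.
providers:
  inference:
    - provider_id: openai              # arbitrary local name for this provider instance
      provider_type: remote::openai    # value taken from the provider_type column
      config:
        api_key: ${env.OPENAI_API_KEY} # assumed config key; check your llama-stack version
```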
