docs/weaviate/model-providers/google/embeddings-multimodal.md
import TSCode from '!!raw-loader!../_includes/provider.vectorizer.ts';
Weaviate's integration with [Google Gemini API](https://ai.google.dev/?utm_source=weaviate&utm_medium=referral&utm_campaign=partnerships&utm_content=) and [Google Vertex AI](https://cloud.google.com/vertex-ai) APIs allows you to access their models' capabilities directly from Weaviate.
:::note Gemini API multimodal support
The `gemini-embedding-2` model supports multimodal embeddings (text, images, and PDFs) and is available via both Vertex AI and Google AI Studio (Gemini API). The `multimodalembedding@001` model remains available for Vertex AI users only.
:::
[Configure a Weaviate vector index](#configure-the-vectorizer) to use a Google embedding model, and Weaviate will generate embeddings for various operations using the specified model and your Google API key. This feature is called the *vectorizer*.
The following examples show how to configure Google-specific options.
- `location` (Required): e.g. `"us-central1"`
- `projectId` (Only required if using Vertex AI): e.g. `cloud-large-language-models`
- `apiEndpoint` (Optional): e.g. `us-central1-aiplatform.googleapis.com`
- `modelId` (Optional): e.g. `gemini-embedding-2`, `multimodalembedding@001`
- `dimensions` (Optional): For `multimodalembedding@001`: `128`, `256`, `512`, or `1408` (default `1408`). For `gemini-embedding-2`: `3072` (default).
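The options above map onto the vectorizer configuration in the client libraries. As a minimal sketch with the v4 Python client, assuming a recent client version that exposes `Configure.Vectorizer.multi2vec_google` (the property names `poster` and `title` are illustrative, not part of this page):

```python
from weaviate.classes.config import Configure

# Build the Google multimodal vectorizer configuration only; attach it with
# client.collections.create("DemoCollection", vectorizer_config=google_cfg).
google_cfg = Configure.Vectorizer.multi2vec_google(
    location="us-central1",
    project_id="cloud-large-language-models",  # only required on Vertex AI
    model_id="gemini-embedding-2",
    dimensions=3072,                # default for gemini-embedding-2
    image_fields=["poster"],        # illustrative property names
    text_fields=["title"],
)
```

This is a configuration fragment under the stated assumptions, not a definitive implementation; check your installed client version for the exact parameter names.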
<Tabs className="code" groupId="languages">
<TabItem value="py" label="Python">
The query below returns the `n` most similar objects to the input image from the database.
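Such a near-image query can be sketched with the v4 Python client; the collection name, file path, and `limit` value below are illustrative assumptions, and a running Weaviate instance with the vectorizer configured (and your Google API key header set at connect time) is required:

```python
import base64
import weaviate

# Assumes a local Weaviate instance whose "DemoCollection" (illustrative name)
# uses a Google multimodal vectorizer.
client = weaviate.connect_to_local()

# Encode the query image as base64, as expected by near_image.
with open("query_image.jpg", "rb") as f:  # illustrative path
    b64_image = base64.b64encode(f.read()).decode("utf-8")

collection = client.collections.get("DemoCollection")
response = collection.query.near_image(
    near_image=b64_image,
    limit=2,  # the `n` most similar objects
)
for obj in response.objects:
    print(obj.properties)

client.close()
```

This is a usage sketch against a live service, so it cannot run standalone; adapt the connection helper and collection name to your deployment.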
### Available models
- `gemini-embedding-2` (Vertex AI and Gemini API, added in 1.36.5) — supports text, images, and PDFs; `3072` dimensions
- `multimodalembedding@001` (Vertex AI only) — supports text, images, and video; dimensions: `128`, `256`, `512`, `1408`