Feat/gemini native sdk #987

Open
xavierchoi wants to merge 10 commits into ItzCrazyKns:master from xavierchoi:feat/gemini-native-sdk
Conversation

@xavierchoi xavierchoi commented Feb 2, 2026

Summary by cubic

Rewrite Gemini provider to use the native @google/genai SDK instead of the OpenAI-compat layer. Fixes 400 errors from unsupported params and adds full tool-call and streaming support.

  • New Features

    • Native SDK integration with proper message mapping (systemInstruction, model/user roles, functionResponse).
    • Tool calling via JSON Schema; supports thoughtSignature for Gemini 3 models.
    • Implemented generateText/streamText and generateObject/streamObject with JSON schema validation.
    • Returns additionalInfo.finishReason for text and stream responses.
  • Bug Fixes

    • Removed unsupported frequency/presence penalties to prevent 400s.
    • Stable tool call IDs and preserved thoughtSignature during streaming to avoid duplicates and Gemini 3 errors.
    • Response validation and JSON repair for reliable object parsing; clearer errors on empty/blocked outputs.
    • Strip models/ prefix from model names to avoid "models/models/..." 400s.

Written for commit 856c1a8. Summary will update on new commits.

Xavier Choi and others added 7 commits February 2, 2026 18:07
Replace OpenAILLM inheritance with native Google GenAI SDK implementation
to fix 400 errors caused by unsupported parameters (frequency_penalty,
presence_penalty) and incompatible APIs.

Changes:
- Implement BaseLLM<GeminiConfig> directly instead of extending OpenAILLM
- Add message conversion for Gemini format (system→systemInstruction,
  assistant→model, tool→functionResponse)
- Add tool declaration conversion using z.toJSONSchema()
- Implement all required methods: streamText, generateText, generateObject,
  streamObject
- Remove baseURL parameter from provider (no longer using OpenAI compat layer)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
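The message conversion described in the commit above can be sketched roughly as follows. The shapes mirror the @google/genai request format (a `systemInstruction` pulled out of the turn list, `user`/`model` roles, and `functionResponse` parts for tool results); the type and function names here are illustrative, not the PR's actual code:

```typescript
// Minimal sketch of chat-message → Gemini-content conversion.
type ChatMessage =
  | { role: 'system' | 'user' | 'assistant'; content: string }
  | { role: 'tool'; name: string; content: string };

type GeminiPart =
  | { text: string }
  | { functionResponse: { name: string; response: Record<string, unknown> } };

type GeminiContent = { role: 'user' | 'model'; parts: GeminiPart[] };

function toGeminiRequest(messages: ChatMessage[]): {
  systemInstruction?: string;
  contents: GeminiContent[];
} {
  let systemInstruction: string | undefined;
  const contents: GeminiContent[] = [];

  for (const msg of messages) {
    if (msg.role === 'system') {
      // Gemini takes the system prompt via systemInstruction, not as a turn.
      systemInstruction = msg.content;
    } else if (msg.role === 'assistant') {
      // Gemini's name for the assistant role is "model".
      contents.push({ role: 'model', parts: [{ text: msg.content }] });
    } else if (msg.role === 'tool') {
      // Tool results go back as functionResponse parts in a user turn.
      contents.push({
        role: 'user',
        parts: [{ functionResponse: { name: msg.name, response: { output: msg.content } } }],
      });
    } else {
      contents.push({ role: 'user', parts: [{ text: msg.content }] });
    }
  }
  return { systemInstruction, contents };
}
```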
Remove Date.now() from generateToolCallId hash input to ensure
consistent IDs across streaming chunks. This prevents duplicate
entries in accumulatedToolCalls when the same tool call appears
in multiple chunks.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
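The fix boils down to hashing only the call's stable fields. A minimal sketch of the idea (the hash choice and helper name are illustrative, not the PR's code):

```typescript
import { createHash } from 'node:crypto';

// Deterministic tool-call ID: hash only fields that are identical every
// time the same call appears. Mixing in Date.now() (as the old code did)
// would yield a new ID per streaming chunk, so the same tool call would
// land in accumulatedToolCalls more than once.
function generateToolCallId(name: string, args: string, index: number): string {
  return (
    'call_' +
    createHash('sha256').update(`${name}:${args}:${index}`).digest('hex').slice(0, 12)
  );
}
```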
Add validation before accessing response properties to provide
clear error messages when API returns empty/blocked responses.
Includes finish reason inspection for better diagnostics.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
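A rough sketch of the "validate before reading" pattern the commit describes. The field names follow the @google/genai response shape (`candidates[0].content.parts`, `finishReason`); the helper itself is illustrative:

```typescript
// Surface the finishReason when a response has no usable candidate,
// instead of crashing later on an undefined property access.
interface GeminiResponse {
  candidates?: Array<{
    content?: { parts?: Array<{ text?: string }> };
    finishReason?: string;
  }>;
}

function extractText(response: GeminiResponse): string {
  const candidate = response.candidates?.[0];
  if (!candidate) {
    throw new Error('Gemini returned no candidates (prompt may have been blocked)');
  }
  const text = candidate.content?.parts?.map((p) => p.text ?? '').join('');
  if (!text) {
    // Include the finish reason so SAFETY / MAX_TOKENS cases are diagnosable.
    throw new Error(
      `Gemini returned an empty response (finishReason: ${candidate.finishReason ?? 'unknown'})`
    );
  }
  return text;
}
```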
Replace custom inline options type with GenerateOptions from
shared types module for consistency with other providers.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add additionalInfo with finishReason to generateText and streamText
- Track lastFinishReason during streaming to include in final yield
- Add comment explaining 0.7 default temperature (vs OpenAI's 1.0)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix tool call ID collision by using monotonic counter instead of
  chunk-local index in streamText
- Remove misleading empty object yield on parse errors in streamObject
- Add final validation after stream ends to ensure valid JSON output

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
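The counter fix can be sketched like this: a counter that survives across chunks gives each new tool call a unique fallback index, while repeated appearances of the same call update the existing entry instead of appending a duplicate. This is an illustrative accumulator, not the PR's actual streamText code:

```typescript
type ToolCall = { id: string; name: string; arguments: string };

function makeToolCallAccumulator() {
  const calls = new Map<string, ToolCall>();
  let nextIndex = 0; // monotonic across all chunks, not reset per chunk

  return {
    add(name: string, args: string, id?: string): ToolCall {
      const existing = id ? calls.get(id) : undefined;
      if (existing) {
        existing.arguments = args; // update in place, no duplicate entry
        return existing;
      }
      // A chunk-local index here would collide once a second chunk
      // started counting from 0 again.
      const call = { id: id ?? `call_${nextIndex++}`, name, arguments: args };
      calls.set(call.id, call);
      return call;
    },
    all: () => [...calls.values()],
  };
}
```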
Gemini 3 models require thoughtSignature to be preserved and sent
back during function calling. This adds:

- thoughtSignature field to ToolCall type
- Extract thoughtSignature from Gemini functionCall responses
- Include thoughtSignature when converting messages back to Gemini format

This fixes "Function call is missing a thought_signature" error with
gemini-3-pro-preview and gemini-3-flash-preview models.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
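The round trip the commit describes might look roughly like this: `ToolCall` gains an optional `thoughtSignature` field, and when an assistant turn is converted back to Gemini format the signature is re-attached as a sibling of `functionCall` on the same part. Names and shapes here are illustrative:

```typescript
type ToolCall = {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
  thoughtSignature?: string;
};

function toolCallToPart(call: ToolCall): {
  functionCall: { name: string; args: Record<string, unknown> };
  thoughtSignature?: string;
} {
  return {
    functionCall: { name: call.name, args: call.arguments },
    // Gemini 3 preview models reject follow-up turns whose function
    // calls omit the thoughtSignature they originally returned.
    ...(call.thoughtSignature ? { thoughtSignature: call.thoughtSignature } : {}),
  };
}
```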
@xavierchoi (Author)

@codex Could you review this PR, please?

@cubic-dev-ai (bot) left a comment

No issues found across 3 files

Xavier Choi and others added 3 commits February 2, 2026 22:45
When updating existing tool calls during streaming, only arguments
was being copied, losing the thoughtSignature required by Gemini 3
models in multi-turn conversations.

Fixes: "Function call is missing a thought_signature" error in
Researcher agent with gemini-3-*-preview models.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The Gemini API returns model names with "models/" prefix (e.g.,
"models/gemini-3-flash-preview"), but the @google/genai SDK adds
this prefix internally. This caused duplicate prefix
"models/models/..." resulting in 400 Bad Request errors.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
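Since the SDK prepends `models/` itself, the fix is to normalize names before handing them over. A one-line sketch (helper name is illustrative):

```typescript
// Model names obtained from the API can arrive as "models/gemini-3-flash-preview";
// passing that back through the SDK yields "models/models/..." and a 400.
function normalizeModelName(name: string): string {
  return name.startsWith('models/') ? name.slice('models/'.length) : name;
}
```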
thoughtSignature is a sibling of functionCall in the Part object,
not a property of FunctionCall. Fixed both generateText and
streamText to access raw parts from candidates[0].content.parts
to correctly extract thoughtSignature.

Before: (fc as any).thoughtSignature  // Wrong: fc is FunctionCall
After:  (part as any).thoughtSignature  // Correct: part contains both

Fixes "Function call is missing a thought_signature" error with
Gemini 3 models in multi-turn tool calling conversations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
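In other words, extraction has to iterate the raw parts and read the signature from each Part, not from the FunctionCall it wraps. An illustrative sketch of that shape (types simplified from the SDK's response format):

```typescript
// thoughtSignature sits next to functionCall inside each entry of
// candidates[0].content.parts, so it must be read at the part level.
interface Part {
  text?: string;
  functionCall?: { name: string; args: Record<string, unknown> };
  thoughtSignature?: string;
}

function extractFunctionCalls(parts: Part[]) {
  return parts
    .filter((part) => part.functionCall)
    .map((part) => ({
      name: part.functionCall!.name,
      args: part.functionCall!.args,
      // Correct: read from the part; reading it off the FunctionCall
      // object (the old code) always produced undefined.
      thoughtSignature: part.thoughtSignature,
    }));
}
```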