Propagate chat config attributes to chat.sendMessage()/chat.sendMessageStream() spans for Google GenAI #20086
Status: Open
Labels: javascript (Pull requests that update javascript code)
Description
With the removal of the chats.create() span (chats.create() is a local object construction, not an LLM call), we lost the span attributes that were previously captured from the config parameter passed to chats.create(). Those config values are reused for every subsequent chat.sendMessage() / chat.sendMessageStream() call on that chat instance, so they should be attached to those spans instead.
Missing attributes on chat.sendMessage() / chat.sendMessageStream() spans:
- gen_ai.request.model — from model
- gen_ai.request.temperature — from config.temperature
- gen_ai.request.top_p — from config.topP
- gen_ai.request.top_k — from config.topK
- gen_ai.request.max_tokens — from config.maxOutputTokens
- gen_ai.request.frequency_penalty — from config.frequencyPenalty
- gen_ai.request.presence_penalty — from config.presencePenalty
- gen_ai.request.available_tools — from config.tools
- gen_ai.system_instructions — from config.systemInstruction
Solution brainstorm: The relevant parameters should be captured during the chats.create() call and merged onto the LLM invocation spans.
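A minimal sketch of that capture-and-merge approach, assuming the instrumentation wraps chats.create(): remember the model and config per chat instance, then compute gen_ai.* attributes when a sendMessage span is started. The names rememberChatConfig, attributesForChat, and extractRequestAttributes are hypothetical helpers for illustration, not actual SDK internals.

```typescript
// Hypothetical shape of the config accepted by chats.create().
interface ChatConfig {
  temperature?: number;
  topP?: number;
  topK?: number;
  maxOutputTokens?: number;
  frequencyPenalty?: number;
  presencePenalty?: number;
  tools?: unknown[];
  systemInstruction?: string;
}

// Map the stored model/config onto the gen_ai.* attributes listed above.
function extractRequestAttributes(
  model: string,
  config: ChatConfig = {},
): Record<string, unknown> {
  const attrs: Record<string, unknown> = { 'gen_ai.request.model': model };
  if (config.temperature !== undefined) attrs['gen_ai.request.temperature'] = config.temperature;
  if (config.topP !== undefined) attrs['gen_ai.request.top_p'] = config.topP;
  if (config.topK !== undefined) attrs['gen_ai.request.top_k'] = config.topK;
  if (config.maxOutputTokens !== undefined) attrs['gen_ai.request.max_tokens'] = config.maxOutputTokens;
  if (config.frequencyPenalty !== undefined) attrs['gen_ai.request.frequency_penalty'] = config.frequencyPenalty;
  if (config.presencePenalty !== undefined) attrs['gen_ai.request.presence_penalty'] = config.presencePenalty;
  if (config.tools !== undefined) attrs['gen_ai.request.available_tools'] = JSON.stringify(config.tools);
  if (config.systemInstruction !== undefined) attrs['gen_ai.system_instructions'] = config.systemInstruction;
  return attrs;
}

// A WeakMap keyed by the chat instance lets the sendMessage wrapper look the
// config back up later without mutating the SDK's chat object, and without
// keeping the chat alive after the app drops it.
const chatContext = new WeakMap<object, { model: string; config: ChatConfig }>();

// Called from the chats.create() wrapper.
function rememberChatConfig(chat: object, model: string, config: ChatConfig): void {
  chatContext.set(chat, { model, config });
}

// Called when starting a chat.sendMessage()/chat.sendMessageStream() span;
// the result is merged into that span's attributes.
function attributesForChat(chat: object): Record<string, unknown> {
  const ctx = chatContext.get(chat);
  return ctx ? extractRequestAttributes(ctx.model, ctx.config) : {};
}
```

With this wiring, the create wrapper no longer needs its own span: it only records the context, and every subsequent sendMessage span picks the attributes up via attributesForChat().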