Propagate chat config attributes to chat.sendMessage()/chat.sendMessageStream() spans for Google GenAI #20086

@nicohrubec

Description

With the removal of the chats.create() span (which traced a local object construction, not an LLM call), we lost some span attributes that were previously captured from the config parameter passed to chats.create(). These config values are reused for every subsequent chat.sendMessage() / chat.sendMessageStream() call on that chat instance and should therefore be attached to those spans instead.

Missing attributes on chat.sendMessage() / chat.sendMessageStream() spans:

  • gen_ai.request.model — from model
  • gen_ai.request.temperature — from config.temperature
  • gen_ai.request.top_p — from config.topP
  • gen_ai.request.top_k — from config.topK
  • gen_ai.request.max_tokens — from config.maxOutputTokens
  • gen_ai.request.frequency_penalty — from config.frequencyPenalty
  • gen_ai.request.presence_penalty — from config.presencePenalty
  • gen_ai.request.available_tools — from config.tools
  • gen_ai.system_instructions — from config.systemInstruction

Solution brainstorm: Capture the relevant parameters during the chats.create() call, store them on (or alongside) the chat instance, and merge them onto the LLM invocation spans created for each subsequent sendMessage() / sendMessageStream() call.
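A minimal sketch of the config-to-attributes mapping described above. The helper name `chatConfigToSpanAttributes` and the `ChatConfig` shape are illustrative, not the SDK's actual API; the attribute keys are the gen_ai.* keys listed in this issue:

```typescript
// Hypothetical helper (names are illustrative, not the SDK's real API).
// Captures the config passed to chats.create() as gen_ai.* span attributes,
// so they can later be merged onto each chat.sendMessage() /
// chat.sendMessageStream() span.
interface ChatConfig {
  temperature?: number;
  topP?: number;
  topK?: number;
  maxOutputTokens?: number;
  frequencyPenalty?: number;
  presencePenalty?: number;
  tools?: unknown[];
  systemInstruction?: string;
}

function chatConfigToSpanAttributes(
  model: string,
  config: ChatConfig = {},
): Record<string, string | number> {
  const attrs: Record<string, string | number> = {
    'gen_ai.request.model': model,
  };
  // Only emit attributes for values the user actually configured.
  if (config.temperature !== undefined) attrs['gen_ai.request.temperature'] = config.temperature;
  if (config.topP !== undefined) attrs['gen_ai.request.top_p'] = config.topP;
  if (config.topK !== undefined) attrs['gen_ai.request.top_k'] = config.topK;
  if (config.maxOutputTokens !== undefined) attrs['gen_ai.request.max_tokens'] = config.maxOutputTokens;
  if (config.frequencyPenalty !== undefined) attrs['gen_ai.request.frequency_penalty'] = config.frequencyPenalty;
  if (config.presencePenalty !== undefined) attrs['gen_ai.request.presence_penalty'] = config.presencePenalty;
  // Non-primitive values are serialized so they fit span attribute types.
  if (config.tools !== undefined) attrs['gen_ai.request.available_tools'] = JSON.stringify(config.tools);
  if (config.systemInstruction !== undefined) attrs['gen_ai.system_instructions'] = config.systemInstruction;
  return attrs;
}
```

The instrumentation wrapping chats.create() could stash this record (e.g. in a WeakMap keyed by the chat instance) and spread it into the attributes of every sendMessage/sendMessageStream span, letting per-call attributes override the stored ones where both exist.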

Metadata

Labels: javascript