sp_BlitzCache: add a query plan minimizer to send less data to AI #3862

@BrentOzar

Description

Is your feature request related to a problem? Please describe.
LLMs have limited context space. The more text you give them in a prompt, the more it costs to process, and the more output quality may degrade.

LLMs are also easily distracted by the same kinds of things that distract humans, like estimated cost percentages on operators.

Describe the solution you'd like
Inspired by Forrest McDaniel's post, we could add a minimizer that strips out extraneous details we don't want the AI to be distracted by, such as the operator cost estimates mentioned above.
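As a rough illustration of the idea (the real implementation would live in T-SQL inside sp_BlitzCache, and the exact list of attributes to strip is a design decision), here is a minimal Python sketch that removes estimate-style attributes from a showplan XML string. The attribute names below are examples drawn from SQL Server showplan XML; treat the strip-list as an assumption, not a final spec:

```python
import xml.etree.ElementTree as ET

# Illustrative set of noisy attributes to drop; a real minimizer
# would curate this list (and likely handle the showplan namespace).
NOISY_ATTRIBUTES = {
    "EstimateCPU",
    "EstimateIO",
    "EstimatedTotalSubtreeCost",
    "StatementSubTreeCost",
}

def minimize_plan(plan_xml: str) -> str:
    """Return the plan XML with cost-estimate attributes removed."""
    root = ET.fromstring(plan_xml)
    for elem in root.iter():
        for attr in list(elem.attrib):
            if attr in NOISY_ATTRIBUTES:
                del elem.attrib[attr]
    return ET.tostring(root, encoding="unicode")
```

The payoff is fewer tokens and less distracting numeric noise, while structural details the model actually needs (operator types, index names, warnings) survive intact.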

Describe alternatives you're still considering
I'm not 100% sold that we need this just yet. On one hand, LLMs keep getting better/faster/cheaper with more context space. On the other hand, if we can get the size down, privacy-concerned companies may be more likely to use it with local models.

For now, I'm going to leave this as an issue tagged help-wanted so that if local LLMs and low-context prompts are important to any of the readers, they can work with their company on this effort.

Are you ready to build the code for the feature?
No.

Metadata

Assignees

No one assigned

    Labels

    Someday Maybe (Features that would be interesting to build if we had extra development time), help wanted, sp_BlitzCache
