base: refactor/drop-streaming-callback
refactor(streaming): remove ChatNVIDIA streaming patch #1607
Conversation
Greptile Summary
This PR removes the custom ChatNVIDIA streaming patch; the NVIDIA model initializer now uses the standard ChatNVIDIA from langchain_nvidia_ai_endpoints directly. The changes align with the base branch refactor/drop-streaming-callback, which drops LangChain callback-based streaming. Key considerations per file are listed in the table below; a hedged sketch of the simplified initializer follows it.
| Filename | Overview |
|---|---|
| `nemoguardrails/llm/models/langchain_initializer.py` | Removed the custom patch import; now uses the standard `ChatNVIDIA` from `langchain_nvidia_ai_endpoints`, simplifies the docstring and error handling, and adds a type hint to `_PROVIDER_INITIALIZERS` |
| `nemoguardrails/llm/providers/_langchain_nvidia_ai_endpoints_patch.py` | Removed the 107-line custom patch module that wrapped `ChatNVIDIA` with streaming decorators |
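As a rough illustration of the simplified path, here is a minimal sketch of what the initializer could look like once the patch import is gone. The name `_init_nvidia_model` comes from the review summary; the signature and body are assumptions for illustration, not the actual repository code.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA


def _init_nvidia_model(model_name: str, **kwargs):
    """Hypothetical simplified initializer for NVIDIA models."""
    # With callback-based streaming dropped, the native ChatNVIDIA class is
    # used directly -- no wrapper subclass and no stream decorators.
    return ChatNVIDIA(model=model_name, **kwargs)
```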
Sequence Diagram
sequenceDiagram
    participant App as Application
    participant Init as langchain_initializer
    participant Patch as _langchain_nvidia_ai_endpoints_patch (REMOVED)
    participant Native as langchain_nvidia_ai_endpoints.ChatNVIDIA

    Note over App,Native: BEFORE: Custom Patch Flow
    App->>Init: Initialize NVIDIA model
    Init->>Patch: Import custom ChatNVIDIA
    Patch->>Native: Inherit from ChatNVIDIAOriginal
    Patch->>Patch: Apply stream_decorator
    Patch->>Patch: Apply async_stream_decorator
    Patch-->>Init: Return patched ChatNVIDIA
    Init-->>App: Return model with streaming support

    Note over App,Native: AFTER: Native Implementation Flow
    App->>Init: Initialize NVIDIA model
    Init->>Native: Import ChatNVIDIA directly
    Native-->>Init: Return ChatNVIDIA
    Init-->>App: Return model with native streaming
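The removed module itself is not shown in this view; the snippet below is only an illustrative reconstruction of the "before" pattern the diagram describes (subclass the original class and wrap its streaming methods), not the actual 107-line patch. The decorator and attribute names are taken from the diagram; their bodies are assumptions.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA as ChatNVIDIAOriginal


def stream_decorator(stream_fn):
    # Hypothetical wrapper: pass each chunk through guardrails streaming
    # handling before yielding it to the caller.
    def wrapper(self, *args, **kwargs):
        for chunk in stream_fn(self, *args, **kwargs):
            yield chunk

    return wrapper


class ChatNVIDIA(ChatNVIDIAOriginal):
    # Patched subclass (now removed): the sync streaming method is wrapped in
    # place; an async_stream_decorator did the same for the async variant.
    _stream = stream_decorator(ChatNVIDIAOriginal._stream)
```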
2 files reviewed, no comments
Remove the custom ChatNVIDIA patch that added streaming decorators. The standard ChatNVIDIA from langchain_nvidia_ai_endpoints is now used directly, since LangChain callback-based streaming has been dropped.
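For context, a minimal usage sketch assuming the native class handles streaming through the standard LangChain chat-model interface; the model name below is only an example, not one mandated by this PR.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="meta/llama-3.1-8b-instruct")  # example model name
for chunk in llm.stream("Write a haiku about streaming."):
    print(chunk.content, end="", flush=True)
```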
Force-pushed from a97a9be to 48a77d8.
Codecov Report: ✅ All modified and coverable lines are covered by tests.
trebedea left a comment
👍 LGTM
Summary
- Removed the `_langchain_nvidia_ai_endpoints_patch.py` module that patched `ChatNVIDIA` with streaming decorators
- Updated `_init_nvidia_model` to use the standard `ChatNVIDIA` from `langchain_nvidia_ai_endpoints` directly