Qu0b/streaming text deltas#220

Open
qu0b wants to merge 2 commits into banteg:master from qu0b:qu0b/streaming-text-deltas
Conversation

@qu0b qu0b commented Mar 5, 2026

No description provided.

qu0b added 2 commits March 5, 2026 14:47
Implement progressive message editing for LLM responses, allowing text
to appear incrementally as tokens arrive rather than waiting for the
complete response.

Architecture:
- Add TextDeltaEvent to domain model for streaming text chunks
- Add streaming/streaming_throttle_ms config to TelegramTransportSettings
- Extend ProgressTracker to accumulate text deltas
- Add render_streaming to Presenter protocol and all implementations
- Wire streaming into ProgressEdits with automatic render mode switching
- Emit TextDeltaEvent from all four runners (Claude, OpenCode, Pi, Codex)
- Edit final message in-place when streaming was active to avoid flicker

The feature is opt-in via `streaming = true` in [transports.telegram].
When disabled (default), behavior is unchanged.

Ref: openclaw/openclaw#33220
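The accumulate-and-throttle behavior described in the commit message can be sketched roughly as below. The class and event names follow the commit message (`TextDeltaEvent`, `ProgressTracker`, `streaming_throttle_ms`), but the method names and internals here are assumptions, not the PR's actual implementation:

```python
import time
from dataclasses import dataclass


@dataclass
class TextDeltaEvent:
    """One incremental chunk of LLM output (name taken from the commit message)."""
    text: str


@dataclass
class ProgressTracker:
    """Accumulates text deltas and rate-limits re-renders.

    A hypothetical sketch: on_delta() returns the full accumulated text
    when enough time has passed to justify editing the Telegram message
    again, or None when the delta should only be buffered.
    """
    streaming_throttle_ms: int = 500
    _buffer: str = ""
    _last_render: float = 0.0

    def on_delta(self, event: TextDeltaEvent) -> "str | None":
        self._buffer += event.text
        now = time.monotonic()
        if (now - self._last_render) * 1000 >= self.streaming_throttle_ms:
            self._last_render = now
            return self._buffer  # time to re-render the partial message
        return None  # buffered; wait for the next delta or the throttle window

    def finalize(self) -> str:
        """Full text for the final in-place edit (avoids a flickering extra message)."""
        return self._buffer
```

With a throttle of 0 every delta triggers a render; with the default, intermediate deltas are buffered and the final `finalize()` call edits the existing message in place rather than sending a new one.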
The second commit flips the default: streaming is now on by default, and users
can opt out with `streaming = false` in [transports.telegram].
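As a config illustration, opting out might look like the fragment below. The `[transports.telegram]` table and `streaming` key come from the commit messages; the `streaming_throttle_ms` value shown is an assumed placeholder, not a documented default:

```toml
[transports.telegram]
streaming = false             # opt out of progressive message edits
streaming_throttle_ms = 500   # assumed value: minimum delay between edits
```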