Add DeepSeek V4 Flash and Pro models#12218
avaritiachaos wants to merge 3 commits into RooCodeInc:main
Conversation
force-pushed from bfe0c3f to 8d50030
force-pushed from c2c4003 to 4b44a2c
force-pushed from 4b44a2c to e882b9d
Great, great — I've been sitting here waiting for two or three days.

Friends who know GitHub: once deepseek v4 support is updated, please @ me. Thanks!
lazyupdate
left a comment
Thanks for this PR — the thinking mode and reasoning_effort mapping work is solid.
However, there's a bug in the streaming handler that this doesn't address:
In `src/api/providers/openai.ts`, around line 234:

```ts
if ("reasoning_content" in delta && delta.reasoning_content) {
```

When DeepSeek V4 returns an empty `reasoning_content: ""` (which it does in certain streaming chunks), `delta.reasoning_content` is falsy, so the block is skipped, `reasoning_content` is never stored on the assistant message, the next request lacks it, and DeepSeek returns a 400.
Fix (1 line):

```diff
- if ("reasoning_content" in delta && delta.reasoning_content) {
+ if ("reasoning_content" in delta && typeof delta.reasoning_content === "string") {
```

To reproduce in isolation:
- Run a mock OpenAI server that SSE-streams `reasoning_content: ""` plus a tool_call (e.g. bash)
- Let the agent execute the tool and construct the follow-up request
- The mock returns 400 because `reasoning_content` was dropped
- With the `typeof` fix, the field is preserved (even as `""`) and the mock returns 200
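The failure mode above can be shown in isolation with a minimal sketch. This is not the actual Roo Code handler — `Delta`, `collectFalsy`, and `collectTypeof` are hypothetical names for illustration — but the two conditions are the exact before/after from the diff:

```typescript
// Hypothetical minimal model of a streaming delta (not Roo Code's real types).
interface Delta {
  reasoning_content?: string;
  content?: string;
}

// Original condition: an empty string is falsy, so "" chunks are skipped
// and reasoning_content never lands on the assistant message.
function collectFalsy(deltas: Delta[]): string | undefined {
  let reasoning: string | undefined;
  for (const delta of deltas) {
    if ("reasoning_content" in delta && delta.reasoning_content) {
      reasoning = (reasoning ?? "") + delta.reasoning_content;
    }
  }
  return reasoning;
}

// Fixed condition: a typeof check accepts "", so the field survives
// (even when empty) and is sent back on the follow-up request.
function collectTypeof(deltas: Delta[]): string | undefined {
  let reasoning: string | undefined;
  for (const delta of deltas) {
    if ("reasoning_content" in delta && typeof delta.reasoning_content === "string") {
      reasoning = (reasoning ?? "") + delta.reasoning_content;
    }
  }
  return reasoning;
}

// A stream that sends an empty reasoning chunk followed by tool-call content.
const chunks: Delta[] = [{ reasoning_content: "" }, { content: "ls -la" }];
console.log(collectFalsy(chunks));  // undefined → field dropped → 400 on follow-up
console.log(collectTypeof(chunks)); // "" → field preserved → 200
```

The design point is that `"reasoning_content" in delta` already establishes presence; the second clause should only guard the type, not the truthiness, of the value.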
Thanks for catching this edge case and providing the detailed reproduction steps! I've applied the `typeof` fix to preserve empty reasoning chunks. I also checked the official `deepseek.ts` provider, fixed the same falsy-evaluation issue there, and added regression tests for both. PTAL!