
Conversation


@lsorber lsorber commented Dec 25, 2024

This PR upgrades the chatml-function-calling chat handler with support for streaming tool use and fixes #1883, #1869, and #1756, among other improvements.
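With streaming tool use, the handler emits OpenAI-style chat completion chunks whose deltas carry partial tool calls. A consumer might reassemble those deltas like this (a sketch over fabricated chunk data; `accumulate_tool_calls` is a hypothetical helper, not part of the handler):

```python
def accumulate_tool_calls(chunks: list[dict]) -> list[dict]:
    """Merge streamed tool-call deltas (OpenAI chunk format) into full calls."""
    calls: dict[int, dict] = {}
    for chunk in chunks:
        for delta in chunk["choices"][0]["delta"].get("tool_calls", []):
            # Each delta carries the call's index plus a fragment of the
            # function name and/or its JSON arguments string.
            call = calls.setdefault(delta["index"], {"name": "", "arguments": ""})
            fn = delta.get("function", {})
            call["name"] += fn.get("name", "")
            call["arguments"] += fn.get("arguments", "")
    return [calls[i] for i in sorted(calls)]
```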

Changes:

  1. General:
    a. ✨ If no system message is supplied, add an empty system message to hold the tool metadata.
    b. ✨ Add function descriptions to the system message so that tool use is better informed (fixes chatml-function-callling not adding tool description to the prompt. #1869).
    c. ✨ Replace print statements relating to JSON grammars with RuntimeWarning warnings.
    d. ✅ Add tests with fairly broad coverage of the different scenarios.
  2. Case "Tool choice by user":
    a. ✨ Add support for more than one function call by making this a special case of "Automatic tool choice" with a single tool (subsumes Support parallel function calls with tool_choice #1503).
  3. Case "Automatic tool choice -> respond with a message":
    a. ✨ Use user-defined stop and max_tokens.
    b. 🐛 Replace incorrect use of follow-up grammar with user-defined grammar.
  4. Case "Automatic tool choice -> one or more function calls":
    a. ✨ Add support for streaming the function calls (fixes Feature request: add support for streaming tool use #1883).
    b. ✨ Make tool calling more robust by giving the LLM an explicit way to terminate the tool calls by wrapping them in a <function_calls></function_calls> block.
    c. 🐛 Add missing ":" stop token to determine whether to continue with another tool call, which prevented parallel function calling (fixes chatml-function-calling chat format fails to generate multi calls to the same tool #1756).
    d. ✨ Set temperature=0 to determine whether to continue with another tool call, similar to the initial decision on whether to call a tool.
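To illustrate the `<function_calls></function_calls>` wrapping from 4b, here is a minimal parser sketch. The layout of individual calls (a `functions.<name>:<index>` header line followed by one line of JSON arguments) is an assumption mirroring the chatml prompt format; the handler's actual parsing may differ:

```python
import json
import re

def parse_function_calls(completion: str) -> list[dict]:
    """Extract tool calls from a <function_calls>...</function_calls> block.

    The explicit closing tag gives the model an unambiguous way to signal
    that it is done calling tools, which is more robust than relying on
    stop tokens alone.
    """
    match = re.search(r"<function_calls>(.*?)</function_calls>", completion, re.DOTALL)
    if match is None:
        return []
    lines = [line for line in match.group(1).strip().splitlines() if line.strip()]
    calls = []
    # Assumed layout: a `functions.<name>:<index>` header line followed by
    # a single line of JSON arguments, repeated for each call.
    for header, args in zip(lines[::2], lines[1::2]):
        name = header.removeprefix("functions.").rsplit(":", 1)[0]
        calls.append({"name": name, "arguments": json.loads(args)})
    return calls
```

Note how this also covers the parallel-calls case from 4c, including repeated calls to the same tool.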


lsorber commented Dec 26, 2024

@abetlen The tests all pass, but the macOS ones were terminated after a timeout. I think this is due to a lack of CPU and/or memory resources, because the tests run fine on my macOS machine.

@SubatomicPlanets

I would love to see this merged! Actually, there are quite a lot of good pull requests here that I would like to see merged... But this one is top priority!


lsorber commented Jan 5, 2025

Update: I rebased on the latest main and included a few small changes to further improve tool calling robustness.

@lsorber lsorber force-pushed the main branch 5 times, most recently from b4f8fde to 17301de on January 12, 2025 at 14:48

lsorber commented Jan 12, 2025

Update: I rebased on the latest main and conditionally skipped the added tests on macOS when not enough resources are available to run them.
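A resource gate along these lines could implement that conditional skip (a sketch; the PR's actual check may differ, and the use of psutil and the 8 GiB threshold are assumptions):

```python
import platform

def should_skip_heavy_tests(min_free_gib: float = 8.0) -> bool:
    """Return True when resource-heavy tests should be skipped on macOS.

    Errs on the side of running the tests whenever available memory
    cannot be measured.
    """
    if platform.system() != "Darwin":
        return False
    try:
        import psutil  # optional dependency in this sketch
    except ImportError:
        return False
    return psutil.virtual_memory().available < min_free_gib * 1024**3
```

The predicate can then drive `pytest.mark.skipif(should_skip_heavy_tests(), reason="not enough resources on this macOS runner")` on the affected tests.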

@LenBanana

Worked well for me. Would you mind rebasing onto the latest commit to allow tool streaming with Qwen models?
Thanks for your work!

@conornash

Would love to see this merged. Is there anything holding it up?


lsorber commented Mar 14, 2025

@abetlen I rebased the PR on the latest upstream main and added a small commit to fix the returned logprobs format.


tsharp commented Jan 26, 2026

I would also like to see this merge. 🧙‍♂️
