ToolCallingAgents do not follow the ReAct (Yao et al., 2022) Paradigm #1792
Replies: 1 comment
Great observation! This is an important distinction. My take is that this might be intentional.
However, I agree that reasoning helps in complex scenarios: when I build multi-agent systems, I find explicit reasoning valuable.
An alternative approach I've found effective: instead of in-prompt reasoning, use state-based reasoning:

```python
state = {
    "reasoning": "Need to fetch data before analysis",
    "next_action": "fetch_data",
    "rationale": {}
}
```

The agent writes its reasoning to shared state. This is similar to what Anthropic calls "thinking" tokens, but externalized. Working example: https://github.com/KeepALifeUS/autonomous-agents

So to answer your question: I suspect it's a pragmatic trade-off, but for complex workflows, adding structured reasoning (either in the prompt or in state) definitely improves results.
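To make the state-based idea concrete, here is a minimal sketch of a reasoning-in-state loop. All function and key names below are hypothetical illustrations, not smolagents API:

```python
# Minimal sketch of state-based reasoning: the agent records its reasoning
# in a shared state dict before each action, instead of emitting it in-prompt.
# All names here are hypothetical illustrations, not smolagents API.

def fetch_data(state):
    state["data"] = [1, 2, 3]  # stand-in for a real data source
    return state

def analyze(state):
    state["result"] = sum(state["data"])
    return state

ACTIONS = {"fetch_data": fetch_data, "analyze": analyze}

def step(state):
    """Execute the chosen action, keeping the agent's rationale on record."""
    # The reasoning written here is inspectable and loggable after the run,
    # unlike in-prompt chain-of-thought that lives only in the transcript.
    state["rationale"][state["next_action"]] = state["reasoning"]
    return ACTIONS[state["next_action"]](state)

state = {
    "reasoning": "Need to fetch data before analysis",
    "next_action": "fetch_data",
    "rationale": {},
}
state = step(state)

# A (hypothetical) policy then sets the reasoning for the next step:
state["reasoning"] = "Data is available; run the analysis"
state["next_action"] = "analyze"
state = step(state)
```

One nice property of this design is that the `rationale` dict doubles as an audit log: you can replay why the agent took each action without re-running the model.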
Hello everyone!
The documentation implies that smolagents agents follow the ReAct (Yao et al., 2022) paradigm. This is true only for CodeAgents, not for ToolCallingAgents — you can verify this from the specific prompts at:
https://github.com/huggingface/smolagents/tree/main/src/smolagents/prompts
The ToolCallingAgents work through a simple Action/Observation loop (without any Reasoning or CoT step). This was unclear in the docs, and I am wondering: 1. Is this intentional? 2. Why is it done this way, when we could get better results by simply adding a reasoning step to the prompt requirements?
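To illustrate the distinction being asked about: a plain tool-calling loop prompts only for an action, while a ReAct-style variant also requires the model to emit its reasoning before each tool call. A minimal sketch (the prompt strings and field names are illustrative, not smolagents' actual templates):

```python
# Illustrative sketch of the two prompting styles discussed above.
# These strings are hypothetical examples, not smolagents' actual templates.

# Plain tool-calling: Action -> Observation, no reasoning requested.
TOOL_CALLING_PROMPT = (
    "You have access to these tools: {tools}.\n"
    'Respond with a JSON tool call: {{"name": ..., "arguments": ...}}.\n'
    "You will then receive an Observation and may call another tool."
)

# ReAct-style: Thought -> Action -> Observation, reasoning made explicit.
REACT_STYLE_PROMPT = (
    "You have access to these tools: {tools}.\n"
    "First write a line starting with 'Thought:' explaining your reasoning,\n"
    'then respond with a JSON tool call: {{"name": ..., "arguments": ...}}.\n'
    "You will then receive an Observation; repeat Thought/Action/Observation."
)

def render(template: str, tools: list[str]) -> str:
    """Fill the tool list into a prompt template."""
    return template.format(tools=", ".join(tools))

print(render(REACT_STYLE_PROMPT, ["fetch_data", "analyze"]))
```

The only structural difference is the required `Thought:` line, which is what the ReAct paper's Reasoning step amounts to at the prompt level.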