Conversation
examples/agents/react_agent.py
Outdated
```python
from llama_stack_client.types.tool_def_param import Parameter
...
class TorchtuneTool(ClientTool):
```
Will convert ClientTool into a decorator in a follow-up PR.
# What does this PR do?

- See https://github.com/meta-llama/llama-stack/discussions/975

**Changes**

- ✅ Bugfix: ToolResponseMessage role
- ✅ Add ReACT default prompt + default output parser
- ✅ Add ReActAgent wrapper
- 🚧 Remove ClientTool and simplify it as a decorator (separate PR, including llama-stack-apps)
- ✅ Make the agent able to return structured outputs
  - Note that some remote providers do not support `response_format` structured outputs, so it is exposed as an optional flag when calling the `ReActAgent` wrapper.

## Test Plan

See tests in llamastack/llama-stack-apps#166

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
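The "default output parser" and structured-output items above can be illustrated with a minimal, hypothetical sketch of a ReAct output parser: the model is prompted to emit JSON with thought/action/answer fields, and the parser decides whether a tool call or a final answer follows. The field names and the `ReActStep` container here are assumptions for illustration, not the actual llama-stack-client schema.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReActStep:
    """One parsed turn of a ReAct-style agent (illustrative structure)."""
    thought: str
    tool_name: Optional[str] = None
    tool_params: Optional[dict] = None
    answer: Optional[str] = None

def parse_react_output(raw: str) -> ReActStep:
    # Assumes the model emitted JSON like:
    # {"thought": ..., "action": {"tool_name": ..., "tool_params": ...}, "answer": ...}
    data = json.loads(raw)
    action = data.get("action") or {}
    return ReActStep(
        thought=data.get("thought", ""),
        tool_name=action.get("tool_name"),
        tool_params=action.get("tool_params"),
        answer=data.get("answer"),
    )
```

A step with a non-null `answer` would end the loop; otherwise the named tool is invoked and its result fed back as the next observation.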
Merge is blocked until llamastack/llama-stack-client-python#121 is released to the PyPI package.
```python
model = "meta-llama/Llama-3.1-8B-Instruct"

agent = ReActAgent(
```
What's your thinking behind having ReActAgent vs ReActAgentConfig + Agent? It seems that the former hides some configurations that were available, e.g. max_infer_iters, while the latter is consistent with how Agent is used.
ReActAgent is a simple wrapper and helper class that hides the configuration needed to create a ReActAgentConfig + Agent; otherwise they are the same. We can override configurations with custom_agent_config, which makes it equivalent to using Agent directly.
Isn't it just as simple with ReActAgentConfig + Agent? Just one extra line to instantiate the Agent? Then we don't need custom_agent_config.
Yeah, I guess it's the difference between:

```python
agent = ReActAgent(client, model, builtin_toolgroups, client_tools)
```

vs.

```python
agent_config = get_react_agent_config(builtin_toolgroups, client_tools, json_response_format)
agent = Agent(client, agent_config, client_tools, output_parser=ReActOutputParser())
```

The former hides agent_config and output_parser, while the latter requires users to know about output_parser. I guess it's a matter of how much we want to hide from users.
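The wrapper pattern under discussion can be sketched in a few lines. The class and helper names below mirror the snippets in this thread, but the bodies are illustrative stand-ins, not the real llama-stack-client implementation.

```python
class ReActOutputParser:
    """Placeholder for the parser that interprets ReAct-formatted turns."""

class Agent:
    """Illustrative base: takes an already-built config and parser."""
    def __init__(self, client, agent_config, client_tools=(), output_parser=None):
        self.client = client
        self.agent_config = agent_config
        self.client_tools = tuple(client_tools)
        self.output_parser = output_parser

def get_react_agent_config(builtin_toolgroups, client_tools, json_response_format=False):
    # Illustrative config dict; the real AgentConfig has many more fields.
    return {
        "toolgroups": list(builtin_toolgroups),
        "response_format": "json" if json_response_format else None,
        "max_infer_iters": 10,
    }

class ReActAgent(Agent):
    """Thin wrapper: hides agent_config and output_parser from the caller,
    but still allows full control via custom_agent_config."""
    def __init__(self, client, model, builtin_toolgroups=(), client_tools=(),
                 custom_agent_config=None):
        config = custom_agent_config or get_react_agent_config(
            builtin_toolgroups, client_tools)
        config["model"] = model
        super().__init__(client, config, client_tools,
                         output_parser=ReActOutputParser())
```

The trade-off is exactly as stated above: the wrapper saves one line and hides the parser, at the cost of a `custom_agent_config` escape hatch for anything the defaults don't cover.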
# What does this PR do?

- Address comments in #121

## Test Plan

- See llamastack/llama-stack-apps#166

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
# What does this PR do?

- Add a decorator on callables for defining client-side custom tools
- Addresses: llamastack/llama-stack#948

## Test Plan

Usage:

```python
@client_tool
def add(x: int, y: int) -> int:
    '''Add 2 integer numbers

    :param x: integer 1
    :param y: integer 2
    :returns: sum of x + y
    '''
    return x + y
```

`add` will be a ClientTool that can be passed to the agent.

- Working example in: llamastack/llama-stack-apps#166

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
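A decorator like the one described might work by inspecting the function signature and its Sphinx-style docstring to build tool metadata. The sketch below is a hypothetical stand-in for illustration: the `tool_def` attribute layout is an assumption, not the actual llama-stack-client ClientTool interface.

```python
import inspect

def client_tool(fn):
    """Hypothetical sketch: attach tool metadata derived from the signature
    and :param: docstring lines. Not the real llama-stack-client decorator."""
    sig = inspect.signature(fn)
    doc = inspect.getdoc(fn) or ""
    # Collect ":param name: description" lines from the docstring.
    param_docs = {}
    for line in doc.splitlines():
        line = line.strip()
        if line.startswith(":param"):
            _, rest = line.split(" ", 1)
            name, desc = rest.split(":", 1)
            param_docs[name.strip()] = desc.strip()
    fn.tool_def = {
        "name": fn.__name__,
        "description": doc.split("\n")[0],
        "parameters": {
            p.name: {
                "type": getattr(p.annotation, "__name__", "any"),
                "description": param_docs.get(p.name, ""),
            }
            for p in sig.parameters.values()
        },
    }
    return fn

@client_tool
def add(x: int, y: int) -> int:
    """Add 2 integer numbers

    :param x: integer 1
    :param y: integer 2
    :returns: sum of x + y
    """
    return x + y
```

The decorated function stays directly callable, while an agent runtime could read `add.tool_def` to advertise the tool to the model.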
# What does this PR do?

- See discussion in #121 (comment)

## Test Plan

Test with llamastack/llama-stack-apps#166:

```
LLAMA_STACK_BASE_URL=http://localhost:8321 pytest -v tests/client-sdk/agents/test_agents.py::test_override_system_message_behavior --inference-model "meta-llama/Llama-3.3-70B-Instruct"
```

<img width="1697" alt="image" src="https://github.com/user-attachments/assets/c036cbf6-9fc1-4064-82af-fa1984300653" />

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.