Commit bc8b95e

Sync open source content 🐝 (from 02b3c1c7da71d04dfed064ae78137651e416418b)
1 parent 4d48b62 commit bc8b95e

6 files changed: +457 -0 lines changed

_meta.global.tsx

Lines changed: 14 additions & 0 deletions
```diff
@@ -152,6 +152,20 @@ const meta = {
       },
     },
   },
+  "agents-api": {
+    title: "Gram Agents API (Beta)",
+    items: {
+      "overview": {
+        title: "Gram Agents API Overview",
+      },
+      "billing-and-usage": {
+        title: "Gram Agents API Billing and Usage",
+      },
+      "example-usage": {
+        title: "Gram Agents API Usage Examples",
+      },
+    },
+  },
   "command-line": {
     title: "Command Line",
     items: {
```
Lines changed: 49 additions & 0 deletions
@@ -0,0 +1,49 @@

---
title: "Gram billing and usage"
description: "Understand how Gram Agents API usage is billed and how to manage usage limits."
sidebar:
  order: 2
---

import { Callout } from "@/mdx/components";

## Authentication

To use the Gram Agents API, create a Chat API key in the Gram dashboard and include it in requests.

### Creating a Chat API key

In the [Gram dashboard](https://app.getgram.ai), navigate to **Settings** and create a new API key with chat permissions.

![Screenshot of the Gram settings page showing chat API key creation](/assets/docs/gram/img/guides/chat-key.png)

### Request headers

Include the following headers with each request:

```yaml
Content-Type: application/json
Gram-Key: <your-chat-api-key>
Gram-Project: <your-project-slug>
```

The `Gram-Project` header specifies which project the request is associated with. Use `default` if not using multiple projects.
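As a minimal sketch, a request using these headers might look like the following. The `my-api` toolset and `my-env` environment slugs are placeholders for your own project; see the usage examples page for complete requests.

```python
import os
import requests

# Agents API endpoint (documented in the usage examples)
url = "https://app.getgram.ai/rpc/agents.response"

headers = {
    "Content-Type": "application/json",
    "Gram-Key": os.getenv("GRAM_API_KEY"),  # Chat API key created in the dashboard
    "Gram-Project": "default",              # project slug; "default" if you have one project
}

# Placeholder payload: "my-api" and "my-env" are example slugs
payload = {
    "model": "openai/gpt-4o",
    "instructions": "You are a helpful assistant.",
    "input": "Hello!",
    "toolsets": [
        {"toolset_slug": "my-api", "environment_slug": "my-env", "headers": {}},
    ],
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["output"][-1]["content"][-1]["text"])
```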
## Billing

Gram Agents API usage is billed as chat usage, similar to Playground usage. Model costs are tracked and billed based on the tokens consumed during agent execution.

### Usage limits

Default account chat credit limits apply to Gram Agents API requests.

<Callout type="info">
To request higher limits, upgrade your account or contact [[email protected]](mailto:[email protected]).
</Callout>

### Monitoring usage

Track Gram Agents API usage in the [Gram dashboard](https://app.getgram.ai/billing) under **Billing**.

![Screenshot of the Gram billing dashboard showing chat-based credits](/assets/docs/gram/img/guides/chat-based-credits.png)
Lines changed: 272 additions & 0 deletions
@@ -0,0 +1,272 @@

---
title: "Gram Agents API usage examples"
description: "Examples demonstrating different ways to use the Gram Agents API."
sidebar:
  order: 3
---

The following examples demonstrate different ways to use the Gram Agents API. All examples use the API endpoint at `https://app.getgram.ai/rpc/agents.response`.
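The Python snippets below use the `requests` library and read a Chat API key from the `GRAM_API_KEY` environment variable.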
## Basic request

A simple synchronous request with a single toolset:

```python
import os
import requests

url = "https://app.getgram.ai/rpc/agents.response"

headers = {
    "Content-Type": "application/json",
    "Gram-Key": os.getenv("GRAM_API_KEY"),
    "Gram-Project": "default",
}

payload = {
    "model": "openai/gpt-4o",
    "instructions": "You are a helpful assistant.",
    "input": "What information can you retrieve about my account?",
    "toolsets": [
        {
            "toolset_slug": "my-api",
            "environment_slug": "my-env",
            "headers": {}
        },
    ],
}

response = requests.post(url, headers=headers, json=payload)
data = response.json()

print(data["output"][-1]["content"][-1]["text"])
```

## Multiple toolsets

Combine multiple toolsets in a single request to give the agent access to different APIs:

```python
payload = {
    "model": "openai/gpt-4o",
    "instructions": "You are a helpful assistant with access to multiple services.",
    "input": "Find the user's email and look up their payment history.",
    "toolsets": [
        {
            "toolset_slug": "user-api",
            "environment_slug": "my-env",
            "headers": {}
        },
        {
            "toolset_slug": "payments-api",
            "environment_slug": "my-env",
            "headers": {}
        },
    ],
}
```
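This payload is sent with the same `requests.post(url, headers=headers, json=payload)` call as the basic request above.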
## Sub-agents

Define specialized sub-agents for complex workflows. Each sub-agent can have its own toolsets and instructions:

```python
payload = {
    "model": "openai/gpt-4o",
    "async": True,
    "instructions": "You are a coordinator that delegates tasks to specialized agents.",
    "input": "Get user details and their payment history, then summarize.",
    "sub_agents": [
        {
            "name": "User Agent",
            "description": "Handles user-related operations.",
            "instructions": "Fetch user information using the provided tools.",
            "toolsets": [
                {
                    "toolset_slug": "user-api",
                    "environment_slug": "my-env",
                    "headers": {}
                },
            ]
        },
        {
            "name": "Payments Agent",
            "description": "Handles payment-related operations.",
            "tools": [
                "tools:http:payments:get_charges",
                "tools:http:payments:get_refunds",
            ],
            "environment_slug": "my-env",
        },
    ],
}
```

Sub-agents can be configured with either:

- `toolsets`: Full toolset references
- `tools`: Specific tool URNs for fine-grained control
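In the example above, the User Agent receives every tool in the `user-api` toolset, while the Payments Agent is limited to the two payment tools listed by URN.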
## Asynchronous execution

For longer-running tasks, use async mode and poll for results:

```python
import os
import time
import requests

url = "https://app.getgram.ai/rpc/agents.response"

headers = {
    "Content-Type": "application/json",
    "Gram-Key": os.getenv("GRAM_API_KEY"),
    "Gram-Project": "default",
}

payload = {
    "model": "openai/gpt-4o",
    "async": True,
    "instructions": "You are a helpful assistant.",
    "input": "Analyze the data and provide a summary.",
    "toolsets": [
        {
            "toolset_slug": "analytics-api",
            "environment_slug": "my-env",
            "headers": {}
        },
    ],
}

# Start the async request
response = requests.post(url, headers=headers, json=payload)
data = response.json()
response_id = data["id"]

print(f"Response ID: {response_id}")

# Poll for completion
poll_url = f"https://app.getgram.ai/rpc/agents.response?response_id={response_id}"

while True:
    time.sleep(5)
    poll_response = requests.get(poll_url, headers=headers)
    poll_data = poll_response.json()
    status = poll_data.get("status")

    print(f"Status: {status}")

    if status != "in_progress":
        print(poll_data["output"][-1]["content"][-1]["text"])
        break
```
## Multi-turn conversations with previous_response_id

Chain responses together using `previous_response_id` to build conversational agents:

```python
import os
import requests

url = "https://app.getgram.ai/rpc/agents.response"

headers = {
    "Content-Type": "application/json",
    "Gram-Key": os.getenv("GRAM_API_KEY"),
    "Gram-Project": "default",
}

payload = {
    "model": "openai/gpt-4o",
    "instructions": "You are a helpful assistant.",
    "input": "Get the details of organization 'acme-corp'.",
    "toolsets": [
        {
            "toolset_slug": "org-api",
            "environment_slug": "my-env",
            "headers": {}
        },
    ],
}

# First turn
response = requests.post(url, headers=headers, json=payload)
data = response.json()

print("Turn 1:", data["output"][-1]["content"][-1]["text"])

# Second turn - reference the previous response
payload["previous_response_id"] = data["id"]
payload["input"] = "What workspaces are in that organization?"

response = requests.post(url, headers=headers, json=payload)
data = response.json()

print("Turn 2:", data["output"][-1]["content"][-1]["text"])
```
## Multi-turn conversations with message history

Alternatively, pass the full conversation history in the `input` field:

```python
# First turn
payload = {
    "model": "openai/gpt-4o",
    "instructions": "You are a helpful assistant.",
    "input": [
        {"role": "user", "content": "Get the details of organization 'acme-corp'."}
    ],
    "toolsets": [
        {
            "toolset_slug": "org-api",
            "environment_slug": "my-env",
            "headers": {}
        },
    ],
}

response = requests.post(url, headers=headers, json=payload)
data = response.json()

# Second turn - include previous output in context
payload["input"] = [
    *data["output"],
    {"role": "user", "content": "What workspaces are in that organization?"}
]

response = requests.post(url, headers=headers, json=payload)
```
## Disable response storage

Use `store: false` to prevent the response from being saved:

```python
payload = {
    "model": "openai/gpt-4o",
    "instructions": "You are a helpful assistant.",
    "input": "Process this sensitive request.",
    "store": False,
    "toolsets": [
        {
            "toolset_slug": "my-api",
            "environment_slug": "my-env",
            "headers": {}
        },
    ],
}

response = requests.post(url, headers=headers, json=payload)
data = response.json()

# Response is available immediately but will be deleted shortly after
print(data["output"][-1]["content"][-1]["text"])
```

Note that `store: false` requires synchronous execution (`async: false` or omitted).

## More examples

Additional examples are available in the [Gram examples repository](https://github.com/speakeasy-api/gram/tree/main/examples/agentsapi).
