
Commit 4b2b4ea
Merge branch 'main' into feat/reasoning
2 parents: a6b7282 + 03acf41


47 files changed (+2208, -985 lines)

CHANGELOG.md

Lines changed: 18 additions & 3 deletions
@@ -11,11 +11,22 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ### New features
 
-* Added new family of functions (`parallel_chat()`, `parallel_chat_text()`, and `parallel_chat_structured()`) for submitting multiple prompts at once with some basic rate limiting toggles. (#188)
-* `ChatOpenAI()` and `ChatAzureOpenAI()` gain access to latest models, built-in tools, image generation, etc. as a result of moving to the new [Responses API](https://platform.openai.com/docs/api-reference/responses). (#192)
 * `ChatOpenAI()`, `ChatAnthropic()`, and `ChatGoogle()` gain a new `reasoning` parameter to easily opt-into, and fully customize, reasoning capabilities. (#202)
 * A new `ContentThinking` content type was added and captures the "thinking" portion of a reasoning model. (#192)
-* `ChatAnthropic()` and `ChatBedrockAnthropic()` gain new `cache` parameter to control caching. By default it is set to "5m". This should (on average) reduce the cost of your chats. (#215)
+* Added support for built-in provider tools via a new `ToolBuiltIn` class. This enables provider-specific functionality like OpenAI's image generation to be registered and used as tools. Built-in tools pass raw provider definitions directly to the API rather than wrapping Python functions. (#214)
+* `ChatGoogle()` gains basic support for image generation. (#214)
+* `ChatOpenAI()` and `ChatAzureOpenAI()` gain a new `service_tier` parameter to request a specific service tier (e.g., `"flex"` for slower/cheaper or `"priority"` for faster/more expensive). (#204)
+
+### Changes
+
+* The `Chat.get_cost()` method's `options` parameter was renamed to `include`. (#244)
+
+## [0.14.0] - 2025-12-09
+
+### New features
+
+* `ChatOpenAI()` (and `ChatAzureOpenAI()`) gain access to latest models, built-in tools, etc. as a result of moving to the new [Responses API](https://platform.openai.com/docs/api-reference/responses). (#192)
+* Added new family of functions (`parallel_chat()`, `parallel_chat_text()`, and `parallel_chat_structured()`) for submitting multiple prompts at once with some basic rate limiting toggles. (#188)
 * Added support for systematic evaluation via [Inspect AI](https://inspect.aisi.org.uk/). This includes:
   * A new `.export_eval()` method for exporting conversation history as an Inspect eval dataset sample. This supports multi-turn conversations, tool calls, images, PDFs, and structured data.
   * A new `.to_solver()` method for translating chat instances into Inspect solvers that can be used with Inspect's evaluation framework.

@@ -30,12 +41,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ### Improvements
 
+* `ChatAnthropic()` and `ChatBedrockAnthropic()` now default to Claude Sonnet 4.5.
+* `ChatGroq()` now defaults to llama-3.1-8b-instant.
+* `Chat.chat()`, `Chat.stream()`, and related methods now automatically complete dangling tool requests when a chat is interrupted during a tool call loop, allowing the conversation to be resumed without causing API errors (#230).
 * `content_pdf_file()` and `content_pdf_url()` now include relevant `filename` information. (#199)
 
 ### Bug fixes
 
 * `.set_model_params()` now works correctly for `.*_async()` methods. (#198)
 * `.chat_structured()` results are now included correctly into the multi-turn conversation history. (#203)
+* `ChatAnthropic()` now drops empty assistant turns to avoid API errors when tools return side-effect only results. (#226)
 
 ## [0.13.2] - 2025-10-02
 
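The `parallel_chat()` family above submits many prompts at once with "basic rate limiting toggles". As a rough sketch of that idea (a stand-in, not chatlas's actual implementation), concurrency can be bounded with an `asyncio.Semaphore` while preserving result order:

```python
import asyncio

async def fake_chat(prompt: str) -> str:
    """Stand-in for a single provider round trip."""
    await asyncio.sleep(0)
    return prompt.upper()

async def parallel_chat(prompts: list[str], max_active: int = 2) -> list[str]:
    """Run all prompts concurrently, but at most `max_active` at a time."""
    sem = asyncio.Semaphore(max_active)

    async def one(prompt: str) -> str:
        async with sem:  # waits while max_active requests are in flight
            return await fake_chat(prompt)

    # gather() returns results in input order, regardless of completion order
    return list(await asyncio.gather(*(one(p) for p in prompts)))

print(asyncio.run(parallel_chat(["a", "b", "c"])))  # ['A', 'B', 'C']
```

The real functions take chat instances and expose more knobs; the semaphore pattern is just the core of a concurrency cap.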

chatlas/__init__.py

Lines changed: 6 additions & 8 deletions
@@ -10,8 +10,6 @@
 from ._content import (
     ContentToolRequest,
     ContentToolResult,
-    ContentToolResultImage,
-    ContentToolResultResource,
 )
 from ._content_image import content_image_file, content_image_plot, content_image_url
 from ._content_pdf import content_pdf_file, content_pdf_url

@@ -36,8 +34,8 @@
 from ._provider_portkey import ChatPortkey
 from ._provider_snowflake import ChatSnowflake
 from ._tokens import token_usage
-from ._tools import Tool, ToolRejectError
-from ._turn import Turn
+from ._tools import Tool, ToolBuiltIn, ToolRejectError
+from ._turn import AssistantTurn, SystemTurn, Turn, UserTurn
 
 try:
     from ._version import version as __version__

@@ -81,20 +79,20 @@
     "content_pdf_url",
     "ContentToolRequest",
     "ContentToolResult",
-    "ContentToolResultImage",
-    "ContentToolResultResource",
     "interpolate",
     "interpolate_file",
     "Provider",
     "token_usage",
     "Tool",
+    "ToolBuiltIn",
     "ToolRejectError",
     "Turn",
+    "UserTurn",
+    "SystemTurn",
+    "AssistantTurn",
     "types",
 )
 
 # Rebuild content models to resolve forward references to ToolAnnotation
 ContentToolRequest.model_rebuild()
 ContentToolResult.model_rebuild()
-ContentToolResultImage.model_rebuild()
-ContentToolResultResource.model_rebuild()
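The newly exported `ToolBuiltIn` differs from a regular `Tool` in what it carries: a raw provider definition rather than a wrapped Python function. A simplified sketch of that distinction (these dataclasses are illustrative stand-ins, not chatlas's real classes):

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Tool:
    """A regular tool wraps a Python function the model can call."""
    name: str
    func: Callable[..., Any]

@dataclass
class ToolBuiltIn:
    """A built-in tool passes a raw provider definition to the API as-is."""
    definition: dict[str, Any] = field(default_factory=dict)

# Hypothetical OpenAI-style built-in tool definition
image_gen = ToolBuiltIn(definition={"type": "image_generation"})
adder = Tool(name="add", func=lambda a, b: a + b)

print(image_gen.definition)  # {'type': 'image_generation'}
print(adder.func(2, 3))      # 5
```

The point of the split is that a built-in tool has no local callable to invoke; the provider executes it server-side, so only the definition dict crosses the wire.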

chatlas/_batch_job.py

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@
 from ._chat import Chat
 from ._content import Content
 from ._provider import BatchStatus
-from ._turn import Turn, user_turn
+from ._turn import AssistantTurn, Turn, user_turn
 from ._typing_extensions import TypedDict
 
 BatchStage = Literal["submitting", "waiting", "retrieving", "done"]

@@ -223,7 +223,7 @@ def _retrieve(self) -> bool:
         self._save_state()
         return True
 
-    def result_turns(self) -> list[Turn | None]:
+    def result_turns(self) -> list[AssistantTurn | None]:
         turns = []
         for result in self.results:
             turn = self.provider.batch_result_turn(