README.md (30 additions, 6 deletions)

@@ -30,9 +30,9 @@ end

#### Prompts

-> ℹ️ **Tip:** Some providers (such as OpenAI) support `system` and `developer`
-> roles, but the examples in this README stick to `user` roles since they are
-> supported across all providers.
+> ℹ️ **Tip:** Some providers support `system` and `developer` roles while
+> others do not; llm.rb maps each role to the closest equivalent the
+> provider supports.

A prompt builder that produces a chain of messages that can be sent in one request:

@@ -44,7 +44,7 @@ llm = LLM.openai(key: ENV.fetch("KEY"))
bot = LLM::Bot.new(llm)

prompt = bot.build_prompt do
it.user "Answer concisely."
it.system "Answer concisely."
it.user "Was 2024 a leap year?"
it.user "How many days were in that year?"
end
@@ -100,7 +100,7 @@ llm = LLM.openai(key: ENV.fetch("KEY"))
bot = LLM::Bot.new(llm, tools: [System])

prompt = bot.build_prompt do
it.user "You can run safe shell commands."
it.system "You can run safe shell commands."
it.user "Run `date`."
end

@@ -109,6 +109,29 @@ bot.chat(bot.functions.map(&:call))
bot.messages.select(&:assistant?).each { |m| puts "[#{m.role}] #{m.content}" }
```

#### Agents

The [LLM::Agent](https://0x1eef.github.io/x/llm.rb/LLM/LLM/Agent.html)
class provides a class-level DSL for defining reusable, preconfigured
assistants with defaults for model, tools, schema, and instructions.
Instructions are injected only on the first request, and unlike
[LLM::Bot](https://0x1eef.github.io/x/llm.rb/LLM/LLM/Bot.html),
an [LLM::Agent](https://0x1eef.github.io/x/llm.rb/LLM/LLM/Agent.html)
will automatically call tools when needed:

```ruby
class SystemAdmin < LLM::Agent
model "gpt-4.1-nano"
instructions "You are a Linux system admin"
tools Shell
schema Result
end

llm = LLM.openai(key: ENV["KEY"])
agent = SystemAdmin.new(llm)
res = agent.chat("Run 'date'")
```
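
As a rough sketch (assuming the response and message interfaces shown elsewhere in this README), the returned response and the conversation history can be inspected like so:

```ruby
puts res.choices[0].content
agent.messages.select(&:assistant?).each { |m| puts "[#{m.role}] #{m.content}" }
```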

## Features

#### General
@@ -120,6 +143,7 @@ bot.messages.select(&:assistant?).each { |m| puts "[#{m.role}] #{m.content}" }
#### Chat, Agents
- 🧠 Stateless + stateful chat (completions + responses)
- 🤖 Tool calling / function execution
- 🔁 Agent tool-call auto-execution (bounded)
- 🗂️ JSON Schema structured output
- 📡 Streaming responses

@@ -320,7 +344,7 @@ end
llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm, schema: Player)
prompt = bot.build_prompt do
it.user "The player's name is Sam and their position is (7, 12)."
it.system "The player's name is Sam and their position is (7, 12)."
it.user "Return the player's name and position"
end

lib/llm.rb (1 addition, 0 deletions)

@@ -18,6 +18,7 @@ module LLM
require_relative "llm/file"
require_relative "llm/provider"
require_relative "llm/bot"
require_relative "llm/agent"
require_relative "llm/buffer"
require_relative "llm/function"
require_relative "llm/eventstream"
lib/llm/agent.rb (new file, 214 additions)
@@ -0,0 +1,214 @@
# frozen_string_literal: true

module LLM
  ##
  # {LLM::Agent LLM::Agent} provides a class-level DSL for defining
  # reusable, preconfigured assistants with defaults for model,
  # tools, schema, and instructions.
  #
  # @note
  #  Unlike {LLM::Bot LLM::Bot}, this class will automatically run
  #  tool calls for you.
  #
  # @note
  #  Instructions are injected only on the first request.
  #
  # @note
  #  This idea originally came from RubyLLM and was adapted to llm.rb.
  #
  # @example
  #  class SystemAdmin < LLM::Agent
  #    model "gpt-4.1-nano"
  #    instructions "You are a Linux system admin"
  #    tools Shell
  #    schema Result
  #  end
  #
  #  llm = LLM.openai(key: ENV["KEY"])
  #  agent = SystemAdmin.new(llm)
  #  agent.chat("Run 'date'")
  class Agent
    ##
    # Set or get the default model
    # @param [String, nil] model
    # The model identifier
    # @return [String, nil]
    # Returns the current model when no argument is provided
    def self.model(model = nil)
      return @model if model.nil?
      @model = model
    end

    ##
    # Set or get the default tools
    # @param [Array<LLM::Function>, nil] tools
    # One or more tools
    # @return [Array<LLM::Function>]
    # Returns the current tools when no argument is provided
    def self.tools(*tools)
      return @tools || [] if tools.empty?
      @tools = tools.flatten
    end

    ##
    # Set or get the default schema
    # @param [#to_json, nil] schema
    # The schema
    # @return [#to_json, nil]
    # Returns the current schema when no argument is provided
    def self.schema(schema = nil)
      return @schema if schema.nil?
      @schema = schema
    end

    ##
    # Set or get the default instructions
    # @param [String, nil] instructions
    # The system instructions
    # @return [String, nil]
    # Returns the current instructions when no argument is provided
    def self.instructions(instructions = nil)
      return @instructions if instructions.nil?
      @instructions = instructions
    end

    ##
    # @param [LLM::Provider] provider
    # A provider
    # @param [Hash] params
    # The parameters to maintain throughout the conversation.
    # Any parameter the provider supports can be included and
    # not only those listed here.
    # @option params [String] :model Defaults to the provider's default model
    # @option params [Array<LLM::Function>, nil] :tools Defaults to nil
    # @option params [#to_json, nil] :schema Defaults to nil
    def initialize(provider, params = {})
      defaults = {model: self.class.model, tools: self.class.tools, schema: self.class.schema}.compact
      @provider = provider
      @bot = LLM::Bot.new(provider, defaults.merge(params))
      @instructions_applied = false
    end

    ##
    # Maintain a conversation via the chat completions API.
    # This method immediately sends a request to the LLM and returns the response.
    #
    # @param prompt (see LLM::Provider#complete)
    # @param [Hash] params The params passed to the provider, including optional :stream, :tools, :schema etc.
    # @option params [Integer] :max_tool_rounds The maximum number of tool call iterations (default 10)
    # @return [LLM::Response] Returns the LLM's response for this turn.
    # @example
    #  llm = LLM.openai(key: ENV["KEY"])
    #  agent = LLM::Agent.new(llm)
    #  response = agent.chat("Hello, what is your name?")
    #  puts response.choices[0].content
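    # @example
    #  # Cap automatic tool-call rounds at three instead of the default ten
    #  agent.chat("Run 'date'", max_tool_rounds: 3)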
    def chat(prompt, params = {})
      i, max = 0, Integer(params.delete(:max_tool_rounds) || 10)
      res = @bot.chat(apply_instructions(prompt), params)
      until @bot.functions.empty?
        raise LLM::ToolLoopError, "pending tool calls remain" if i >= max
        res = @bot.chat @bot.functions.map(&:call), params
        i += 1
      end
      @instructions_applied = true
      res
    end

    ##
    # Maintain a conversation via the responses API.
    # This method immediately sends a request to the LLM and returns the response.
    #
    # @note Not all LLM providers support this API
    # @param prompt (see LLM::Provider#complete)
    # @param [Hash] params The params passed to the provider, including optional :stream, :tools, :schema etc.
    # @option params [Integer] :max_tool_rounds The maximum number of tool call iterations (default 10)
    # @return [LLM::Response] Returns the LLM's response for this turn.
    # @example
    #  llm = LLM.openai(key: ENV["KEY"])
    #  agent = LLM::Agent.new(llm)
    #  res = agent.respond("What is the capital of France?")
    #  puts res.output_text
    def respond(prompt, params = {})
      i, max = 0, Integer(params.delete(:max_tool_rounds) || 10)
      res = @bot.respond(apply_instructions(prompt), params)
      until @bot.functions.empty?
        raise LLM::ToolLoopError, "pending tool calls remain" if i >= max
        res = @bot.respond @bot.functions.map(&:call), params
        i += 1
      end
      @instructions_applied = true
      res
    end

    ##
    # @return [LLM::Buffer<LLM::Message>]
    def messages
      @bot.messages
    end

    ##
    # @return [Array<LLM::Function>]
    def functions
      @bot.functions
    end

    ##
    # @return [LLM::Object]
    def usage
      @bot.usage
    end

    ##
    # @return [LLM::Builder]
    def build_prompt(&)
      @bot.build_prompt(&)
    end

    ##
    # @param [String] url
    # The URL
    # @return [LLM::Object]
    # Returns a tagged object
    def image_url(url)
      @bot.image_url(url)
    end

    ##
    # @param [String] path
    # The path
    # @return [LLM::Object]
    # Returns a tagged object
    def local_file(path)
      @bot.local_file(path)
    end

    ##
    # @param [LLM::Response] res
    # The response
    # @return [LLM::Object]
    # Returns a tagged object
    def remote_file(res)
      @bot.remote_file(res)
    end

    private

    ##
    # Builds the outgoing prompt, prepending the class-level
    # instructions as a system message on the first request only.
    def apply_instructions(prompt)
      instr = self.class.instructions
      return prompt unless instr
      if LLM::Builder === prompt
        messages = prompt.to_a
        builder = LLM::Builder.new(@provider) do |builder|
          builder.system instr unless @instructions_applied
[Review comment, Member Author] Won't work with Gemini.
[Review comment, Member Author] Fixed by 3291f3a
          messages.each { |msg| builder.chat(msg.content, role: msg.role) }
        end
        builder.tap(&:call)
      else
        build_prompt do
          _1.system instr unless @instructions_applied
[Review comment, Member Author] Won't work with Gemini.
[Review comment, Member Author] Fixed by 3291f3a
          _1.user prompt
        end
      end
    end
  end
end
lib/llm/bot.rb (1 addition, 1 deletion)

@@ -131,7 +131,7 @@ def usage
  # end
  # bot.chat(prompt)
  def build_prompt(&)
-    LLM::Builder.new(&).tap(&:call)
+    LLM::Builder.new(@provider, &).tap(&:call)
  end

  ##
lib/llm/builder.rb (22 additions, 4 deletions)

@@ -4,6 +4,9 @@
# The {LLM::Builder LLM::Builder} class can build a collection
# of messages that can be sent in a single request.
#
# @note
# This API is not meant to be used directly.
#
# @example
# llm = LLM.openai(key: ENV["KEY"])
# bot = LLM::Bot.new(llm)
@@ -16,7 +19,8 @@ class LLM::Builder
  ##
  # @param [Proc] evaluator
  # The evaluator
-  def initialize(&evaluator)
+  def initialize(provider, &evaluator)
    @provider = provider
    @buffer = []
    @evaluator = evaluator
  end
@@ -33,7 +37,13 @@ def call
  # @param [Symbol] role
  # The role (eg user, system)
  # @return [void]
-  def chat(content, role: :user)
+  def chat(content, role: @provider.user_role)
    # Map the generic :system / :user / :developer roles to the
    # provider's native role names
    role = case role.to_sym
           when :system then @provider.system_role
           when :user then @provider.user_role
           when :developer then @provider.developer_role
           else role
           end
    @buffer << LLM::Message.new(role, content)
  end

@@ -42,15 +52,23 @@ def chat(content, role: :user)
  # The message content
  # @return [void]
  def user(content)
-    chat(content, role: :user)
+    chat(content, role: @provider.user_role)
  end

  ##
  # @param [String] content
  # The message content
  # @return [void]
  def system(content)
-    chat(content, role: :system)
+    chat(content, role: @provider.system_role)
  end

  ##
  # @param [String] content
  # The message content
  # @return [void]
  def developer(content)
    chat(content, role: @provider.developer_role)
  end

  ##
lib/llm/error.rb (4 additions, 0 deletions)

@@ -50,4 +50,8 @@ def message
  ##
  # When the context window is exceeded
  ContextWindowError = Class.new(InvalidRequestError)

  ##
  # When stuck in a tool call loop
  ToolLoopError = Class.new(Error)
end