
Commit 46f1f05

example: multi-agent-debate
1 parent 603255d commit 46f1f05

9 files changed: +883 −0 lines changed

Lines changed: 115 additions & 0 deletions
@@ -0,0 +1,115 @@
Debate System Using LLM Agents
==============================

Overview
--------
This project is a debate system powered by LLMs using Langroid. It enables structured debates on topics
such as AI in healthcare, education, intellectual property, and societal biases.
The program creates and manages agents that represent opposing sides of a debate,
interact with users, and provide constructive feedback based on established debate criteria.

New topics, with their Pro and Con system messages, can be configured by editing the
system_messages.json file:

    "pro_ai": {
        "topic": "Your New TOPIC",
        "message": "YOUR Pro prompt"
    },
    "con_ai": {
        "topic": "Your New TOPIC",
        "message": "YOUR Con (opposing) prompt"
    }
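As a sketch of how such entries can be consumed, the snippet below collects the topic titles from a structure like the one above (the `extract_topics` helper is illustrative only; the project itself validates this file with the Pydantic model in model.py):

```python
import json

def extract_topics(system_messages):
    """Collect the unique topic titles from the pro/con agent entries."""
    topics = []
    for entry in system_messages.values():
        topic = entry["topic"]
        if topic not in topics:
            topics.append(topic)
    return topics

# Example mirroring the JSON structure shown above
messages = json.loads("""
{
  "pro_ai": {"topic": "AI in Healthcare", "message": "Argue FOR the topic."},
  "con_ai": {"topic": "AI in Healthcare", "message": "Argue AGAINST the topic."}
}
""")
print(extract_topics(messages))  # ['AI in Healthcare']
```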

Features
--------
- Multiple Debate Topics:
  - AI in Healthcare
  - AI and Intellectual Property
  - AI and Societal Biases
  - AI as an Educator
- Agent-Based Interaction:
  - Pro and Con agents for each topic simulate structured debate arguments.
- Configurable to use different LLMs from OpenAI, Google, and Mistral:
  1: gpt-4o
  2: gpt-4
  3: gpt-4o-mini
  4: gpt-4-turbo
  5: gpt-4-32k
  6: gpt-3.5-turbo-1106
  7: Mistral: mistral:7b-instruct-v0.2-q8_0
  8: Gemini: gemini-2.0-flash
  9: Gemini: gemini-1.5-flash
  10: Gemini: gemini-1.5-flash-8b
  11: Gemini: gemini-1.5-pro
- Feedback Mechanism:
  - Provides structured feedback on debate performance based on key criteria.
- Interactive or Autonomous Mode:
  - Users can either control interactions manually or let agents autonomously continue debates.

File Structure
--------------
- main.py: The entry point of the application. Initializes the system, configures agents, and starts the debate loop.
- config.py: Provides functions for configuring global settings and LLM-specific parameters.
- model.py: Pydantic model for system_messages.json.
- system_messages.json: Topic titles and system messages for the Pro and Con agents. You can add more topics and
  their respective pro and con system messages here; the system dynamically updates the topic selection with the
  topics from this file. Each system message includes the statement "Limit responses to MAXIMUM 2 points expressed
  as single sentences." Change or delete it for a more realistic debate.
- system_message.py: Global system messages.
- utils.py: User prompts and other helper functions.
- generation_config_models.py: Pydantic model for generation_config.json.
- generation_config.json: LLM generation parameters.

Getting Started
---------------
Prerequisites
1. Python 3.8+
2. Langroid framework: install Langroid with the necessary dependencies:
       pip install "langroid[litellm]"
3. Set the following environment variables in a .env file in the root of your repo,
   or export them in your terminal:
       export OPENAI_API_KEY=<your OpenAI API key>
       export GEMINI_API_KEY=<your Gemini API key>
       export METAPHOR_API_KEY=<your Metaphor API key>
4. For more information, see:
   https://langroid.github.io/langroid/quick-start/setup/

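A quick way to check that these variables are set before launching (the `missing_env_keys` helper is illustrative, not part of the project):

```python
import os

REQUIRED_KEYS = ["OPENAI_API_KEY", "GEMINI_API_KEY", "METAPHOR_API_KEY"]

def missing_env_keys(required, env=os.environ):
    """Return the required keys that are absent or empty in the environment."""
    return [key for key in required if not env.get(key)]

# Example with an explicit mapping instead of the real environment:
fake_env = {"OPENAI_API_KEY": "sk-...", "GEMINI_API_KEY": ""}
print(missing_env_keys(REQUIRED_KEYS, fake_env))  # ['GEMINI_API_KEY', 'METAPHOR_API_KEY']
```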
Usage
-----
Run the CLI application from the root of the langroid repo with:
    python examples/multi-agent-debate/main.py

Options
- Debug mode: run the program with debug logs for detailed output.
    python examples/multi-agent-debate/main.py --debug
- Disable caching: avoid using cached responses for LLM interactions.
    python examples/multi-agent-debate/main.py --nocache

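The flags above suggest a command-line parser roughly like the following (a sketch; main.py's actual argument handling may differ):

```python
import argparse

def build_parser():
    """Build a parser for the two CLI flags described above."""
    parser = argparse.ArgumentParser(description="Multi-agent debate CLI")
    parser.add_argument("--debug", action="store_true", help="Enable debug logs")
    parser.add_argument("--nocache", action="store_true", help="Disable cached LLM responses")
    return parser

args = build_parser().parse_args(["--debug"])
print(args.debug, args.nocache)  # True False
```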
Interaction
1. Decide whether to use the same LLM for all agents or different ones.
2. Decide whether you want an autonomous debate between AI agents, or user vs. AI agent.
3. Select a debate topic.
4. Choose your side (Pro or Con).
5. Engage in the debate by providing arguments and receiving responses from agents.
6. Request feedback at any time by typing `f`.
7. Decide whether to run the Metaphor Search to find topic-relevant web links
   and summarize them.
8. End the debate manually by typing `done`.

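The control flow of steps 5 to 8 can be sketched as a small command loop (illustrative only; the real loop lives in main.py and routes turns through Langroid agents):

```python
def run_debate(inputs, respond):
    """Process user turns until 'done'; 'f' requests feedback instead of an argument.

    `inputs` is an iterable of user entries; `respond` maps a user argument
    to an agent reply (a stand-in for the Langroid agent call).
    """
    transcript = []
    for user_turn in inputs:
        if user_turn == "done":   # end the debate
            break
        if user_turn == "f":      # request feedback
            transcript.append(("feedback", "Feedback based on the debate criteria..."))
            continue
        transcript.append(("agent", respond(user_turn)))  # agent responds to the argument
    return transcript

log = run_debate(["AI improves diagnosis", "f", "done"], lambda arg: f"Counter to: {arg}")
print(log)
```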
Feedback Criteria
-----------------
The feedback mechanism evaluates debates based on:
1. Clash of Values
2. Argumentation
3. Cross-Examination
4. Rebuttals
5. Persuasion
6. Technical Execution
7. Adherence to Debate Etiquette
8. Final Focus

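As an illustration, the criteria can be assembled into a feedback request for the evaluating agent (a hypothetical helper, not the project's actual prompt):

```python
CRITERIA = [
    "Clash of Values", "Argumentation", "Cross-Examination", "Rebuttals",
    "Persuasion", "Technical Execution", "Adherence to Debate Etiquette", "Final Focus",
]

def feedback_prompt(criteria):
    """Render a numbered feedback request from the criteria list."""
    lines = [f"{i}. {name}" for i, name in enumerate(criteria, start=1)]
    return "Evaluate the debate on the following criteria:\n" + "\n".join(lines)

print(feedback_prompt(CRITERIA))
```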
License
-------
This project is licensed under the MIT License.
Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
from typing import Optional

import langroid.language_models as lm
import langroid.utils.configuration
from langroid.language_models import OpenAIGPTConfig
from langroid.utils.configuration import Settings

from generation_config_models import GenerationConfig, load_generation_config

# Constants
MODEL_MAP = {
    "1": lm.OpenAIChatModel.GPT4o,
    "2": lm.OpenAIChatModel.GPT4,
    "3": lm.OpenAIChatModel.GPT4o_MINI,
    "4": lm.OpenAIChatModel.GPT4_TURBO,
    "5": lm.OpenAIChatModel.GPT4_32K,
    "6": lm.OpenAIChatModel.GPT3_5_TURBO,
    "7": "ollama/mistral:7b-instruct-v0.2-q8_0",
    "8": "gemini/" + lm.GeminiModel.GEMINI_2_FLASH,
    "9": "gemini/" + lm.GeminiModel.GEMINI_1_5_FLASH,
    "10": "gemini/" + lm.GeminiModel.GEMINI_1_5_FLASH_8B,
    "11": "gemini/" + lm.GeminiModel.GEMINI_1_5_PRO,
}

MISTRAL_MAX_OUTPUT_TOKENS = 16_000


def get_global_settings(debug: bool = False, nocache: bool = True) -> Settings:
    """
    Retrieve global Langroid settings.

    Args:
        debug (bool): If True, enables debug mode.
        nocache (bool): If True, disables caching.

    Returns:
        Settings: Langroid's global configuration object.
    """
    return langroid.utils.configuration.Settings(
        debug=debug,
        cache=not nocache,
    )


def create_llm_config(
    chat_model_option: str, temperature: Optional[float] = None
) -> OpenAIGPTConfig:
    """
    Create an LLM configuration for the selected model.

    Looks up the user's selection (`chat_model_option`) in `MODEL_MAP`
    and builds an `OpenAIGPTConfig` with the generation parameters
    loaded from generation_config.json.

    Args:
        chat_model_option (str): The key corresponding to the user's selected model.
        temperature (Optional[float]): If given, overrides the temperature from the JSON config.

    Returns:
        OpenAIGPTConfig: A configuration object for the selected LLM.

    Raises:
        ValueError: If the user-provided `chat_model_option` does not exist in `MODEL_MAP`.
    """
    chat_model = MODEL_MAP.get(chat_model_option)
    if not chat_model:
        raise ValueError(f"Invalid model selection: {chat_model_option}")

    # Load generation configuration from JSON
    generation_config: GenerationConfig = load_generation_config(
        "examples/multi-agent-debate/generation_config.json"
    )

    # Determine max_output_tokens based on the selected model
    max_output_tokens = (
        MISTRAL_MAX_OUTPUT_TOKENS
        if chat_model_option == "7"
        else generation_config.max_output_tokens
    )

    # Use the passed temperature if provided; otherwise use the one from the JSON config
    effective_temperature = (
        temperature if temperature is not None else generation_config.temperature
    )

    # Create and return the LLM configuration
    return OpenAIGPTConfig(
        chat_model=chat_model,
        min_output_tokens=generation_config.min_output_tokens,
        max_output_tokens=max_output_tokens,
        temperature=effective_temperature,
        seed=generation_config.seed,
    )


def get_base_llm_config(
    chat_model_option: str, temperature: Optional[float] = None
) -> OpenAIGPTConfig:
    """
    Return the base LLM configuration for the selected model.

    Args:
        chat_model_option (str): The key corresponding to the user's selected model.
        temperature (Optional[float]): Optional temperature override.

    Returns:
        OpenAIGPTConfig: The selected LLM's configuration.
    """
    # Pass temperature only if it is provided
    if temperature is not None:
        return create_llm_config(chat_model_option, temperature)
    return create_llm_config(chat_model_option)
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
{
    "max_output_tokens": 10000,
    "min_output_tokens": 1,
    "temperature": 0.7,
    "seed": 42
}
Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
import json
from typing import Optional

from pydantic import BaseModel, Field


class GenerationConfig(BaseModel):
    """Represents configuration for text generation."""

    max_output_tokens: int = Field(default=1024, ge=1, description="Maximum output tokens.")
    min_output_tokens: int = Field(default=1, ge=0, description="Minimum output tokens.")
    temperature: float = Field(default=0.7, ge=0.0, le=1.0, description="Sampling temperature.")
    seed: Optional[int] = Field(
        default=42,
        description="Seed for reproducibility. If set, ensures deterministic outputs for the same input.",
    )


def load_generation_config(file_path: str) -> GenerationConfig:
    """
    Load and validate generation configuration from a JSON file.

    Args:
        file_path (str): Path to the JSON file.

    Returns:
        GenerationConfig: Validated generation configuration.
    """
    with open(file_path, "r", encoding="utf-8") as f:
        config_data = json.load(f)
    return GenerationConfig(**config_data)
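For readers without pydantic installed, the loader's behavior can be approximated with the standard library alone (a sketch that mirrors the defaults and the file round-trip, not the project's actual validation):

```python
import json
import tempfile

# Defaults matching the GenerationConfig model's Field defaults
DEFAULTS = {"max_output_tokens": 1024, "min_output_tokens": 1, "temperature": 0.7, "seed": 42}

def load_generation_config_dict(file_path):
    """Read the JSON file and fill in defaults for any missing keys."""
    with open(file_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    return {**DEFAULTS, **config}

# Round-trip a config like generation_config.json through a temp file
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"max_output_tokens": 10000, "temperature": 0.7}, f)
    path = f.name

config = load_generation_config_dict(path)
print(config["max_output_tokens"], config["seed"])  # 10000 42
```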
