In this tutorial, we demonstrate how a semi-centralized Anemoi-style multi-agent system works by letting two peer agents negotiate directly, without a manager or supervisor. We show how a Drafter and a Critic iteratively refine an output through peer-to-peer feedback, reducing coordination overhead while preserving quality. We implement this pattern end-to-end in Colab using LangGraph, focusing on clarity, control flow, and practical execution rather than abstract orchestration theory. Check out the FULL CODES here.
!pip -q install -U langgraph langchain-openai langchain-core

import os
import json
from getpass import getpass
from typing import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("Enter OPENAI_API_KEY (hidden): ")

MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")
llm = ChatOpenAI(model=MODEL, temperature=0.2)

We set up the Colab environment by installing the required LangGraph and LangChain packages and securely collecting the OpenAI API key as a hidden input. We then initialize the language model that will be shared by all agents, keeping the configuration minimal and reproducible.
class AnemoiState(TypedDict):
    task: str
    max_rounds: int
    round: int
    draft: str
    critique: str
    agreed: bool
    final: str
    trace: bool

We define a typed state that acts as the shared communication surface between agents during negotiation. We explicitly track the task, draft, critique, agreement flag, and iteration count to keep the flow transparent and debuggable. This state removes the need for a central manager or for implicit memory.
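Each node treats this state as read-only and returns a merged copy with only its own fields changed. A minimal standalone sketch of that `{**state, ...}` update pattern (plain dicts, no LangGraph required):

```python
from typing import TypedDict

class AnemoiState(TypedDict):
    task: str
    max_rounds: int
    round: int
    draft: str
    critique: str
    agreed: bool
    final: str
    trace: bool

# Start from an empty negotiation over a task.
state: AnemoiState = {
    "task": "Summarize the Anemoi pattern.",
    "max_rounds": 3, "round": 0, "draft": "", "critique": "",
    "agreed": False, "final": "", "trace": False,
}

# A node never mutates `state`; it returns a fresh merged copy.
updated = {**state, "round": state["round"] + 1, "draft": "First draft."}

print(updated["round"])  # -> 1
print(state["round"])    # -> 0 (original untouched)
```

Because updates are copies rather than in-place mutations, any intermediate state can be logged or replayed, which is what makes the negotiation loop debuggable.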
DRAFTER_SYSTEM = """You are Agent A (Drafter) in a peer-to-peer loop.
You write a high-quality solution to the user's task.
If you receive critique, you revise decisively and incorporate it.
Return only the improved draft text."""

def drafter_node(state: AnemoiState) -> AnemoiState:
    task = state["task"]
    critique = state.get("critique", "").strip()
    r = state.get("round", 0) + 1
    if critique:
        user_msg = f"""TASK:
{task}
CRITIQUE:
{critique}
Revise the draft."""
    else:
        user_msg = f"""TASK:
{task}
Write the first draft."""
    draft = llm.invoke(
        [
            {"role": "system", "content": DRAFTER_SYSTEM},
            {"role": "user", "content": user_msg},
        ]
    ).content.strip()
    if state.get("trace", False):
        print(f"\n--- Drafter Round {r} ---\n{draft}\n")
    return {**state, "round": r, "draft": draft, "agreed": False}

We implement the Drafter agent, which produces the initial response and revises it whenever peer feedback is available. We keep the Drafter focused purely on improving the user-facing draft, without awareness of control logic or termination conditions. This mirrors the Anemoi idea of agents optimizing locally while observing peer signals.
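The Drafter's only branching is in how it builds its user message: if a critique is present it asks for a revision, otherwise for a first draft. A standalone sketch of that branch (hypothetical helper `build_drafter_prompt`, not part of the tutorial code):

```python
def build_drafter_prompt(task: str, critique: str = "") -> str:
    """Mirror the drafter's branch: revise if critique exists, else first draft."""
    critique = critique.strip()
    if critique:
        return f"TASK:\n{task}\nCRITIQUE:\n{critique}\nRevise the draft."
    return f"TASK:\n{task}\nWrite the first draft."

first = build_drafter_prompt("Explain Anemoi.")
revision = build_drafter_prompt("Explain Anemoi.", "Add an example.")
print("Write the first draft." in first)  # -> True
print("Revise the draft." in revision)    # -> True
```

Keeping this branch in the prompt, rather than in graph edges, is what lets a single Drafter node serve both the first round and every revision round.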
CRITIC_SYSTEM = """You are Agent B (Critic).
Return strict JSON:
{"agree": true/false, "critique": "..."}"""

def critic_node(state: AnemoiState) -> AnemoiState:
    task = state["task"]
    draft = state.get("draft", "")
    raw = llm.invoke(
        [
            {"role": "system", "content": CRITIC_SYSTEM},
            {
                "role": "user",
                "content": f"TASK:\n{task}\n\nDRAFT:\n{draft}",
            },
        ]
    ).content.strip()
    cleaned = raw.strip("```").replace("json", "").strip()
    try:
        data = json.loads(cleaned)
        agree = bool(data.get("agree", False))
        critique = str(data.get("critique", "")).strip()
    except Exception:
        agree = False
        critique = raw
    if state.get("trace", False):
        print(f"--- Critic Decision ---\nAGREE: {agree}\n{critique}\n")
    final = draft if agree else state.get("final", "")
    return {**state, "agreed": agree, "critique": critique, "final": final}

We implement the Critic agent, which evaluates the draft and decides whether it is ready to ship or needs revision. We enforce a strict agree-or-revise decision to avoid vague feedback and ensure fast convergence. This peer evaluation step enables quality control without introducing a supervisory agent.
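The Critic's cleanup step (stripping backticks and the `json` fence label) is deliberately minimal and can break on unusual fencing. A slightly more defensive alternative (hypothetical helper `parse_critic_json`, shown as a sketch rather than as the tutorial's code) extracts the first `{...}` object instead:

```python
import json
import re

def parse_critic_json(raw: str) -> tuple:
    """Pull the first {...} object out of the reply; fall back to disagree."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            data = json.loads(match.group(0))
            return bool(data.get("agree", False)), str(data.get("critique", "")).strip()
        except json.JSONDecodeError:
            pass
    # Unparseable replies are treated as a rejection, echoing the raw text.
    return False, raw

agree, critique = parse_critic_json('```json\n{"agree": true, "critique": "ship it"}\n```')
print(agree, critique)  # -> True ship it
```

Treating a malformed reply as "disagree" is the safe default here: the worst case is one extra revision round rather than shipping an unreviewed draft.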
def continue_or_end(state: AnemoiState) -> str:
    if state.get("agreed", False):
        return "end"
    if state.get("round", 0) >= state.get("max_rounds", 3):
        return "force_ship"
    return "loop"

def force_ship_node(state: AnemoiState) -> AnemoiState:
    return {**state, "final": state.get("final") or state.get("draft", "")}

graph = StateGraph(AnemoiState)
graph.add_node("drafter", drafter_node)
graph.add_node("critic", critic_node)
graph.add_node("force_ship", force_ship_node)
graph.set_entry_point("drafter")
graph.add_edge("drafter", "critic")
graph.add_conditional_edges(
    "critic",
    continue_or_end,
    {"loop": "drafter", "force_ship": "force_ship", "end": END},
)
graph.add_edge("force_ship", END)
anemoi_critic_loop = graph.compile()

demo_task = """Explain the Anemoi semi-centralized agent pattern and why peer-to-peer critic loops reduce bottlenecks."""

result = anemoi_critic_loop.invoke(
    {
        "task": demo_task,
        "max_rounds": 3,
        "round": 0,
        "draft": "",
        "critique": "",
        "agreed": False,
        "final": "",
        "trace": False,
    }
)
print("\n====================")
print("✅ FINAL OUTPUT")
print("====================\n")
print(result["final"])

We assemble the LangGraph workflow that routes control between the Drafter and the Critic until agreement is reached or the maximum round limit is hit. We rely on simple conditional routing rather than centralized planning, preserving the system's semi-centralized nature. Finally, we execute the graph and return the best available output to the user.
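The termination rule is a pure function of the state, so its three outcomes can be exercised in isolation. A self-contained sketch of that routing logic, driven by mock states (no LLM calls needed):

```python
def continue_or_end(state: dict) -> str:
    """Route the loop: ship on agreement, force-ship at the round cap, else iterate."""
    if state.get("agreed", False):
        return "end"
    if state.get("round", 0) >= state.get("max_rounds", 3):
        return "force_ship"
    return "loop"

# Critic agreed: the graph terminates normally.
print(continue_or_end({"agreed": True, "round": 1, "max_rounds": 3}))   # -> end
# Critic disagreed with rounds remaining: loop back to the Drafter.
print(continue_or_end({"agreed": False, "round": 1, "max_rounds": 3}))  # -> loop
# Round cap reached without agreement: ship the latest draft anyway.
print(continue_or_end({"agreed": False, "round": 3, "max_rounds": 3}))  # -> force_ship
```

The `force_ship` branch is the safety valve that guarantees termination: even if the peers never converge, the user still receives the most recent draft after `max_rounds` iterations.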
In conclusion, we demonstrated that Anemoi-style peer negotiation is a practical alternative to manager-worker architectures, offering lower latency, reduced context bloat, and simpler agent coordination. By allowing agents to observe and correct each other directly, we achieved convergence with fewer tokens and less orchestration complexity. This tutorial provides a reusable blueprint for building scalable, semi-centralized agent systems, and lays the foundation for extending the pattern to multi-peer meshes, red-team loops, or protocol-based agent interoperability.