In this tutorial, we build a sophisticated, end-to-end multi-agent research workflow using the CAMEL framework. We design a coordinated society of agents, Planner, Researcher, Writer, Critic, and Finalizer, that collaboratively transform a high-level topic into a polished, evidence-grounded research brief. We securely integrate the OpenAI API, orchestrate agent interactions programmatically, and add lightweight persistent memory to retain information across runs. By structuring the system around clear roles, JSON-based contracts, and iterative refinement, we demonstrate how CAMEL can be used to assemble reliable, controllable, and scalable agentic pipelines.
!pip -q install "camel-ai[all]" "python-dotenv" "rich"
import os
import json
import time
from typing import Dict, Any
from rich import print as rprint
def load_openai_key() -> str:
    key = None
    try:
        from google.colab import userdata
        key = userdata.get("OPENAI_API_KEY")
    except Exception:
        key = None
    if not key:
        import getpass
        key = getpass.getpass("Enter OPENAI_API_KEY (hidden): ").strip()
    if not key:
        raise ValueError("OPENAI_API_KEY is required.")
    return key

os.environ["OPENAI_API_KEY"] = load_openai_key()

We set up the execution environment and securely load the OpenAI API key using Colab secrets or a hidden prompt. We ensure the runtime is ready by installing dependencies and configuring authentication so the workflow can run safely without exposing credentials.
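For local runs outside Colab, the installed python-dotenv package offers an alternative to the secrets panel. A minimal sketch, assuming a .env file containing a line like OPENAI_API_KEY=... sits in the working directory:

# Local (non-Colab) alternative: load the key from a .env file.
# Assumes a .env file with an OPENAI_API_KEY entry (illustrative setup).
from dotenv import load_dotenv

load_dotenv()  # merges .env entries into os.environ
if not os.environ.get("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY is required.")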
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit

MODEL_CFG = {"temperature": 0.2}
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O,
    model_config_dict=MODEL_CFG,
)
We initialize the CAMEL model configuration and create a shared language model instance using the ModelFactory abstraction. We standardize model behavior across all agents to ensure consistent, reproducible reasoning throughout the multi-agent pipeline.
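Because every agent shares this single instance, swapping the backbone is a one-line change. A sketch of a cheaper variant, assuming the installed CAMEL release exposes ModelType.GPT_4O_MINI:

# Hypothetical cheaper backbone for quick experiments; agents built from it
# inherit the same sampling settings.
cheap_model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,  # assumed available in this CAMEL release
    model_config_dict={"temperature": 0.2},
)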
MEM_PATH = "camel_memory.json"

def mem_load() -> Dict[str, Any]:
    if not os.path.exists(MEM_PATH):
        return {"runs": []}
    with open(MEM_PATH, "r", encoding="utf-8") as f:
        return json.load(f)

def mem_save(mem: Dict[str, Any]) -> None:
    with open(MEM_PATH, "w", encoding="utf-8") as f:
        json.dump(mem, f, ensure_ascii=False, indent=2)

def mem_add_run(topic: str, artifacts: Dict[str, str]) -> None:
    mem = mem_load()
    mem["runs"].append({"ts": int(time.time()), "topic": topic, "artifacts": artifacts})
    mem_save(mem)

def mem_last_summaries(n: int = 3) -> str:
    mem = mem_load()
    runs = mem.get("runs", [])[-n:]
    if not runs:
        return "No previous runs."
    return "\n".join([f"{i+1}. topic={r['topic']} | ts={r['ts']}" for i, r in enumerate(runs)])

We implement a lightweight persistent memory layer backed by a JSON file. We store artifacts from every run and retrieve summaries of previous executions, allowing us to introduce continuity and historical context across sessions.
def make_agent(role: str, goal: str, extra_rules: str = "") -> ChatAgent:
    system = (
        f"You are {role}.\n"
        f"Goal: {goal}\n"
        f"{extra_rules}\n"
        "Output must be crisp, structured, and directly usable by the next agent."
    )
    return ChatAgent(model=model, system_message=system)

planner = make_agent(
    "Planner",
    "Create a compact plan and research questions with acceptance criteria.",
    "Return JSON with keys: plan, questions, acceptance_criteria."
)
researcher = make_agent(
    "Researcher",
    "Answer questions using web search results.",
    "Return JSON with keys: findings, sources, open_questions."
)
writer = make_agent(
    "Writer",
    "Draft a structured research brief.",
    "Return Markdown only."
)
critic = make_agent(
    "Critic",
    "Identify weaknesses and suggest fixes.",
    "Return JSON with keys: issues, fixes, rewrite_instructions."
)
finalizer = make_agent(
    "Finalizer",
    "Produce the final improved brief.",
    "Return Markdown only."
)

search_tool = SearchToolkit().search_duckduckgo
researcher = ChatAgent(
    model=model,
    system_message=researcher.system_message,
    tools=[search_tool],
)

We define the core agent roles and their responsibilities within the workflow. We construct specialized agents with clear goals and output contracts, and we enhance the Researcher by attaching a web search tool for evidence-grounded responses.
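Before wiring the full pipeline, a single step call makes a useful smoke test; as elsewhere in the tutorial, ChatAgent.step accepts a plain string and returns a response whose msgs[0].content holds the reply. The topic below is illustrative:

# Smoke test: ask the Planner alone for a plan on an illustrative topic.
res = planner.step("Topic: small-scale solar adoption\nCreate the plan as JSON.")
rprint(res.msgs[0].content)  # expect JSON with plan, questions, acceptance_criteria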
def step_json(agent: ChatAgent, prompt: str) -> Dict[str, Any]:
    res = agent.step(prompt)
    txt = res.msgs[0].content.strip()
    try:
        return json.loads(txt)
    except Exception:
        return {"raw": txt}

def step_text(agent: ChatAgent, prompt: str) -> str:
    res = agent.step(prompt)
    return res.msgs[0].content

We abstract agent interaction patterns into helper functions that enforce structured JSON or free-text outputs. We simplify orchestration by handling parsing and fallback logic centrally, making the pipeline more robust to formatting variability.
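Models sometimes wrap JSON in Markdown fences even when told not to, which would push valid JSON into the raw fallback. One optional hardening is sketched below; strip_json_fences and step_json_robust are illustrative helpers, not CAMEL APIs:

import re

def strip_json_fences(txt: str) -> str:
    # Drop a surrounding ```json ... ``` fence if present; otherwise return unchanged.
    m = re.match(r"^```(?:json)?\s*(.*?)\s*```$", txt.strip(), re.DOTALL)
    return m.group(1) if m else txt

def step_json_robust(agent: ChatAgent, prompt: str) -> Dict[str, Any]:
    res = agent.step(prompt)
    txt = strip_json_fences(res.msgs[0].content.strip())
    try:
        return json.loads(txt)
    except Exception:
        return {"raw": txt}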
def run_workflow(topic: str) -> Dict[str, str]:
    rprint(mem_last_summaries(3))
    plan = step_json(
        planner,
        f"Topic: {topic}\nCreate a good plan and research questions."
    )
    research = step_json(
        researcher,
        f"Research the topic using web search.\n{json.dumps(plan)}"
    )
    draft = step_text(
        writer,
        f"Write a research brief using:\n{json.dumps(research)}"
    )
    critique = step_json(
        critic,
        f"Critique the draft:\n{draft}"
    )
    final = step_text(
        finalizer,
        f"Rewrite using critique:\n{json.dumps(critique)}\nDraft:\n{draft}"
    )
    artifacts = {
        "plan_json": json.dumps(plan, indent=2),
        "research_json": json.dumps(research, indent=2),
        "draft_md": draft,
        "critique_json": json.dumps(critique, indent=2),
        "final_md": final,
    }
    mem_add_run(topic, artifacts)
    return artifacts

TOPIC = "Agentic multi-agent research workflow with quality control"
artifacts = run_workflow(TOPIC)
print(artifacts["final_md"])

We orchestrate the entire multi-agent workflow from planning to finalization. We sequentially pass artifacts between agents, apply critique-driven refinement, persist results to memory, and produce a finalized research brief ready for downstream use.
In conclusion, we implemented a practical CAMEL-based multi-agent system that mirrors real-world research and analysis workflows. We showed how clearly defined agent roles, tool-augmented reasoning, and critique-driven refinement lead to higher-quality outputs while reducing hallucinations and structural weaknesses. We also established a foundation for extensibility by persisting artifacts and enabling reuse across sessions. This approach allows us to move beyond single-prompt interactions and toward robust agentic systems that can be adapted for research, analysis, reporting, and decision-support tasks at scale.

