In this tutorial, we walk through setting up an advanced AI Agent using Microsoft's Agent-Lightning framework. We run everything directly inside Google Colab, which means we can experiment with both the server and client components in one place. By defining a small QA agent, connecting it to a local Agent-Lightning server, and then training it with several system prompts, we can observe how the framework supports resource updates, task queuing, and automated evaluation.
!pip -q install agentlightning openai nest_asyncio python-dotenv > /dev/null

import os, threading, time, asyncio, nest_asyncio, random
from getpass import getpass
from agentlightning.litagent import LitAgent
from agentlightning.trainer import Trainer
from agentlightning.server import AgentLightningServer
from agentlightning.types import PromptTemplate
import openai

if not os.getenv("OPENAI_API_KEY"):
    try:
        os.environ["OPENAI_API_KEY"] = getpass("🔑 Enter OPENAI_API_KEY (leave blank if using a local/proxy base): ") or ""
    except Exception:
        pass

MODEL = os.getenv("MODEL", "gpt-4o-mini")
We begin by installing the required libraries and importing all of the core modules we need for Agent-Lightning. We also set up our OpenAI API key securely and define the model we'll use throughout the tutorial.
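If you prefer to route requests through a local or OpenAI-compatible proxy rather than the hosted API (the key prompt above hints at this option), a minimal sketch is shown below; the endpoint URL and placeholder key are assumptions to adapt to your own setup.

# Optional: point the openai client at a local / OpenAI-compatible proxy (assumed URL below).
# The OpenAI Python SDK also honors the OPENAI_BASE_URL environment variable.
import os, openai

os.environ.setdefault("OPENAI_BASE_URL", "http://localhost:8000/v1")  # hypothetical local endpoint
openai.base_url = os.environ["OPENAI_BASE_URL"]
openai.api_key = os.getenv("OPENAI_API_KEY") or "sk-local-placeholder"  # many local backends ignore the key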
class QAAgent(LitAgent):
    def training_rollout(self, task, rollout_id, resources):
        """Given a task {'prompt':..., 'answer':...}, ask the LLM using the
        server-provided system prompt and return a reward in [0, 1]."""
        sys_prompt = resources["system_prompt"].template
        user = task["prompt"]; gold = task.get("answer", "").strip().lower()
        try:
            r = openai.chat.completions.create(
                model=MODEL,
                messages=[{"role": "system", "content": sys_prompt},
                          {"role": "user", "content": user}],
                temperature=0.2,
            )
            pred = r.choices[0].message.content.strip()
        except Exception as e:
            pred = f"[error]{e}"

        def score(pred, gold):
            P = pred.lower()
            base = 1.0 if gold and gold in P else 0.0
            gt = set(gold.split()); pr = set(P.split())
            inter = len(gt & pr); denom = (len(gt) + len(pr)) or 1
            overlap = 2 * inter / denom
            brevity = 0.2 if base == 1.0 and len(P.split()) <= 8 else 0.0
            return max(0.0, min(1.0, 0.7 * base + 0.25 * overlap + brevity))

        return float(score(pred, gold))
We define a simple QAAgent by extending LitAgent, where we handle each training rollout by sending the user's prompt to the LLM, collecting the response, and scoring it against the gold answer. We design the reward function to reward correctness, token overlap, and brevity, encouraging the agent to produce concise and accurate outputs.
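To make the reward arithmetic concrete, here is a small standalone copy of the scoring logic with a few illustrative predictions (the examples are ours, not part of the tutorial code):

# Standalone illustration of the reward above (same arithmetic, outside the agent class).
def score(pred, gold):
    P = pred.lower()
    base = 1.0 if gold and gold in P else 0.0
    gt = set(gold.split()); pr = set(P.split())
    overlap = 2 * len(gt & pr) / ((len(gt) + len(pr)) or 1)
    brevity = 0.2 if base == 1.0 and len(P.split()) <= 8 else 0.0
    return max(0.0, min(1.0, 0.7 * base + 0.25 * overlap + brevity))

print(score("Paris", "paris"))                            # 1.0: exact match, short answer earns the brevity bonus
print(score("The capital of France is Paris.", "paris"))  # ~0.9: correct, but the trailing period blocks token overlap
print(score("Lyon", "paris"))                             # 0.0: no substring match and no shared tokens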
TASKS = [
{"prompt":"Capital of France?","answer":"Paris"},
{"prompt":"Who wrote Pride and Prejudice?","answer":"Jane Austen"},
{"prompt":"2+2 = ?","answer":"4"},
]
PROMPTS = [
"You are a terse expert. Answer with only the final fact, no sentences.",
"You are a helpful, knowledgeable AI. Prefer concise, correct answers.",
"Answer as a rigorous evaluator; return only the canonical fact.",
"Be a friendly tutor. Give the one-word answer if obvious."
]
nest_asyncio.apply()
HOST, PORT = "127.0.0.1", 9997
We define a tiny benchmark of three QA tasks and curate several candidate system prompts to optimize over. We then apply nest_asyncio and set our local server host and port, allowing us to run the Agent-Lightning server and clients inside a single Colab runtime.
async def run_server_and_search():
    server = AgentLightningServer(host=HOST, port=PORT)
    await server.start()
    print("✅ Server started")
    await asyncio.sleep(1.5)

    results = []
    for sp in PROMPTS:
        await server.update_resources({"system_prompt": PromptTemplate(template=sp, engine="f-string")})
        scores = []
        for t in TASKS:
            tid = await server.queue_task(sample=t, mode="train")
            rollout = await server.poll_completed_rollout(tid, timeout=40)  # waits for a worker
            if rollout is None:
                print("⏳ Timeout waiting for rollout; continuing...")
                continue
            scores.append(float(getattr(rollout, "final_reward", 0.0)))
        avg = sum(scores) / len(scores) if scores else 0.0
        print(f"🔎 Prompt avg: {avg:.3f} | {sp}")
        results.append((sp, avg))

    best = max(results, key=lambda x: x[1]) if results else ("", 0)
    print("\n🏁 BEST PROMPT:", best[0], " | score:", f"{best[1]:.3f}")
    await server.stop()
We start the Agent-Lightning server and iterate through our candidate system prompts, updating the shared system_prompt resource before queuing each training task. We then poll for completed rollouts, compute the average reward per prompt, report the best-performing prompt, and gracefully stop the server.
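As a possible follow-up, once the search reports the winning prompt you could reuse it for plain inference. The helper below is only a sketch under that assumption; answer_with is a hypothetical function, not part of the Agent-Lightning API.

# Hypothetical follow-up: reuse the best system prompt found by the search for ordinary inference.
def answer_with(system_prompt, question):
    r = openai.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": question}],
        temperature=0.2,
    )
    return r.choices[0].message.content.strip()

# best_prompt = ...  # e.g., have run_server_and_search() return best[0] instead of only printing it
# print(answer_with(best_prompt, "Capital of Japan?"))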
def run_client_in_thread():
    agent = QAAgent()
    trainer = Trainer(n_workers=2)
    trainer.fit(agent, backend=f"http://{HOST}:{PORT}")

client_thr = threading.Thread(target=run_client_in_thread, daemon=True)
client_thr.start()

asyncio.run(run_server_and_search())
We launch the client in a separate thread with two parallel workers, allowing it to process the tasks dispatched by the server. At the same time, we run the server loop, which evaluates the different prompts, collects rollout results, and reports the best system prompt based on average reward.
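Since the client runs in a daemon thread, an optional cleanup step (purely a convenience sketch, not in the original code) is to give the workers a moment to exit once the server loop returns:

# Optional cleanup sketch: let the worker thread drain briefly after the server loop ends.
client_thr.join(timeout=5)  # daemon thread, so this is best-effort rather than required
print("Client thread still alive:", client_thr.is_alive())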
In conclusion, we see how Agent-Lightning lets us create a flexible agent pipeline with just a few lines of code. We can start a server, run parallel client workers, evaluate different system prompts, and automatically measure performance, all within a single Colab environment. This demonstrates how the framework streamlines the process of building, testing, and optimizing AI agents in a structured way.