New to Agentic AI? This course is your complete guide to building intelligent agents using Python and FastAPI. Explore the DACA (Dapr Agentic Cloud Ascent) pattern in simple terms, and build real-world AI-powered apps step-by-step. No prior experience needed — just your curiosity and a willingness to build smart systems from scratch to scale.
Quarter 4 : Topic 1 | 🧠 Introduction to Prompt and Context Engineering
How to talk to AI like a pro using structured, meaningful prompts.
🌍 The Rise of Conversational AI
Artificial Intelligence is changing how we communicate with technology.
Tools like ChatGPT, Google Gemini, and Claude use powerful Large Language Models (LLMs) that can generate human-like responses, summarize complex ideas, and even write code.
But here’s the secret most people miss:
The quality of your AI output depends entirely on the quality of your input.
That’s where Prompt Engineering and Context Engineering come in — the modern-day superpowers that turn you from an AI user into an AI designer.

💡 What Is Prompt Engineering?
Prompt Engineering is the skill of crafting instructions that tell an AI model exactly what you want it to do.
A prompt is your command, question, or scenario — the bridge between your intention and the AI’s action.
Let’s see it in action:
🧩 Basic Prompt: “Write about Artificial Intelligence.”
🧠 Engineered Prompt: “You are a tech journalist. Write a 150-word LinkedIn post about how AI helps students learn faster, in a friendly and motivational tone.”
The difference is clear — the first prompt is open-ended and vague, while the second gives the AI structure, role, tone, and purpose.
That extra detail transforms the model’s response from average to outstanding.
🎯 What Is Context Engineering?
While Prompt Engineering focuses on what to ask, Context Engineering focuses on how much background information to give the AI.
LLMs don’t inherently know your situation. They only understand what’s inside your prompt.
So, adding context helps them produce more relevant, accurate, and human-like responses.
✏️ Example:
“You are a career coach. Using the resume below, write a short LinkedIn summary tailored for software-engineering jobs.”
Here, the resume is the context.
By giving background information, you help the AI align its output with your goal.
💬 Prompt Engineering tells the AI what to do.
🧩 Context Engineering tells it why and for whom to do it.

🧩 The Four Elements of a Great Prompt
Every effective prompt usually includes four key elements.
Think of them as the building blocks of great AI communication:
1. 🧭 Instruction
What do you want the model to do?
Example: “Summarize this paragraph in three bullet points.”
2. 🧱 Context
What background will help it give a better answer?
Example: “You are a teacher summarizing this for grade-5 students.”
3. 🧩 Input Data
What text, question, or dataset should the model use?
Example: (Paste your paragraph or question here.)
4. 🎨 Output Indicator
What should the final answer look like?
Example: “Give the response in a table with simple explanations.”
When you combine all four, your prompt becomes clear, instructive, and predictable — exactly what an LLM needs to perform well.
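As an illustration, the four elements can be assembled programmatically. A minimal sketch (the helper name and the example strings are invented for this illustration):

```python
# Assemble a prompt from the four building blocks:
# Instruction, Context, Input Data, and Output Indicator.
def build_prompt(instruction: str, context: str, input_data: str, output_indicator: str) -> str:
    return "\n\n".join([
        context,                   # who the model should be / background
        instruction,               # what to do
        f"Input:\n{input_data}",   # the material to work on
        output_indicator,          # desired shape of the answer
    ])

prompt = build_prompt(
    instruction="Summarize this paragraph in three bullet points.",
    context="You are a teacher summarizing this for grade-5 students.",
    input_data="Photosynthesis is the process by which plants make food...",
    output_indicator="Give the response as a bulleted list with simple words.",
)
print(prompt)
```

Keeping the four pieces separate like this makes it easy to swap in a different audience or output format without rewriting the whole prompt.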

🔍 Putting It All Together – A Quick Comparison
| Type | Example | Result |
|---|---|---|
| ❌ Basic Prompt | “Write about AI in education.” | Generic, unfocused answer |
| ✅ Engineered Prompt | “You are an education blogger. Write a 150-word LinkedIn post explaining how Artificial Intelligence is transforming classroom learning. Use a positive tone and include two emojis.” | Polished, audience-specific, human-like output |
Small adjustments in wording create massive improvements in AI performance.
🤖 Understanding Large Language Models (LLMs)
To see why these techniques matter, let’s peek behind the curtain.
A Large Language Model is a deep-learning system trained on billions of words from books, articles, and websites.
It doesn’t “think” like humans — it predicts the most likely next word in a sentence based on the context of your input.

In simple terms: An LLM doesn’t know things — it recognizes patterns.
When you write a well-structured prompt, you guide the model toward the right pattern, resulting in more intelligent, context-aware answers.
That’s why:
🧠 The smarter your prompt, the smarter your AI.

🧠 Final Thoughts
Prompt and Context Engineering are more than buzzwords — they are the foundation of how we shape intelligent systems.
By mastering these two skills, you don’t just use AI — you direct it.
Remember the formula:
Prompt = Instruction + Context + Input Data + Output Indicator
Combine this structure with an understanding of LLMs, and you’ll unlock the full potential of generative AI.
Quarter 4 : Topic 2 | 🧠 Fundamental and Advanced Prompting Techniques (with ReAct: Reasoning + Acting)
🎯 Introduction
Prompt Engineering isn’t just about asking questions — it’s about communicating intelligently with AI systems like ChatGPT, Claude, or Gemini.
If you’ve ever wondered why sometimes AI gives a perfect answer and other times something completely random — the reason lies in how you prompt.
In this article, we’ll explore both fundamental and advanced prompting techniques — with practical, real-life examples — and end with the powerful ReAct (Reasoning + Acting) method that makes AI behave like a true assistant.

🔹 Fundamental Prompting Techniques
Let’s begin with the basics — techniques that help you shape the model’s response format, tone, and clarity.
🟩 1. Zero-Shot Prompting — Ask Directly
The simplest form of prompting.
You ask a question without providing any examples or prior context.
Example (Daily Life):
“What are three easy breakfast ideas I can make in under 10 minutes?”
AI will use its general knowledge to give you quick, simple options.

🧭 When to Use:
- For general knowledge or well-known facts
- Quick one-off requests
- Clear and simple instructions
💡 Tip: Keep your question clear and direct. AI performs best when the task is specific.
🟨 2. One-Shot Prompting — Guide with One Example
When you want the AI to follow a specific format or tone, give it one example.
It learns the pattern and replicates it.

Example (Daily Life):
“Summarize text like this:
Example: ‘The movie was great but slow.’ → Summary: Good visuals, weak pacing.
Text: ‘The food was amazing but service was slow.’ → Summary:”
AI identifies the format — a short summary highlighting both good and bad points — and applies it consistently.
🧭 When to Use:
- When you want a structured or stylized answer
- For content formatting or paraphrasing
🟦 3. Few-Shot Prompting — Train Through Examples
Few-shot prompting is like showing the AI a mini dataset — 3 to 5 examples to establish a pattern.
It’s a powerful way to “teach” the model what kind of outputs you expect.
Example (Workplace):
“Analyze these messages and mark them as polite or rude.”
Message: “Please share the file when you have time.” → Polite
Message: “Send me the report now.” → Rude
Message: “Could you send the document?” → Polite
Message: “Why didn’t you send it yet?” →

The AI will now recognize the tone of messages based on your examples.
🧭 When to Use:
- Classification and analysis tasks
- Pattern-based text generation
- When you need consistent formatting
💡 Best Practice: Use clear, diverse, and high-quality examples.
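The few-shot pattern above can be expressed as a small helper that turns labeled examples into a prompt. A sketch (the function name is invented; the example data comes from the text above):

```python
# Build a few-shot classification prompt from labeled examples.
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    lines = [task]
    for text, label in examples:
        lines.append(f'Message: "{text}" -> {label}')
    lines.append(f'Message: "{query}" ->')  # the model completes this line
    return "\n".join(lines)

prompt = few_shot_prompt(
    task="Analyze these messages and mark them as polite or rude.",
    examples=[
        ("Please share the file when you have time.", "Polite"),
        ("Send me the report now.", "Rude"),
        ("Could you send the document?", "Polite"),
    ],
    query="Why didn't you send it yet?",
)
print(prompt)
```

The prompt deliberately ends mid-pattern, so the model's most likely continuation is the label you want.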
⚙️ Behavior Control Prompts
Once you master the basics, you can control the AI’s personality, tone, and style using System, Role, and Contextual prompts.
🧭 4. System Prompting — Set the Behavior
This defines how the AI should behave globally.
It’s like giving it permanent instructions for the conversation.
Example:
“You are a home kitchen assistant. Always suggest recipes using ingredients I already have.”
Now, if you ask:
“I have eggs, bread, and cheese. What can I cook?”
AI will act like a helpful chef and provide meal ideas accordingly.
🧩 Use Case: To control tone, focus, and consistency of AI behavior.
🎭 5. Role Prompting — Assign a Persona
Role prompting turns the AI into a character or expert.
Example:
“Act as a friendly fitness coach. Create a 7-day beginner workout plan using only home exercises.”
Now the AI takes on the role of a personal trainer — using encouraging, human-like language.
🧩 Use Case: Teaching, content creation, professional consulting.
📚 6. Contextual Prompting — Add Background Information
Give the model context about your target audience or scenario.
This helps it produce content that “feels right” for your readers or use case.
Example:
“Context: You are writing for university students.
Write a short paragraph about managing time during exams.”
AI now adapts tone and vocabulary for that specific audience.
🧩 Use Case: Marketing copy, educational material, audience-targeted content.
🔗 Advanced Prompting Strategies
Now that you can control output and tone, let’s explore techniques that help the AI think logically and reason step-by-step.
🧩 7. Chain-of-Thought (CoT) Prompting — Think Step by Step
Chain-of-thought prompting instructs the AI to reason systematically instead of guessing.
Example:
“If I leave home at 8:00 a.m., my office is 30 minutes away, and I stop for 10 minutes for coffee — what time will I reach? Let’s think step by step.”
AI responds:
- Total time = 30 (travel) + 10 (coffee stop) = 40 minutes
- 8:00 + 40 = 8:40 a.m.
🧭 When to Use:
- Math, reasoning, logic problems
- Decision-making scenarios
- Process explanations
💡 Tip: Use phrases like “Let’s think step by step” or “Explain your reasoning before answering.”
🔁 8. Self-Consistency — Cross-Verify the Output
AI can sometimes produce different results for the same question.
Self-Consistency ensures reliability by comparing multiple reasoning paths.
Example:
“If a t-shirt costs $100 and there’s a 25% discount, what’s the final price?”
Ask this three times.
The model may take a different reasoning path each run, but it should consistently arrive at $75 ($100 × 0.75).
🧭 When to Use:
- To verify AI answers
- In numerical or logical tasks
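In practice, self-consistency is implemented by sampling several answers and keeping the most common one. A minimal sketch (the hard-coded strings stand in for repeated model calls, which this example does not make):

```python
from collections import Counter

# Pretend we asked the model the same question three times.
# In a real system these would come from separately sampled completions.
sampled_answers = ["$75", "$75", "$75.00"]

def most_consistent(answers: list[str]) -> str:
    # Normalize lightly, then take a majority vote.
    normalized = [a.replace(".00", "") for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

print(most_consistent(sampled_answers))  # -> $75
```

The majority vote filters out the occasional wrong reasoning path, which is exactly why the technique improves reliability on numerical tasks.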
🔄 9. Step-Back Prompting — Start Broad, Then Go Deep
In this technique, you first ask a general question, then use that answer to handle your specific case.
It helps the model ground its response in general logic.
Example:
Step 1: “What are good habits for healthy sleep?”
Step 2: “Now, using those habits, create a sleep plan for a university student.”
AI first lists habits, then applies them practically — giving more structured, realistic output.

⚡ 10. ReAct Prompting — Reason + Act
The ReAct framework combines reasoning (thinking) and acting (doing).
It makes the AI perform multi-step logical actions — just like a human assistant.
Example 1 — Daily Task Planner:
“You are my personal assistant. Based on this list, make a daily schedule.”
Tasks: “Drop kids at school, attend meeting, cook lunch, evening walk.”
AI will reason about time gaps and act by creating a timeline:
- 7:30 – Drop kids
- 9:00 – Meeting
- 1:00 – Lunch
- 6:00 – Evening walk

Example 2 — Travel Decision:
“I have Rs. 50,000 for a 3-day trip. Should I go to Murree or Karachi? Think step by step and then recommend one.”
AI thinks logically (reasoning about budget, travel time, cost) and acts by recommending the best option.
🧩 When to Use:
- Research, planning, and analysis tasks
- When AI needs to “think before doing”
- Workflows that need reasoning + action
💡 Best Practice: Clearly separate reasoning and final answer sections for clarity.
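Under the hood, ReAct alternates "Thought" and "Action" steps, feeding each tool result back as an "Observation" until a final answer appears. A toy sketch of that loop (the scripted model, the tool, and the travel costs are all stubs invented for illustration, not a real LLM or SDK):

```python
# Toy ReAct loop: the "model" and "tool" below are hard-coded stubs.
def fake_model(history: list[str]) -> str:
    # A real LLM would generate these lines; here they are scripted.
    if not any(line.startswith("Observation:") for line in history):
        return "Thought: I need the travel cost.\nAction: lookup_cost[Murree]"
    return "Thought: Murree fits the Rs. 50,000 budget.\nFinal Answer: Go to Murree."

def lookup_cost(place: str) -> str:
    # Stub tool with made-up data.
    return {"Murree": "Rs. 35,000 for 3 days"}.get(place, "unknown")

def react(question: str, max_steps: int = 5) -> str:
    history = [question]
    for _ in range(max_steps):
        step = fake_model(history)
        history.append(step)
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        # Parse "Action: tool[arg]" and run the named tool.
        action = step.split("Action:")[1].strip()
        tool_name, arg = action.split("[", 1)
        if tool_name == "lookup_cost":
            history.append(f"Observation: {lookup_cost(arg.rstrip(']'))}")
    return "No answer within step limit."

print(react("Should I go to Murree on a Rs. 50,000 budget?"))
```

The same loop structure carries over to real agent frameworks: only the scripted model becomes an actual LLM call and the stub tool becomes a real API.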
🧭 Summary — Choosing the Right Technique
| Goal | Best Technique |
|---|---|
| Quick answer | Zero-Shot |
| Consistent format | One/Few-Shot |
| Control tone | System or Role Prompt |
| Target audience | Contextual |
| Step-by-step reasoning | Chain-of-Thought |
| Reliable result | Self-Consistency |
| Broader logic to specific case | Step-Back |
| Logical + Action-oriented output | ReAct |
🧠 Key Takeaway
Prompting isn’t magic — it’s a conversation skill.
The more structure and clarity you give, the more intelligent your AI becomes.
ReAct represents the future — where AI not only thinks but also acts logically, just like a real assistant.
Topic 1: Agentic AI Hello World + FastAPI Setup | GET & POST API
✨ Introduction
Today we will learn how web APIs are built with FastAPI, and what role the GET and POST methods play in them.
This tutorial is aimed at absolute beginners who are just starting their journey in programming or web development.
📌 What Is FastAPI?
FastAPI is a modern, fast Python framework used for building web APIs. It is asynchronous, which is why it performs so quickly.
📁 Folder Structure
Your project folder will look something like this:
fastapi-project/
└── main.py
▶️ Running the Server
Go to the terminal and run this command: uvicorn main:app --reload
Then open this in the browser: http://127.0.0.1:8000
✅ What Does the GET Method Do?
- Whenever you type a URL into your browser, you are sending a GET request.
- In our example:

GET http://127.0.0.1:8000/

Response:

```json
{
  "message": "Hello World."
}
```

🧾 For the full main.py code, see the video.
Topic 2: FastAPI + Pydantic Step-by-Step Tutorial
In today's video we will look at what Pydantic is, how it is used with FastAPI, and how to define nested models, validation, and response schemas using Pydantic.
This tutorial is especially for those working on Agentic AI or FastAPI projects, such as the DACA chatbot. Let's get started.
🛠️ Step 1: Introduction to Pydantic
First of all, Pydantic is a Python library used for data validation and type safety.
Its job is to check whatever data arrives at our API — is it the right type, are the required fields present — and to throw an error if anything is wrong.
🔍 Key Features:
✔️ Type-Safe Validation (str, int, list, etc.)
✔️ Automatic Type Conversion
✔️ Detailed Error Handling
✔️ Support for Nested Models
✔️ JSON Serialization
✔️ Custom Validators
🧪 Step 2: Setup and Example 1
Now let's understand this with a code example. First, create a project:

```shell
uv init fastdca_p1
cd fastdca_p1
uv venv
source .venv/bin/activate
uv add "fastapi[standard]"
```
Then create a file: pydantic_example_1.py

```python
from pydantic import BaseModel, ValidationError

# Define a simple model
class User(BaseModel):
    id: int
    name: str
    email: str
    age: int | None = None  # Optional field

# ✅ Valid data
user_data = {"id": 1, "name": "Alice", "email": "alice@example.com", "age": 25}
user = User(**user_data)
print(user)
print(user.model_dump())

# ❌ Invalid data
try:
    invalid_user = User(id="not_an_int", name="Bob", email="bob@example.com")
except ValidationError as e:
    print(e)
```
When you run `uv run python pydantic_example_1.py`, you will see the output for the valid data, and a validation error for the invalid data.
🧱 Step 3: Nested Models Example
In Pydantic we can also define complex or nested models. Let's try this:

```python
from pydantic import BaseModel, EmailStr

# Address is a nested model
class Address(BaseModel):
    street: str
    city: str
    zip_code: str

# User contains a list of Address objects
class UserWithAddress(BaseModel):
    id: int
    name: str
    email: EmailStr
    addresses: list[Address]

# ✅ Valid nested data
user_data = {
    "id": 2,
    "name": "Bob",
    "email": "bob@example.com",
    "addresses": [
        {"street": "123 Main St", "city": "New York", "zip_code": "10001"},
        {"street": "456 Oak Ave", "city": "Los Angeles", "zip_code": "90001"},
    ],
}
user = UserWithAddress.model_validate(user_data)
print(user.model_dump())
```
As you can see, we are handling a list of objects with ease — this is very useful in AI workflows.
🛑 Step 4: Custom Validator
Now let's add a custom validation rule that a user's name must be at least 2 characters long:

```python
from pydantic import BaseModel, EmailStr, field_validator, ValidationError

class Address(BaseModel):
    street: str
    city: str
    zip_code: str

class UserWithAddress(BaseModel):
    id: int
    name: str
    email: EmailStr
    addresses: list[Address]

    # Pydantic v2 uses field_validator (the older `validator` is deprecated)
    @field_validator("name")
    @classmethod
    def name_must_be_at_least_two_chars(cls, v: str) -> str:
        if len(v) < 2:
            raise ValueError("Name must be at least 2 characters long")
        return v

# ❌ Invalid name test
try:
    invalid_user = UserWithAddress(
        id=3,
        name="A",
        email="charlie@example.com",
        addresses=[{"street": "789 Pine Rd", "city": "Chicago", "zip_code": "60601"}],
    )
except ValidationError as e:
    print(e)
```
The output will show your custom error message. This kind of validation is essential for AI agents where user input must be strictly checked.
🤖 Step 5: Final FastAPI App — Agentic Chatbot
Now let's build the full FastAPI app with nested metadata and message models:
📁 File: main.py
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field
from datetime import datetime, UTC
from uuid import uuid4

app = FastAPI(
    title="DACA Chatbot API",
    description="A FastAPI-based API for a chatbot in the DACA tutorial series",
    version="0.1.0",
)

# Metadata model with timestamp and UUID
class Metadata(BaseModel):
    timestamp: datetime = Field(default_factory=lambda: datetime.now(tz=UTC))
    session_id: str = Field(default_factory=lambda: str(uuid4()))

# Request message model
class Message(BaseModel):
    user_id: str
    text: str
    metadata: Metadata
    tags: list[str] | None = None

# Response model
class Response(BaseModel):
    user_id: str
    reply: str
    metadata: Metadata

# Root endpoint
@app.get("/")
async def root():
    return {"message": "Welcome to the DACA Chatbot API! Access /docs for the API documentation."}

# GET endpoint
@app.get("/users/{user_id}")
async def get_user(user_id: str, role: str | None = None):
    return {"user_id": user_id, "role": role if role else "guest"}

# POST endpoint
@app.post("/chat/", response_model=Response)
async def chat(message: Message):
    if not message.text.strip():
        raise HTTPException(status_code=400, detail="Message text cannot be empty")
    reply_text = f"Hello, {message.user_id}! You said: '{message.text}'. How can I assist you today?"
    return Response(user_id=message.user_id, reply=reply_text, metadata=Metadata())
```
When you run `fastapi dev main.py`, you can view your API in the Swagger UI at /docs — complete with proper validation. So, students, you have now seen the complete flow, from basic Pydantic all the way to nested models inside FastAPI. This is the backbone of agentic AI apps such as DACA.
Topic 3: Python Uv + FastAPI: Build a Task Tracker with Python Variables (Step-by-Step Guide)
In Agentic AI, agents track their state through variables. If you are a beginner learning FastAPI, today we will build a simple project that tracks task status with Python variables and completes tasks through FastAPI endpoints.
Step 1: Installing FastAPI
First, install FastAPI and Uvicorn on your system:

```shell
pip install fastapi uvicorn
```

- FastAPI: the Python framework for building APIs
- Uvicorn: the server that runs the app
Step 2: Write the Python Code (Task Tracker API)
Now create a Python file, e.g. agentic_task_tracker.py, and add the code below:
```python
from fastapi import FastAPI

app = FastAPI()

# Variables that track the agent's tasks
task1_completed = False
task2_completed = False
task3_completed = False

@app.get("/")
def read_root():
    return {"message": "Welcome to Agentic AI Task Tracker!"}

@app.get("/status")
def get_status():
    return {
        "task1_completed": task1_completed,
        "task2_completed": task2_completed,
        "task3_completed": task3_completed,
    }

@app.post("/complete_task/{task_id}")
def complete_task(task_id: int):
    global task1_completed, task2_completed, task3_completed
    if task_id == 1:
        task1_completed = True
        return {"message": "Task 1 completed."}
    elif task_id == 2:
        if not task1_completed:
            return {"error": "Task 1 must be completed first."}
        task2_completed = True
        return {"message": "Task 2 completed."}
    elif task_id == 3:
        if not task2_completed:
            return {"error": "Task 2 must be completed first."}
        task3_completed = True
        return {"message": "Task 3 completed."}
    else:
        return {"error": "Invalid task ID."}
```
Explanation:
- We created three variables that track each task's completion.
- The /status endpoint reports the current completion status.
- A POST request to /complete_task/{task_id} marks a specific task as complete.
- Tasks must be completed in sequence, otherwise an error is returned.
Step 3: Running the Server
Run this command in the terminal:

```shell
uvicorn agentic_task_tracker:app --reload
```

- agentic_task_tracker is the name of the Python file.
- app is the name of the FastAPI instance.
- --reload makes the server restart automatically whenever the code changes.
Step 4: Testing the API Endpoints
1. Welcome message
Send a GET request from the browser or Postman: http://127.0.0.1:8000
2. Check the task status
GET request: http://127.0.0.1:8000/status
Conclusion
Today we learned how to:
- Track Agentic AI tasks with Python variables
- Build GET and POST endpoints in FastAPI
- Complete tasks in sequence, with validation
- Handle a common error and its solution
You can extend this project with more tasks or more complex logic!

Project 1: Translator Agent using Gemini 1.5 Flash and Chainlit – Build Your Own AI Translator
In this exciting project, we build a real-time Translator Agent using Gemini 1.5 Flash, Chainlit, and Python. This AI-powered chatbot can translate any text into multiple languages with the help of Google’s Gemini model, offering speed and accuracy like never before.
🚀 Why This Project?
- ✅ Uses the powerful Gemini 1.5 Flash model via LiteLLM
- ✅ Built with Chainlit for a sleek and interactive chat interface
- ✅ Supports chat history and saves it as JSON
- ✅ Simple, fast, and customizable for beginners and pros
🛠️ Key Features:
- 🌍 Multi-language Translation
- 💬 Real-time Chat UI with Chainlit
- 🔐 Secure API key handling with
.env - 💾 Automatic chat history saving
- ⚡ Fast responses powered by Gemini Flash model
👨‍💻 Tech Stack:
- Python
- Chainlit
- Gemini 1.5 Flash
- LiteLLM
- dotenv
This project is perfect for anyone interested in building real-world AI tools using the latest LLMs. Whether you’re a student, AI enthusiast, or developer, this is a great hands-on way to explore the power of Google Gemini in your own applications.
📂 Use Case: Ideal for learning AI integration, building NLP tools, or enhancing multilingual apps.
Project 2: Build a Gemini AI Agent Using UV
Objective
- Initialize a UV Python project (as in Task 1).
- Use Google’s Gemini API through the openai-agents client.
- Create a simple agent that responds to basic questions.
1. Initialize the UV Project
Follow the steps from Task 1 to create and activate a new UV project.
uv init my-gemini-agent
cd my-gemini-agent
2. Install Dependencies
Install the required Python packages:
uv add openai-agents
uv add python-dotenv
- openai-agents: a structured framework for building AI agents (compatible with Gemini).
- python-dotenv: loads environment variables from a .env file.
3. Configure Environment Variables
Create a .env file in the project root and add your Gemini API key:
GEMINI_API_KEY=your_api_key_here
4. Create the Agent Script
Create a new file (e.g., main.py) with the following content:
```python
base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
```
5. Run the Script
uv run main.py
6. Code Explanation
- dotenv.load_dotenv(): loads environment variables from .env.
- AsyncOpenAI: connects to the Gemini API endpoint with your key.
- OpenAIChatCompletionsModel: wraps the Gemini model (gemini-2.0-flash-exp).
- Agent: defines a conversational agent with a given name and behavior.
- Runner.run_sync(): runs the agent synchronously and returns the final output.
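Since only one line of the script appears above, here is how the pieces from this explanation could fit together in main.py. This is an assumption-laden sketch (the agent name, instructions, and question are placeholders, and the imports follow the layout of the openai-agents and python-dotenv packages), not the exact script from the video:

```python
import os

def main() -> None:
    # Third-party imports (require `openai-agents` and `python-dotenv`).
    from dotenv import load_dotenv
    from openai import AsyncOpenAI
    from agents import Agent, Runner, OpenAIChatCompletionsModel

    load_dotenv()  # read GEMINI_API_KEY from .env

    # Point the OpenAI-compatible client at Gemini's endpoint.
    client = AsyncOpenAI(
        api_key=os.getenv("GEMINI_API_KEY"),
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )

    agent = Agent(
        name="Assistant",  # placeholder name
        instructions="You are a helpful assistant that answers basic questions.",
        model=OpenAIChatCompletionsModel(
            model="gemini-2.0-flash-exp",
            openai_client=client,
        ),
    )

    result = Runner.run_sync(agent, "What is the capital of France?")
    print(result.final_output)

# main()  # uncomment to run (requires GEMINI_API_KEY in .env)
```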
Feel free to customize the instructions, model choice, or input prompts to explore more advanced use cases.
Topic : Runner in OpenAI Agents SDK | Step-by-Step Tutorial with Real Example
In this video, we’ll break down the Runner in the OpenAI Agents SDK — the most essential function to run your AI agents.
You’ll learn:
✅ What Runner.run() does
✅ How it powers your agents behind the scenes
✅ How to pass input, get output, and manage agent context
Topic : OpenAI Agents SDK: Master the handoff() Function with Real Example
In this video, you’ll learn what handoff() is, how it works in multi-agent systems, and why it’s crucial for building smart, modular AI workflows. We’ll break everything down in a simple, beginner-friendly way with a real code example — explained line-by-line in plain language.
💡 What you’ll learn:
- What is the
handoff()function? - How does it transfer tasks between agents?
- What is the difference between parent and child agents?
- Complete working code with easy explanations
Topic : guardrails in openai agents sdk | input & output guardrails with example
Want to secure your OpenAI agents from malicious or unwanted inputs?
In this tutorial, we explain Guardrails in the OpenAI Agents SDK — both Input Guardrails and Output Guardrails — in a simple, beginner-friendly way!
You’ll learn:
✅ What Guardrails are
✅ What are Tripwires
✅ How to block math questions using input guardrails
✅ How to monitor agent output using output guardrails
Unlock the Power of AI: Build Your Personal Study Assistant in Just 20 Minutes
For students of GIAIC and PIAIC, mastering the integration of AI into everyday tasks can be a game-changer. Our latest tutorial, “Build Your Own Personal Study AI Assistant in 20 Minutes,” is designed to equip you with the skills to create a powerful AI tool using Python, FastAPI, and OpenAI Agents
OpenAI Agents SDK Projects
Project 1: Personal Study Assistant
Description: Build an AI agent system that helps students manage their study schedules, find relevant resources, and summarize academic content. The system will consist of multiple agents working together: a scheduler, a web researcher, and a summarizer.
Directions for Students:
- Define the Agents:
- Create a “Scheduler Agent” to take user input (e.g., study topics and deadlines) and generate a study plan.
- Create a “Research Agent” to search the web for articles, videos, or papers related to the study topics.
- Create a “Summarizer Agent” to condense the research findings into concise notes.
- Set Up Inputs:
- Design a simple interface (e.g., command-line or text-based) where users can input their study goals and time constraints.
- Configure Tools:
- Use the built-in web search tool in the Responses API to enable the Research Agent to fetch real-time information.
- Allow the Summarizer Agent to process text from web results or uploaded files (e.g., PDFs).
- Implement Handoffs:
- Ensure the Scheduler Agent passes the study topics to the Research Agent, which then hands off the collected data to the Summarizer Agent.
- Add Guardrails:
- Add checks to ensure the web search results are relevant (e.g., filter out non-academic sources) and the summaries are concise (e.g., limit word count).
- Test and Debug:
- Test the system with sample inputs like “Learn about machine learning by next week” and use the SDK’s tracing tools to monitor agent interactions.
- Enhance the Project:
- Add a feature to save the study plan and summaries to a file for later use.
Who Should Watch:
This tutorial is ideal for students and developers eager to expand their knowledge in AI and web development. Whether you’re looking to enhance your study sessions or explore new tech integrations, this guide offers valuable insights and hands-on experience.
Topic : sdk release process in openai agents sdk
This versioning format (0.Y.Z) is made up of three numbers:
- 0 → The leading zero means the SDK is still in a rapid-development phase.
- Y → Minor version
- Z → Patch version
✅ Patch Version (Z) — Safe Changes
Now let's talk about the Z (patch) version.
It is incremented when the changes are non-breaking, such as:
- ✅ Bug fixes
- ✅ New features
- ✅ Internal/private interface changes
- ✅ Updates to beta features
📊 Summary Table
| Version | When Is It Bumped? | What Does It Reflect? |
|---|---|---|
| 0 | While the SDK is still evolving | Indicates the development stage |
| Y | Breaking changes | Major changes to the public interface |
| Z | Non-breaking changes | Bug fixes, new features, or internal tweaks |
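As a toy illustration of the 0.Y.Z scheme described above (the function and the change labels are invented; the mapping follows the table):

```python
# Bump a 0.Y.Z version string according to the kind of change.
def bump_version(version: str, change: str) -> str:
    zero, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":        # public-interface change -> bump Y, reset Z
        return f"{zero}.{minor + 1}.0"
    if change == "non-breaking":    # bug fix, new feature, internal tweak -> bump Z
        return f"{zero}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump_version("0.4.2", "non-breaking"))  # -> 0.4.3
print(bump_version("0.4.2", "breaking"))      # -> 0.5.0
```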
Topic : Streaming Events in OpenAI Agents SDK
In today's video we will talk about a powerful feature of the OpenAI Agents SDK — Streaming Events. Whenever an agent is working, it generates real-time signals that we call events. They tell us what the agent is doing right now — creating a message, calling a tool, or being updated.
🔍 StreamEvent Type:
First of all, StreamEvent is a type alias that represents three kinds of events:
```python
StreamEvent = Union[
    RawResponsesStreamEvent,
    RunItemStreamEvent,
    AgentUpdatedStreamEvent,
]
```
In other words, when we stream the agent's output, any of these three events may arrive.
Whenever the agent generates a message, calls a tool, or gets updated, a streaming event is triggered. There are three main types of events:
- RawResponsesStreamEvent – Captures raw, real-time text output as it’s being generated by the LLM. This is like watching the model “think out loud.”
- RunItemStreamEvent – Represents internal steps like tool calls, responses, handoffs, or reasoning items. This gives you insight into how the agent is processing information.
- AgentUpdatedStreamEvent – Indicates that a new agent has been created or an existing one has been updated. Helpful when agent behavior changes during a session.
💡 Now compare this with a real-life example:
Imagine you are chatting with an AI chatbot…
- When it is thinking about what to reply ➡️ RunItemStreamEvent
- When it is typing out “Hello, how are you?” ➡️ RawResponsesStreamEvent
- And if the agent changes mid-conversation ➡️ AgentUpdatedStreamEvent
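A sketch of how these three event types might be consumed in code (an assumption-based example following the openai-agents streaming API; the agent, input, and printed fields are placeholders):

```python
import asyncio

async def stream_demo() -> None:
    # Third-party import (requires `openai-agents`).
    from agents import Agent, Runner

    agent = Agent(name="Assistant", instructions="Reply briefly.")  # placeholder agent

    result = Runner.run_streamed(agent, input="Say hello.")
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            pass  # token-level model output as it is being generated
        elif event.type == "run_item_stream_event":
            print("step:", event.item.type)  # tool call, message, handoff, ...
        elif event.type == "agent_updated_stream_event":
            print("now running:", event.new_agent.name)

# asyncio.run(stream_demo())  # uncomment to run (requires a configured model/API key)
```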
Agent Output Schema in OpenAI SDK | JSON Validation & Structured Output Explained
Learn how to define and validate structured responses in OpenAI Agents SDK using AgentOutputSchema and AgentOutputSchemaBase.
In this tutorial, we break down the purpose of is_plain_text, validate_json, json_schema, and other key methods that help ensure clean, reliable output from your AI agents.
Whether you’re building tools, multi-agent workflows, or data pipelines, structured output validation is essential.
🎓 Perfect for beginners and intermediate AI developers.
🔗 Topics Covered:
- What is AgentOutputSchemaBase?
- Validating JSON with strict schema
- is_plain_text vs structured output
- Practical use cases
- Handling errors like ModelBehaviorError
Topic : crypto agent openai agents sdk | agentic ai in pakistan
Real-Time Crypto Price Chatbot Using Agentic AI + Chainlit + Python
✨ Project Overview
In the age of AI-driven automation and real-time data needs, we created a powerful Crypto Price Chatbot using Agentic AI SDK, Python, and Chainlit. This intelligent agent helps users get live cryptocurrency prices — for any coin, in any currency — within a conversational interface.
This project leverages the power of LLM agents, a real-time data API (CoinGecko), and an interactive frontend to deliver an elegant and practical solution. The goal was not only to build a chatbot but to explore how AI agents can work with external tools and APIs in an intelligent, helpful, and scalable way.
🎯 Why We Built This
There are thousands of crypto coins — Bitcoin, Ethereum, Litecoin, Dogecoin, etc. — and their prices fluctuate every second. Traders, learners, and curious users constantly need this data.
But:
- Traditional websites are slow or cluttered
- Crypto APIs are often complex or paid
- New users may feel lost or overwhelmed
So we thought:
“What if we could build a simple, AI-powered agent that understands your query and gives you live prices like a friend?”
That’s where this project was born — using Agentic AI SDK, we created a smart agent that can:
- Take natural language input
- Understand the coin and currency
- Fetch real-time data
- Reply in a friendly format
🧠 What We Built
We created an AI agent named CryptoDataAgent using:
- 🧠 Agentic AI SDK – to build and manage the agent
- 🌐 CoinGecko API – for real-time crypto price data (no API key required!)
- 💬 Chainlit – for the live chatbot interface
- 🐍 Python – as the programming language
The bot can answer questions like:
- “Bitcoin price kya hai in USD?”
- “Tell me Ethereum rate in GBP”
- “Litecoin ki EUR price kya hai?”
✅ Step 1: Install Dependencies
pip install chainlit agents openai requests python-dotenv
✅ Step 2: Build the Tool (tools.py)
We used CoinGecko’s public API (no key required) to create a reusable tool:
@function_tool
def get_crypto_price(coin: str = "bitcoin", currency: str = "usd") -> str:
    # Send a request to the CoinGecko API and return the price
    url = "https://api.coingecko.com/api/v3/simple/price"
    data = requests.get(url, params={"ids": coin, "vs_currencies": currency}).json()
    return f"{coin} price: {data[coin][currency]} {currency.upper()}"
✅ Step 3: Define the Agent (agent_crypto.py)
Using Agentic AI, we wrote:
CryptoDataAgent = Agent(
    name="CryptoDataAgent",
    instructions="You are a helpful agent that gives real-time cryptocurrency prices...",
    model=OpenAIChatCompletionsModel(...),
    tools=[get_crypto_price]
)
We linked this agent with our tool and model (Gemini or OpenAI).
✅ Step 4: Chainlit Chat UI
Using Chainlit decorators:
@cl.on_chat_start
async def on_chat_start():
    await cl.Message("Welcome to the Crypto Price Chatbot!").send()

@cl.on_message
async def handle_message(message):
    # Read user input, pass it to the agent, and return the response
    result = await Runner.run(CryptoDataAgent, message.content)
    await cl.Message(result.final_output).send()
The agent processes natural text like “What’s Ethereum price in CAD?”, calls the API, and sends a polished response.
🎯 What Makes This Special?
- ✅ No Paid API Needed – CoinGecko API is free
- 🧠 LLM Agent Thinking – Understands user context
- ⚡ Real-Time Data – Always fresh prices
- 💬 Chat Interface – No coding knowledge needed
- 🛠️ Extensible – Can be extended to show graphs, trends, or trading alerts
🧪 Example Prompts That Work:
- “Bitcoin price in INR?”
- “Dogecoin ka rate kya hai USD mein?”
- “Show me Ethereum value in EUR”
The agent will:
- Understand the natural-language query
- Pick the coin & currency
- Call the tool
- Show the result
💡 Future Enhancements
We plan to expand this bot to include:
- 📈 24hr trends and historical charts
- 💵 Market cap, volume, rank
- 🔔 Price alerts (email / SMS)
- 👥 Multi-agent system for portfolio tracking
🗨️ Final Thoughts
This project shows how AI agents + tools + chat interfaces can solve real-world problems. It blends smart natural language processing, real-time data handling, and user-friendly design.
Whether you’re a developer, student, or crypto enthusiast — you can learn a lot by building this.
🔚 Conclusion
We built a practical, intelligent, and free crypto price agent using Agentic AI SDK, Chainlit, and Python — and it’s ready to help anyone who wants live crypto data instantly.
💡 Try asking it: “Bitcoin price in PKR?”
We’re excited to build more such agentic tools that combine AI with everyday tasks.
Topic: GPT-4.1 Prompting Guide
Today we will walk through the GPT-4.1 Prompting Guide, especially for those who build agents or want to work with AI through the OpenAI SDK.
This guide is completely beginner-friendly and covers every important topic, such as Agentic Workflows, Long Context, Chain of Thought, Instruction Following, and the apply_patch system.
Let's get started.
🔹 1. Agentic Workflows (0:30 – 3:00)
GPT-4.1 has been trained to work like an agent: the model completes work autonomously, step by step, until it reaches a solution.
Persistence Reminder: “You are an agent – please keep going until the user’s query is completely resolved.” In other words, tell the model not to stop until the task is fully done.
Example: “Keep going until the user’s problem is solved.”
Tool-Calling Reminder: “If you are not sure about file content or codebase structure… use tools!” That is, it is important to teach the model to use its tools.
Planning Reminder: “You MUST plan extensively before each function call…” This instruction pushes the model toward chain-of-thought-style planning.
Together, these three reminders turn GPT-4.1 from a “chatbot” into an independent agent that solves the user's problem on its own.
🔹 2. Tool Calls
GPT-4.1 understands tools very well.
✅ Tip: Keep the tool's name clear, and explain in its description what the tool does.
Mistake: Pasting tool descriptions manually into the prompt and expecting the model to work them out.
Correct approach: Define the tool in the API using the tools field. Example: a Python tool with an "input" parameter.
🔹 3. Prompting-Induced Planning & Chain-of-Thought
GPT-4.1 is not a reasoning model, but if you tell it in the prompt to think step by step, it performs better.
💡 Example:
“First, think about which documents are needed to solve the query. Print their titles and IDs. Then build a list of the IDs.”
This Chain-of-Thought technique helps the model with planning and logic.
🔹 4. Long Context
GPT-4.1's context window goes up to 1 million tokens, which means it can understand very long context.
🧠 Use cases:
- Analyzing documents
- Extracting relevant information
- Multi-hop reasoning (combining more than one piece of information to produce an answer)
💡 Tips:
- Write the instructions at both the start and the end of the context.
- If you only write the instructions once, put them at the top, not the bottom.
🔹 5. Instruction Following
GPT-4.1 follows your instructions very strictly. This means that if the prompt is unclear, the model will behave exactly as you wrote it.
📌 When building a workflow, these sections are helpful:
# Role and Objective
# Instructions
# Output Format
# Examples
⚠️ Common mistakes:
- Conflicting or incomplete instructions
- Forcing the model not to answer before a tool call; this can sometimes cause hallucinations.
🔹 6. Prompt Structure & Delimiters
The prompt structure should be neat and organized:
Recommended format:
# Role and Objective
# Instructions
## Sub-instructions
# Reasoning Steps
# Output Format
# Examples
📍 Delimiter Tips:
- Use Markdown headings (###)
- XML structure also performs well
- Use JSON only when strictly necessary; JSON performance drops in long contexts
🔹 7. Patch Format – Apply Patch
If you are building an agent that modifies code, the apply_patch command is used.
📄 Format Example:
apply_patch <<"EOF"
*** Begin Patch
*** Update File: path/to/file.py
@@ def function_name
- old line
+ new line
*** End Patch
EOF
💡 Follow the patch format strictly. On success the output is Done!, but if the patch was not applied, check the warning lines.
🔹 8. SWE-bench Verified Sample Prompt
OpenAI tested GPT-4.1 on SWE-bench Verified (a software engineering benchmark), and with agentic prompts it solved 55% of the problems!
📋 In the sample prompt:
- Every step is explained: read the issue, investigate the codebase, plan, fix, test, iterate
- The model is told: “Never stop until all tests pass.”
You can use this prompt as a base for your own software automation agents.
🧠 9. General Advice & Caveats
✔️ When writing a prompt:
- Structure it clearly
- Avoid repetition and over-formatting
- Include examples that show the correct output
- Every expected behavior should be explained in the rules
⚠️ Caveats:
- The model can sometimes run out of steam in long, repetitive tasks; instruct it strongly to produce the full output
- Rare errors can occur with parallel tool calls; testing is essential
If you build agents, or want to do serious development with GPT-4.1, this prompting guide is a gold mine for you.
| 🔢 | Question | ✅ Answer | 🧠 Description To Remember |
|---|---|---|---|
| 1 | What do you call it when AI gives an answer that is wrong but sounds right? | Hallucination | When AI confidently gives a wrong or nonsensical answer, it is called a hallucination. |
| 2 | When AI links a race or gender to a job or skill, what error is that? | Bias | When the AI's training data is unfair or biased, it makes incorrect assumptions. |
| 3 | Which 4 beliefs is IBM's AI based on? | Open, trusted, targeted, empowering | IBM wants AI to be open, trusted, targeted (focused), and empowering (helpful). |
| 4 | Which AI model is ChatGPT based on? | Transformer-based LLM | ChatGPT is a large language model that uses the transformer architecture; that is what makes it powerful. |
| 5 | Who first raised the question “Can Machines Think?” | Alan Turing | Alan Turing is considered the “father” of AI; he introduced the concept in 1950. |
| 6 | Which learning technique lets AI learn by trial and error? | Reinforcement Learning | Just like rewards in a video game, the AI gets feedback on right and wrong. |
| 7 | Which technology lets AI converse like humans? | Chatbots | Chatbots like ChatGPT, Siri, or Alexa talk to people via text/audio. |
| 8 | Which AI field learns from data by itself, without explicit programming? | Machine Learning / Deep Learning | The AI learns to find patterns and make decisions from data on its own. |
| 9 | Which technology helps AI understand human language? | Natural Language Processing | NLP's job is to teach AI to understand language and meaning. |
| 10 | Which IBM tool did BT Global use to monitor 5.5 million network items? | IBM SevOne NPM | This is IBM's advanced tool for tracking network performance in real time. |
| 11 | Which part of watsonx is used to train/tune/deploy AI models? | watsonx.ai | This is the AI brain where models are trained and deployed. |
| 12 | What does ESG stand for? | Environmental, Social, Governance | Companies have a responsibility to care for the environment, society, and management. |
| 13 | What is it called when we give AI detailed instructions? | Prompt Engineering | Telling ChatGPT “write a story” is a prompt. |
| 14 | Which AI generates new content (text, image, code)? | Generative AI | DALL·E makes images and ChatGPT writes answers; that is Generative AI. |
| 15 | What are the 3 types of prompt engineering? | Few-shot, One-shot, Zero-shot | Few-shot: giving a few examples; One-shot: a single example; Zero-shot: no examples. |
| 16 | What type of databases are MongoDB and EnterpriseDB? | OEM Databases | OEM here refers to third-party databases that work with IBM's system. |
| 17 | If the public cloud is not allowed, where do you deploy AI? | Cloud Pak for Data on private cloud | Deploy AI on a private cloud in a secure, controlled environment. |
| 18 | Which technology reads data from multiple sources (clouds, databases) without copying it? | Data Virtualization | Accessing data without migrating it; that is virtualization. |
| 19 | Which architecture lets large organizations democratize data access? | Hybrid Cloud Data Fabric | This fabric is designed to make data easily accessible from everywhere. |
| 20 | What causes data sprawl these days? | Copy/paste data sharing | Every department makes its own copy, which clutters the data. |
| 21 | What is Cloud Pak's core benefit? | Consolidate, reduce spend, unlock value | Combine systems and data to lower cost and increase value. |
| 22 | Through which program do CP4DaaS users get a discount? | Hybrid Subscription Advantage | IBM's discount model that benefits hybrid-usage customers. |
| 23 | What is an example of data complexity? | Unstructured formats like docs/images | PDFs, Word files, and images are not structured, which makes them complex. |
| 24 | Which service is not a base part of Cloud Pak? | Db2 Warehouse | This is a separate service, not a core part of CP4D. |
| 25 | What does the CP4D data fabric support? | Virtualization, catalog, governance | These 3 things help with AI readiness. |
| 26 | What are the 2 major challenges in scaling AI? | Data access, governance | Getting the data and managing it properly are the biggest problems. |
| 27 | Which IBM tool assigns policies for data assets? | IBM Knowledge Accelerators | It provides pre-built policies for faster setup. |
| 28 | Which tool manages bias, fairness, and drift? | Watson OpenScale | It handles transparency and monitoring inside AI. |

Topic: What is Model Context Protocol?
Model Context Protocol (MCP) is a communication protocol designed to simplify how large language models (LLMs) and AI systems interact with various resources and external tools. Think of it as a universal language or set of rules that enables various AI systems to share context, information, and capabilities.
In simpler terms:
- Without MCP: Your AI tools work in isolation, like separate islands that can’t easily share information
- With MCP: Your AI tools can work together as a team, sharing information and capabilities
It has three key components:
- Host
- Client
- Server

MCP solves this by creating a standardized way for:
- AI models to communicate with each other
- AI models to use external tools and resources
- Applications to interact with multiple AI models
MCP’s Core Concepts and Capabilities
Understanding MCP requires familiarity with these key concepts:
6.1. Resources
Resources are the fundamental building blocks in MCP—they represent capabilities that can be accessed through the protocol.
Types of resources include:
a. Models – AI systems that can process and generate text:
- Large Language Models like Claude or GPT
- Specialized models for specific domains or tasks
- Each with its own capabilities and limitations
b. Tools – External systems that models can use to:
- Retrieve information (search engines, databases)
- Perform actions (file operations, API calls)
- Interact with the outside world
c. Contexts – Shared information that persists across interactions:
- Conversation history
- User preferences
- Session data
- File contents
6.2. Prompts
In MCP, prompts are structured messages sent to models. They include:
- Content: The actual message or instruction
- Context: Additional information for the model
- Tools: Available external capabilities
- Parameters: Settings for how the model should respond
6.3. Sampling
Sampling controls how models generate responses:
- Temperature: Controls randomness (0.0 = deterministic, 1.0 = creative)
- Top-p: Limits token selection to most likely options (nucleus sampling)
- Max tokens: Sets the maximum length of the response
- Stop sequences: Strings that will cause generation to stop
These parameters allow fine-tuning the model’s output for different tasks.
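As a sketch, these sampling parameters typically travel as fields of the request payload sent to a chat-completion style API. The field names below follow the common OpenAI-style convention and are an assumption; other providers may name them differently:

```python
# Build a request payload carrying the sampling parameters described above.
# Field names follow the common OpenAI-style convention (an assumption;
# other providers may differ).
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Name one planet."}],
    "temperature": 0.0,   # deterministic output
    "top_p": 0.9,         # nucleus sampling: keep the top 90% probability mass
    "max_tokens": 50,     # cap the response length
    "stop": ["\n\n"],     # stop generation at a blank line
}
print(payload["temperature"])
```

Lowering temperature and top_p makes output more repeatable (good for extraction tasks), while raising them encourages variety (good for brainstorming).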
6.4. Transport
Transport defines how information moves between MCP components:
- Authentication: Methods for securing connections
- HTTP/REST: For basic request-response interactions
- WebSockets: For streaming responses and real-time updates
- Message formats: JSON structures for requests and responses

First, we have capability exchange, where:
- The client sends an initial request to learn the server's capabilities.
- The server then responds with its capability details.
- For instance, a Weather API server, when invoked, can reply with its available “tools”, “prompt templates”, and any other resources for the client to use.
Once this exchange is done, the client acknowledges the successful connection, and further message exchange continues.
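A sketch of this handshake, written as Python dicts in MCP's JSON-RPC framing; the exact field set depends on the protocol version you target, so treat the values as illustrative:

```python
# Abbreviated capability-exchange messages in MCP's JSON-RPC framing.
# Field names follow the MCP specification; exact values are illustrative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0"},
    },
}

# The server replies with what it offers (e.g. a weather server's tools):
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {}, "prompts": {}},
        "serverInfo": {"name": "weather-server", "version": "1.0"},
    },
}

# Finally the client acknowledges, and normal message exchange begins:
initialized_notification = {"jsonrpc": "2.0", "method": "notifications/initialized"}
```

After this exchange the client knows exactly which tools and prompt templates it may request, which is what makes MCP servers discoverable plug-and-play components.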
MCP Client
This component is embedded within host applications and handles:
- Translating between the host application’s needs and MCP’s standardized format
- Managing connections with MCP servers
- Requesting permissions from users when external tools need to be accessed
- Processing responses from servers and feeding them back to the host application
The MCP Client functions as a bridge between the user-facing application and the external capabilities.
MCP Server
This is the component that integrates with external data sources:
- Connects to specific external systems (GitHub, databases, weather services, etc.)
- Exposes standardized interfaces to interact with these external systems
- Handles the logic required to transform MCP requests into external system calls
- Manages authentication and connections to external resources
Think of MCP Servers as specialized adapters that connect the MCP ecosystem to specific external systems.
MCP vs. API: Quick comparison
| Feature | MCP | Traditional API |
|---|---|---|
| Integration Effort | Single, standardized integration | Separate integration per API |
| Real-Time Communication | ✅ Yes | ❌ No |
| Dynamic Discovery | ✅ Yes | ❌ No |
| Scalability | Easy (plug-and-play) | Requires additional integrations |
| Security & Control | Consistent across tools | Varies by API |

MCP’s tool use and flow, with a code example
Suppose the user says:
“Please tell me the weather in Lahore.”
This message is wrapped in MCP and sent to the model. The model decides that the “weather tool” needs to be called.
MCP then calls the tool, and when the result arrives, it is handed back to the model, again in MCP's format.
This way, every interaction is structured and predictable.
from agents import Agent, Runner, OpenAIChatCompletionsModel
from openai import AsyncOpenAI
import asyncio
import os
from dotenv import load_dotenv
load_dotenv()
API_KEY = os.getenv("OPENAI_API_KEY")
model = OpenAIChatCompletionsModel(
    model="gpt-4o",  # using the GPT-4o model
    openai_client=AsyncOpenAI(api_key=API_KEY)
)
agent = Agent(
    name="Weather Agent",  # the agent's name
    instructions="You are a weather assistant that reports the weather for cities.",
    model=model
)
async def main():
    result = Runner.run_streamed(
        starting_agent=agent,
        input="What's the weather in Lahore?"
    )
    # Stream the model's output as it arrives
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            print(getattr(event.data, "delta", ""), end="")
asyncio.run(main())
👆 Every interaction in this code flows through a structured, protocol-defined format, so the model does not get confused and gives the correct response step by step.

Some use cases of MCPs
- Smarter Coding Assistants in IDEs: MCP allows AI coding assistants within Integrated Development Environments (IDEs) to connect directly to your specific codebase, documentation, and related tools.
- Hyper-Contextual Enterprise Chatbots: Instead of a generic chatbot, enterprises can build assistants that securely tap into internal knowledge bases.
- More Capable AI Assistants: AI applications can use MCP to securely interact with local files, applications, and services on your computer.
MCP provides a unified and standardized way to integrate AI agents and models with external data and tools. It’s not just another API; it’s a powerful connectivity framework enabling intelligent, dynamic, and context-rich AI applications.
Topic: What is Agent Communication Protocol (ACP)?
The Agent Communication Protocol (ACP) is a communication standard that allows multiple AI agents to interact in a safe, structured, and interpretable manner. Just like humans communicate using a shared language, AI agents use ACP to understand each other’s messages.
🧠 What Does ACP Do?
ACP defines the rules and structure for how agents exchange messages. Its main goals include:
- Creating mutual understanding between agents
- Giving structure to each message (who sent it, what it says, why it says it)
- Making coordination and collaboration easier in multi-agent systems
🔍 Where is ACP Used?
| Use Case | Description |
|---|---|
| 🤝 Multi-Agent Collaboration | Multiple agents working together to complete a task (e.g., planning, dialog management) |
| 🏢 Enterprise Workflows | One agent reads emails, another manages calendars, a third handles user interaction |
| 🎮 Game AI Systems | NPCs (non-player characters) exchange messages to coordinate movements or decisions |
| 🌐 Distributed Systems | Bots deployed across a network communicate across different machines or environments |
🧾 Key Components of ACP
- Performatives / Message Types
  Inspired by the FIPA standard, ACP uses performatives to define message intentions, such as:
  - Request: “Please perform this task”
  - Inform: “Here's some information”
  - Confirm/Disconfirm: “Yes, that's true” or “No, that's false”
  - Query: “I need this information”
- Sender and Receiver
  Every message includes the source (sender) and the target (receiver).
- Ontology
  A shared vocabulary or domain-specific dictionary that helps both agents understand message terms.
- Content and Context
- Content: The actual message or instruction
- Context: The background or situation of the message
🧪 Example JSON Message (ACP Format)
{
"type": "request",
"sender": "calendar_agent",
"receiver": "email_agent",
"content": "Please send an invite for tomorrow's meeting at 3PM.",
"ontology": "office_schedule"
}
In this message:
- The calendar agent is requesting the email agent to send a meeting invite.
- The message follows the “request” performative.
- The ontology used is office_schedule.
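To illustrate, here is a hypothetical helper (not part of any official ACP library) that builds such a message and checks that the required fields are present:

```python
# A minimal helper that builds and checks an ACP-style message.
# The field set mirrors the JSON example above; the helper itself is
# hypothetical and not part of any official ACP library.
REQUIRED_FIELDS = {"type", "sender", "receiver", "content"}

def make_acp_message(msg_type, sender, receiver, content, ontology=None):
    message = {
        "type": msg_type,
        "sender": sender,
        "receiver": receiver,
        "content": content,
    }
    if ontology:
        message["ontology"] = ontology
    # Validate that every required field is present and non-empty
    missing = REQUIRED_FIELDS - {k for k, v in message.items() if v}
    if missing:
        raise ValueError(f"Missing ACP fields: {sorted(missing)}")
    return message

invite = make_acp_message(
    "request", "calendar_agent", "email_agent",
    "Please send an invite for tomorrow's meeting at 3PM.",
    ontology="office_schedule",
)
```

Validating messages at construction time means a receiving agent can always rely on the sender, receiver, and performative being present before it decides how to respond.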
🔄 ACP vs MCP (Model Context Protocol)
| Feature | ACP | MCP (Model Context Protocol) |
|---|---|---|
| Purpose | Agent-to-agent communication | Model-to-tool/context integration |
| Use Case | Dialogue agents, coordination | Model orchestration, tool calling |
| Format | Messages & performatives | Structured prompts & tool responses |
| Level | High-level logical messaging | Low-level model execution control |
🔄 Agent Communication Protocol FAQ’s
What is agent protocol?
An agent protocol is a set of rules that agents in a system follow to communicate and interact with each other efficiently.
What is agent communication?
Agent communication is the exchange of messages between autonomous agents in a multi-agent system to achieve coordinated actions.
What are serial communication protocols?
Serial communication protocols are rules governing the transmission of data one bit at a time over a communication channel.
What is a communication protocol?
A communication protocol defines how data is exchanged between devices, ensuring successful communication through structured rules.
What is the difference between ACP and MCP?
ACP is for agent-to-agent communication, while MCP is used for model-to-tool integration and shared context.
Can I customize the structure of ACP messages?
Yes, you can design ACP messages in JSON, XML, or any structured format, as long as the performative, sender, receiver, and so on are clearly defined.
Is ACP only used in AI systems?
Mostly in AI and autonomous systems, but it can be used in any multi-agent system.
✔️ Benefits of ACP:
🚀 Standardization: Allows different AI systems to interact seamlessly.
🔄 Efficiency: Reduces errors and improves response times.
📡 Scalability: Supports multiple AI agents working together.
🤝 Interoperability: Enables AI from different developers to communicate correctly.
🎯 Summary
ACP is a structured way to enable multiple AI agents to collaborate. It ensures that every message exchanged includes who sent it, what is being requested or shared, and in what context — allowing agents to operate like a well-coordinated team.
Health Care Chatbot using Python, Gemini & OpenAI Agents SDK
The role of AI in the healthcare industry is growing day by day. Today we will learn how to build a powerful healthcare chatbot using Python, Gemini AI, and the OpenAI Agents SDK.
What We'll Learn:
- Python environment setup for healthcare applications
- OpenAI Agents SDK integration for intelligent medical responses
- Using Gemini AI for medical queries
- Natural Language Processing for healthcare data
- Complete chatbot deployment process
What you’ll learn:
✅ Setting up Python environment for healthcare chatbot development
✅ Integrating OpenAI Agents SDK for intelligent responses
✅ Using Gemini AI for enhanced medical query processing
✅ Building complete healthcare chatbot from scratch
✅ Implementing natural language processing for medical queries
✅ Deploying your healthcare chatbot project
Required Tools:
- Python 3.8+
- OpenAI API key
- Gemini AI access
- Basic programming knowledge
Key Features:
- Medical symptom analysis
- Drug interaction checking
- Appointment scheduling
- Health record management
- Multi-language support (Urdu/English)
Implementation Steps:
- Environment setup and installing dependencies
- OpenAI Agents SDK configuration
- Gemini AI integration for enhanced responses
- Medical database connection
- User interface development
- Testing and deployment
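As a tiny illustration of the symptom-analysis step, here is a keyword-based triage sketch in plain Python. In the real chatbot this decision would be delegated to the Gemini/OpenAI agent; the keyword lists below are invented for the example and are not medical advice:

```python
# A toy, keyword-based symptom triage function. In the real chatbot this
# decision would be made by the LLM agent; the keyword lists below are
# invented for illustration and are NOT medical advice.
URGENT = {"chest pain", "shortness of breath", "severe bleeding"}
ROUTINE = {"headache", "sore throat", "mild fever"}

def triage(message: str) -> str:
    text = message.lower()
    if any(symptom in text for symptom in URGENT):
        return "urgent: please seek immediate medical care"
    if any(symptom in text for symptom in ROUTINE):
        return "routine: consider booking an appointment"
    return "unknown: ask a follow-up question"

print(triage("I have chest pain since morning"))
```

Hard-coded rules like these are brittle, which is exactly why the project hands this classification to an LLM agent that can interpret free-form, bilingual user messages.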
OpenAI Agents SDK Agentic Quiz MCQs
Are you preparing for the Agentic AI SDK quiz, an interview, or a GIAIC exam and struggling to understand complex concepts?
This video is your shortcut to success! We’ve broken down the 80 most important Agentic AI SDK multiple-choice questions (MCQs) with clear Roman Urdu explanations to help you learn faster and smarter — even if you’re new to AI or not from a Computer Science background.
🧠 Master Agentic AI SDK: 80 Must-Know MCQs Explained in Urdu
🎯 Why This Video Is a Must-Watch for Students:
✅ Real Exam Questions
These questions are selected from real OpenAI SDK assessments, quizzes, and practical scenarios.
✅ Roman Urdu Explanations
Every concept is simplified in Roman Urdu so that even beginners can understand tool behavior, guardrails, handoffs, and more without getting lost in technical jargon.
✅ Boosts Exam & Interview Confidence
Whether you’re preparing for GIAIC’s certification or an AI-related job interview, this video gives you the edge.
✅ Saves You Time
Skip long docs! This crash course teaches you the core SDK logic in under 30 minutes — faster revision, better retention.
✅ Built for Pakistani & South Asian Learners
The Roman Urdu format bridges the gap for students who prefer bilingual explanations in a local, relatable style.
🔍 Topics Covered:
- AgentOutputSchema and schema validation
- Input & Output guardrails in action
- Tool call execution, retries & cancellations
- Agent handoffs and context preservation
- Streaming events, error handling, and control flow
- Tool behavior settings like stop_on_first_tool
📌 Perfect For:
- GIAIC Students
- OpenAI SDK Learners
- FastAPI & Agentic AI Developers
- AI/ML Beginners in Pakistan & South Asia
- Teachers, Freelancers & Mentors
📺 Ready to master Agentic AI SDK the easy way?
Watch now, learn fast, and pass your quiz with confidence.