Lesson 1: Asyncio Foundations — Event Loop and Coroutines
Opening Hook
Imagine you need to fetch data from 10 different APIs for a weather dashboard. In traditional synchronous Python, you'd call each API one after another:
- API 1: 2 seconds
- API 2: 2 seconds
- API 3: 2 seconds
- ... and so on
Total time: 20 seconds.
But here's the problem: while waiting for API 1 to respond, Python does nothing. It's idle. Your CPU isn't doing calculations—it's just sitting there waiting for the network. That's wasteful.
With asyncio, you can ask Python to switch to API 2 while waiting for API 1. Then switch to API 3 while waiting for both. All 10 APIs run concurrently, completing in roughly 2 seconds instead of 20.
That's asyncio. It's Python's way of doing cooperative multitasking—letting your program juggle multiple tasks without the overhead of true parallelism.
In this lesson, you'll understand how asyncio works, when to use it, and why it matters for modern applications. Most importantly, you'll learn to think about concurrency, not just memorize syntax.
What Is Asyncio, Really?
Asyncio is Python's asynchronous I/O library. It's built on three core ideas:
- The Event Loop: A manager that switches between tasks
- Coroutines: Functions that can pause and resume (marked with `async def`)
- The `await` keyword: A pause point where the event loop can switch to other tasks
Think of it like a restaurant with one waiter serving 10 tables:
- Without asyncio: The waiter visits Table 1, waits for the customer to order (2 minutes), then moves to Table 2. Each table takes 2 minutes of pure waiting. Total: 20 minutes for 10 tables.
- With asyncio: The waiter takes an order from Table 1, then immediately moves to Table 2 (who's still deciding). While Table 2 decides, the waiter checks on Table 3. By the time Table 1's food is ready, other orders have been taken. Efficiency!
The event loop is the waiter. Coroutines are the tables. The await keyword says "I'm pausing here—go help someone else."
The Event Loop Conceptually
Let's make this concrete. Here's what happens when you run async code:
1. asyncio.run() creates an event loop
2. The loop starts running your coroutine
3. Your coroutine hits an `await` statement
4. The loop says "OK, I'll pause you and check other tasks"
5. If other tasks are ready, run them
6. If they also hit `await`, pause them too
7. When a paused task is ready (e.g., network response arrived), resume it
8. Repeat until all tasks finish
The magic: steps 4-7 happen so fast they feel simultaneous, even though they're not truly parallel.
Most importantly: You don't manage the event loop directly in Python 3.14+. You just use asyncio.run(), and it handles everything.
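The steps above can be sketched with two coroutines (the `worker` helper is made up for illustration). Watching the print order shows the loop switching away at each `await` and resuming each task when its sleep finishes:

```python
import asyncio

async def worker(name: str, delay: float) -> str:
    print(f"{name}: started")      # runs until the first await
    await asyncio.sleep(delay)     # steps 3-4: pause; the loop switches away
    print(f"{name}: resumed")      # step 7: resumed once the sleep finishes
    return name

async def main() -> None:
    # Both workers start before either finishes; the loop switches at each await
    results = await asyncio.gather(worker("A", 0.2), worker("B", 0.1))
    print(results)  # ['A', 'B']

asyncio.run(main())
```

Note that B resumes before A even though A started first: the loop wakes each task when its own wait is over, not in starting order.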
💬 AI Colearning Prompt
"Explain to your AI: How does the event loop know when to switch between tasks? What tells it 'stop waiting for this API and check the next task'?"
This is a great question to explore with Claude Code. The answer involves understanding await as a yield point—when you await, you're essentially saying "I'm blocked, check other tasks."
Coroutines: Functions That Can Pause
A coroutine is a function declared with async def. Unlike regular functions, coroutines can pause their execution and resume later.
Here's the difference:
```python
# Regular function (synchronous)
def fetch_user(user_id: int) -> dict[str, str | int]:
    """Fetch user data. This blocks until complete."""
    # Imagine waiting 2 seconds for network
    return {"id": user_id, "name": "Alice"}

# Coroutine (asynchronous)
async def fetch_user_async(user_id: int) -> dict[str, str | int]:
    """Fetch user data. This can pause and resume."""
    # Imagine waiting 2 seconds for network
    return {"id": user_id, "name": "Alice"}
```
The async version looks the same, but it has superpowers:
- It can be paused at `await` points
- Other coroutines can run while it's paused
- It must be awaited, not called directly
When you call a coroutine without awaiting, you get a coroutine object—not the result:
```python
import asyncio

# ❌ Wrong: This doesn't run the coroutine
result = fetch_user_async(1)
print(result)  # <coroutine object fetch_user_async at 0x...>

# ✅ Right: This runs it with asyncio.run()
result = asyncio.run(fetch_user_async(1))
print(result)  # {'id': 1, 'name': 'Alice'}
```
🎓 Instructor Commentary
In AI-native development, you don't memorize the difference between `async def` and `def`—you understand why it matters. Async functions let tasks overlap. That's the insight. Syntax is cheap; architecture is gold.
The await Keyword: Pause Points
The `await` keyword marks a pause point. When Python hits `await`, it:
1. Pauses the current coroutine
2. Asks the event loop to check other tasks
3. Resumes this coroutine when ready
You can only use await inside an async function.
```python
import asyncio

async def example() -> None:
    print("1: Starting")
    await asyncio.sleep(2)  # Pause for 2 seconds
    print("2: Done waiting")

# asyncio.run() is needed to run the coroutine from synchronous code
asyncio.run(example())
```
Output:
```
1: Starting
2: Done waiting (after 2 seconds)
```
The magic: During that 2-second sleep, other tasks can run. That's the power of await.
🚀 CoLearning Challenge
Ask your AI Co-Teacher:
"Show me async code that fetches 3 different APIs using asyncio.sleep() simulation. Explain how all 3 run 'concurrently' even though Python is single-threaded. What determines the total time?"
Expected Outcome: You'll understand that concurrent tasks overlap in execution time, not CPU cores.
Example 1: Basic Asyncio Entry Point
Let's start with the simplest async program:
```python
import asyncio

async def greet(name: str) -> str:
    """A simple async function."""
    return f"Hello, {name}!"

# Python 3.14+ pattern: Use asyncio.run() as entry point
result = asyncio.run(greet("Alice"))
print(result)  # Hello, Alice!
```
Key points:
- `asyncio.run()` creates an event loop and runs your coroutine
- It's the standard way to run async code from a synchronous context
- After completion, the loop is closed automatically
- This is the Python 3.14+ standard—no manual loop management
Spec Reference & Validation
Specification: Python 3.14+ entry point using asyncio.run()
AI Prompt Used: "Generate a minimal asyncio.run() example with a simple async function"
Generated Code: Above example with greet() coroutine
Validation Steps:
- ✅ Code runs without errors: `python lesson1_ex1.py`
- ✅ Output matches expectation: "Hello, Alice!"
- ✅ No deprecation warnings (Python 3.14+ pattern verified)
- ✅ Type hints complete (`-> str` return type)
Example 2: Coroutine with await asyncio.sleep()
Now let's see concurrency in action. The key insight: await asyncio.sleep() is a pause point.
```python
import asyncio

async def fetch_api(api_name: str, delay: float) -> str:
    """Simulate an API call."""
    print(f"[{api_name}] Fetching... (will take {delay}s)")
    await asyncio.sleep(delay)  # Pause here; other tasks can run
    print(f"[{api_name}] Done!")
    return f"Data from {api_name}"

async def main() -> None:
    """Run multiple API calls."""
    # Sequential (blocks): would take 1 + 2 + 1.5 = 4.5 seconds
    # result1 = await fetch_api("API1", 1)
    # result2 = await fetch_api("API2", 2)
    # result3 = await fetch_api("API3", 1.5)

    # Concurrent (overlapped): takes ~2 seconds (max of all delays)
    results = await asyncio.gather(
        fetch_api("API1", 1),
        fetch_api("API2", 2),
        fetch_api("API3", 1.5),
    )
    print(f"All results: {results}")

asyncio.run(main())
```
Output (note the timing):
```
[API1] Fetching... (will take 1s)
[API2] Fetching... (will take 2s)
[API3] Fetching... (will take 1.5s)
[API1] Done!
[API3] Done!
[API2] Done!
All results: ['Data from API1', 'Data from API2', 'Data from API3']
```
Important: All three start immediately, finish in ~2 seconds (not 4.5 seconds), because they overlap.
Spec Reference & Validation
Specification: Multiple coroutines running concurrently with await asyncio.gather()
AI Prompt Used: "Create async functions simulating API calls with different delays, then run them concurrently using asyncio.gather(). Show timing."
Generated Code: Above fetch_api() and main() example
Validation Steps:
- ✅ Code runs without errors
- ✅ All three APIs start simultaneously (check output order)
- ✅ Total time ~2 seconds, not 4.5 seconds (concurrency works)
- ✅ Type hints complete (`-> str`, `-> None`)
- ✅ Cross-platform tested (Windows, Mac, Linux)
I/O-Bound vs CPU-Bound: When Asyncio Helps
This is crucial: Asyncio helps with I/O-bound tasks, not CPU-bound tasks.
I/O-Bound Tasks (Asyncio Helps!)
I/O-bound means the task spends time waiting for external resources:
- Network calls (APIs, databases)
- File reads/writes
- Waiting for user input
- Anything involving "external latency"
For I/O-bound tasks, asyncio shines because it lets other tasks run while waiting.
Example:
```python
import httpx  # third-party: pip install httpx
from typing import Any

async def fetch_weather(city: str) -> dict[str, Any]:
    """Fetch weather data. The network wait pauses this coroutine; other tasks can run."""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.weather.com/city/{city}")
        return response.json()
```
During the await client.get(...) call, Python's event loop checks other tasks. Perfect!
CPU-Bound Tasks (Asyncio Doesn't Help)
CPU-bound means the task spends time doing calculations:
- Parsing large files
- Data analysis
- Cryptography
- Machine learning inference
- Heavy math
For CPU-bound tasks, asyncio doesn't help because await doesn't pause CPU work. The task just blocks until the calculation finishes.
Example:
```python
async def analyze_data(data: list[int]) -> float:
    """Calculate average. This is CPU-bound—await won't help."""
    # This calculation blocks; no I/O involved
    return sum(data) / len(data)
```
Even with await, this runs sequentially. Multiple CPU-bound tasks don't overlap. (We'll solve this with InterpreterPoolExecutor in Lesson 4!)
Decision Tree
| Task Type | Example | Asyncio Helps? | Why? |
|---|---|---|---|
| Network call | Fetch from API | ✅ Yes | Network latency creates natural pause points |
| File I/O | Read 1000 files | ✅ Yes | Disk I/O is slow; overlap requests |
| Database query | SELECT from DB | ✅ Yes | Database latency creates pause points |
| Calculation | Sum 1M numbers | ❌ No | CPU is always busy; no pause points |
| Compression | Zip a file | ❌ No | CPU-intensive, no I/O waiting |
| Machine learning | Model inference | ❌ No | CPU-intensive (GIL prevents parallel threads) |
Concurrency vs Parallelism: Critical Distinction
This is often confused, so let's be precise:
Concurrency (What Asyncio Does)
Concurrency = multiple tasks making progress, but not necessarily running at the exact same time.
- One CPU core
- Tasks take turns (cooperative multitasking)
- Task A waits → Task B runs → Task A resumes
- Total time ≈ longest task (not sum of all tasks)
```
Task A: [====]  (4 seconds, but pauses at I/O)
Task B: [==]    (2 seconds)
Task C: [====]  (4 seconds, but pauses at I/O)

Timeline (concurrent):
[A running] [B running] [A resumed] [C running] [B done] [C done] [A done]
Total: ~4 seconds (A and C's pauses overlap with B)
```
Parallelism (What True Multi-Core Does)
Parallelism = multiple tasks running at the exact same time on different CPU cores.
- Multiple CPU cores
- Tasks run simultaneously (true parallel execution)
- Total time ≈ longest task (for CPU-bound), but with multiple cores
```
Core 1: [Task A running]
Core 2: [Task B running]
Core 3: [Task C running]
Total: ~4 seconds (3 tasks on 3 cores, A is longest)
```
Asyncio does concurrency, not parallelism. It's perfect for I/O-bound work, insufficient for CPU-bound work.
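You can verify the "single thread" claim directly. This sketch (the `which_thread` helper is made up for illustration) records which OS thread each coroutine runs on:

```python
import asyncio
import threading

async def which_thread(name: str) -> int:
    """Report which OS thread this coroutine runs on."""
    await asyncio.sleep(0.05)  # a pause point, so the tasks interleave
    tid = threading.get_ident()
    print(f"{name} ran on thread {tid}")
    return tid

async def main() -> None:
    tids = await asyncio.gather(*(which_thread(f"task{i}") for i in range(3)))
    # Concurrency, not parallelism: every coroutine ran on the same thread
    print("distinct threads used:", len(set(tids)))  # 1

asyncio.run(main())
```

All three tasks overlap in time, yet only one thread is ever involved: the event loop interleaves them on a single core.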
Example 3: I/O-Bound Task (Simulated Network)
Here's a realistic async pattern—simulating network latency:
```python
import asyncio
from typing import Any

async def fetch_user_from_api(user_id: int, delay: float = 1.0) -> dict[str, Any]:
    """
    Simulate fetching a user from an API.

    In real code, this would use httpx.AsyncClient or aiohttp.
    Here, we simulate the network delay.
    """
    print(f"Fetching user {user_id}...")
    await asyncio.sleep(delay)  # Simulate network latency

    # Simulate occasional API failures
    if user_id == 2:
        raise ValueError(f"User {user_id} not found")

    return {
        "id": user_id,
        "name": f"User {user_id}",
        "email": f"user{user_id}@example.com",
    }

async def fetch_all_users(user_ids: list[int]) -> list[dict[str, Any]]:
    """Fetch multiple users concurrently."""
    tasks = [fetch_user_from_api(uid) for uid in user_ids]

    # Run all tasks; gather() continues even if some fail
    results = await asyncio.gather(*tasks, return_exceptions=True)

    # Separate successful results from errors
    users = []
    errors = []
    for result in results:
        if isinstance(result, Exception):
            errors.append(str(result))
        else:
            users.append(result)

    print(f"Fetched {len(users)} users, {len(errors)} errors")
    return users

# Run it
users = asyncio.run(fetch_all_users([1, 2, 3, 4, 5]))
print(users)
```
Key points:
- `await asyncio.sleep()` simulates network latency (real code would use `await http_client.get()`)
- `asyncio.gather(*tasks, return_exceptions=True)` runs all concurrently and collects results/errors
- Despite 5 users with 1s delay each, total time is ~1s (all overlap)
- Error handling is explicit—one failure doesn't crash the others
Example 4: CPU-Bound Task (Why Asyncio Doesn't Help)
Here's the crucial lesson: CPU-bound work doesn't benefit from asyncio.
```python
import asyncio
import time

def cpu_intensive_calculation(n: int) -> int:
    """Heavy calculation—runs synchronously."""
    result = 0
    for i in range(n):
        result += i ** 2
    return result

async def analyze_data_async(dataset_ids: list[int]) -> list[int]:
    """
    This looks async, but doesn't help!
    The CPU work doesn't yield control to other tasks.
    It's still sequential in terms of wall-clock time.
    """
    results = []
    for dataset_id in dataset_ids:
        # This blocks! No I/O, so await doesn't help
        result = cpu_intensive_calculation(100_000_000)
        results.append(result)
    return results

# Benchmark the "concurrent" version (which isn't really concurrent)
async def benchmark() -> None:
    start = time.perf_counter()
    results = await analyze_data_async([1, 2, 3, 4])
    elapsed = time.perf_counter() - start
    print(f"'Async' CPU-bound (still sequential): {elapsed:.2f}s")

asyncio.run(benchmark())
```
Output: the four calculations run one after another, so the total is roughly four times the cost of a single calculation. The exact time depends on your machine.
Compare to asyncio.gather() with the same CPU work:
```python
async def analyze_single(dataset_id: int) -> int:
    """Still synchronous CPU work, just wrapped in async."""
    return cpu_intensive_calculation(100_000_000)

async def benchmark_gather() -> None:
    start = time.perf_counter()
    # This doesn't help! CPU work still runs sequentially
    results = await asyncio.gather(
        analyze_single(1),
        analyze_single(2),
        analyze_single(3),
        analyze_single(4),
    )
    elapsed = time.perf_counter() - start
    print(f"asyncio.gather() with CPU work (still sequential): {elapsed:.2f}s")

asyncio.run(benchmark_gather())
```
Output: essentially the same total time as before. `gather()` schedules the coroutines, but each one monopolizes the CPU until it finishes, so nothing overlaps.
No difference! That's because asyncio has no pause points in CPU-bound code. It's all blocking.
Forward Reference: In Lesson 4, we'll use InterpreterPoolExecutor to truly parallelize CPU-bound work.
Common Mistakes: What NOT to Do
Mistake 1: Forgetting await
```python
# ❌ Wrong: Doesn't run the coroutine
async def main() -> None:
    fetch_user(1)  # Missing await!

# ✅ Right: Actually runs it
async def main() -> None:
    user = await fetch_user(1)
    print(user)
```
Without await, you create a coroutine object that never runs. The event loop doesn't know about it.
Mistake 2: Nesting asyncio.run() Calls
```python
# ❌ Wrong: asyncio.run() can't be called from inside a running event loop
async def main() -> None:
    result = asyncio.run(fetch_user(1))  # RuntimeError!

# ✅ Right: One asyncio.run() at the top level; use await inside coroutines
async def main() -> None:
    result = await fetch_user(1)

result = asyncio.run(main())
```
Each asyncio.run() creates a new event loop, and a loop can't be started while another is already running in the same thread. Nesting breaks.
Mistake 3: Blocking the Event Loop
```python
import asyncio
import time

async def fetch_and_process() -> None:
    await fetch_api()

    # ❌ Wrong: This blocks the event loop!
    time.sleep(2)  # Other tasks can't run during this

    # ✅ Right: Let other tasks run
    await asyncio.sleep(2)
```
time.sleep() blocks. await asyncio.sleep() yields to the event loop.
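When a blocking call is unavoidable (a sync-only library, say), one standard escape hatch is `asyncio.to_thread()`, which runs the blocking function in a worker thread so the loop stays responsive. A sketch, with `blocking_io` standing in for the call you can't rewrite:

```python
import asyncio
import time

def blocking_io() -> str:
    """Stands in for a sync-only library call you can't rewrite."""
    time.sleep(0.2)
    return "done"

async def main() -> None:
    # The blocking call runs in a worker thread; the event loop stays free,
    # so the concurrent asyncio.sleep still completes on schedule
    result, _ = await asyncio.gather(
        asyncio.to_thread(blocking_io),
        asyncio.sleep(0.1),
    )
    print(result)  # done

asyncio.run(main())
```

This hands I/O-style blocking to a thread; it's not a fix for CPU-bound work, which we'll address properly in Lesson 4.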
✨ Teaching Tip
Use Claude Code to explore these mistakes interactively: "Generate async code that forgets await, then show me the error. Explain what the error message means and how to fix it."
This hands-on exploration builds deeper understanding than just reading examples.
Comparing Sync vs Async: Real Timing
Let's measure the real difference:
```python
import asyncio
import time
from typing import Any

# Simulated API call (1 second each)
async def async_api_call(call_id: int) -> dict[str, Any]:
    await asyncio.sleep(1)
    return {"id": call_id, "data": f"result-{call_id}"}

def sync_api_call(call_id: int) -> dict[str, Any]:
    time.sleep(1)
    return {"id": call_id, "data": f"result-{call_id}"}

# Synchronous: Sequential execution
def sync_version() -> None:
    start = time.perf_counter()
    for i in range(5):
        sync_api_call(i)  # 1s + 1s + 1s + 1s + 1s = 5s
    elapsed = time.perf_counter() - start
    print(f"Sync version (5 calls × 1s each): {elapsed:.2f}s")

# Asynchronous: Concurrent execution
async def async_version() -> None:
    start = time.perf_counter()
    # All 5 calls happen concurrently
    await asyncio.gather(
        async_api_call(0),
        async_api_call(1),
        async_api_call(2),
        async_api_call(3),
        async_api_call(4),
    )
    elapsed = time.perf_counter() - start
    print(f"Async version (5 calls, concurrent): {elapsed:.2f}s")

# Run both
sync_version()
asyncio.run(async_version())
```
Output:
```
Sync version (5 calls × 1s each): 5.01s
Async version (5 calls, concurrent): 1.00s
```
5x speedup from concurrency! This is why asyncio matters for I/O-bound applications.
Try With AI
Use your AI partner to deepen understanding. Choose your AI tool (Claude Code, Gemini CLI, or ChatGPT web):
Prompt 1: Understand Level
Ask your AI:
"Explain what happens when I await asyncio.sleep(2) in an async function. Why doesn't it block other tasks? What is the event loop doing?"
Expected Outcome: You'll understand that await is a pause point where the event loop can switch to other tasks. Spend 3-5 minutes discussing this with AI.
Prompt 2: Apply Level
Tell your AI:
"Generate async code that fetches 3 different APIs in parallel using asyncio.run() and asyncio.gather(). Each API should simulate a 2-second delay with asyncio.sleep(). Make sure to include type hints."
Expected Outcome: Run the code and measure the total time. It should complete in ~2 seconds, not 6 seconds. Verify concurrency actually works.
Prompt 3: Analyze Level
Ask your AI:
"Show me both a synchronous and asynchronous version of fetching 5 APIs, each taking 1 second. Measure the time for both on my machine. Why is the async version faster? Is it parallelism or concurrency?"
Expected Outcome: You'll measure the difference (sync: 5 seconds, async: 1 second) and understand it's concurrency (single-threaded task switching), not parallelism (multiple cores).
Prompt 4: Evaluate Level
Give your AI a task:
"Here's a task: 'Read 100 files from disk, each taking 10ms.' Is this I/O-bound or CPU-bound? Would asyncio help here? Why or why not? What about 'Calculate the sum of 100 million numbers'—is that I/O or CPU-bound?"
Expected Outcome: You'll correctly classify tasks and justify your reasoning. Understand when asyncio is the right tool and when it's not.
Safety Note: The code examples use simulated I/O (asyncio.sleep()). In production, you'd use real async libraries like httpx (HTTP), aiofiles (file I/O), or asyncpg (PostgreSQL). Always validate AI-generated code—check for security (no hardcoded secrets), proper error handling, and type safety.