CI/CD Concepts: The Automated Pipeline

You've been managing deployments manually: commit code, build a container, test locally, push to registry, run kubectl commands. This works for one person learning Kubernetes. At scale, manual deployments become a bottleneck. A single typo in a deployment command can take down production. Without automated testing, bugs make it to users. Without audit trails, nobody knows who deployed what, when, or why.

CI/CD (Continuous Integration/Continuous Deployment) solves this by automating the entire path from code commit to running service. Every change automatically triggers a pipeline: tests validate quality, builds produce artifacts, registries store versions, and deployments propagate to clusters. The pipeline becomes your quality guarantee.

This lesson teaches the conceptual foundations of CI/CD pipelines—the stages, the flow, the artifacts, and why each stage matters. You'll understand pipeline design before GitHub Actions syntax, and deployment principles before ArgoCD configuration.

The Five Stages of a CI/CD Pipeline

Every production pipeline has the same conceptual structure, regardless of tools. Think of it like assembly line manufacturing:

Code Push  →  Build     →  Test      →  Push      →  Deploy
(trigger)     (compile)    (execute)    (publish)    (execute)

Let's walk through each stage:

Stage 1: Trigger (Someone Pushed Code)

What happens: A developer commits to the main branch. A webhook fires. The pipeline wakes up.

Why this matters: The trigger determines when automation starts. Without it, deployments stay manual. Common triggers:

  • Push to main: "Deploy on every commit" (high-cadence teams)
  • Pull request opened: "Run tests before merge" (safety gate)
  • Tag created: "Deploy only on release tags" (careful, controlled)
  • Manual dispatch: "Allow human to start pipeline" (on-demand, special cases)
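
In a GitHub Actions workflow (the syntax is covered properly in a later lesson), these four triggers map onto the workflow's "on:" block. A minimal preview sketch, purely illustrative:

on:
  push:
    branches: [main]        # deploy on every commit to main
    tags: ["v*"]            # run only when a release tag is created
  pull_request:             # run tests before merge (the safety gate)
  workflow_dispatch:        # let a human start the pipeline on demand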

Example trigger context: Your FastAPI agent lives in panaversity-agents/agent-task-service on GitHub. When you push to main, GitHub Actions receives a webhook event. The pipeline is triggered.

Artifact at this stage: A Git event (commit hash, branch name, author).

Stage 2: Build (Compile and Package)

What happens: The pipeline checks out your code, compiles/packages it, produces a deliverable. For Python projects, this might be a wheel file or Docker image.

Why this matters: Compilation catches syntax errors early. Packaging ensures deployments are consistent (same code, same dependencies).

Example build context: Your FastAPI agent's Dockerfile pins its dependencies:

FROM python:3.11-slim
WORKDIR /app
COPY pyproject.toml poetry.lock ./
# Install Poetry and the locked dependencies into the image's environment
RUN pip install poetry && poetry config virtualenvs.create false && poetry install --no-root
COPY . .
# Install the project itself
RUN poetry install
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Output:

Successfully built image: agent-task-service:abc123def456

The build stage produces a container image artifact—a complete, self-contained package that runs anywhere (local machine, CI runner, Kubernetes cluster).

Artifact at this stage: A container image (or compiled binary, or wheel file—the thing to deploy).

Stage 3: Test (Validate Quality)

What happens: The pipeline runs automated tests. Unit tests validate individual functions. Integration tests validate components work together. The pipeline only proceeds if tests pass.

Why this matters: Manual testing is unreliable (testers get tired, forget cases). Automated tests catch regressions before they reach users.

Example test context: Your FastAPI agent has tests:

# Assumes the tests use FastAPI's TestClient against the app defined in app/main.py
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_task_creation():
    """Task creation endpoint returns 201 with the created task"""
    response = client.post("/tasks", json={"title": "Fix auth bug", "priority": "high"})
    assert response.status_code == 201
    assert response.json()["id"] is not None

def test_invalid_priority():
    """Invalid priority is rejected"""
    response = client.post("/tasks", json={"title": "Test", "priority": "maybe"})
    assert response.status_code == 400

Output:

tests/test_tasks.py::test_task_creation PASSED
tests/test_tasks.py::test_invalid_priority PASSED
============ 2 passed in 0.45s ============

If ANY test fails, the pipeline stops here. No deployment proceeds with broken code. This is the quality gate—the automated checkpoint that prevents bad code from reaching production.

Artifact at this stage: Test results and coverage reports (proof that code works as expected).

Stage 4: Push (Publish the Artifact)

What happens: The tested, built artifact is published to a registry where it can be deployed from. For container images, this means pushing to Docker Hub, GitHub Container Registry (GHCR), or similar.

Why this matters: Registries are the source of truth for deployments. They version artifacts, store metadata, and make images available to clusters worldwide.

Example push context: After tests pass, the pipeline tags the image with the commit hash and version:

docker tag agent-task-service:latest ghcr.io/panaversity/agent-task-service:1.0.0
docker tag agent-task-service:latest ghcr.io/panaversity/agent-task-service:sha-abc123def456
docker push ghcr.io/panaversity/agent-task-service:1.0.0
docker push ghcr.io/panaversity/agent-task-service:sha-abc123def456

Output:

Pushed: ghcr.io/panaversity/agent-task-service:1.0.0
Pushed: ghcr.io/panaversity/agent-task-service:sha-abc123def456

Now any cluster with registry access can pull and run that image.

Artifact at this stage: Published artifact in a registry (versioned, immutable copy).

Stage 5: Deploy (Run in Production)

What happens: The pipeline instructs Kubernetes (or another orchestration system) to pull the image and run it. In GitOps (which you'll see in later lessons), this means committing a configuration file to Git, and a GitOps controller automatically reconciles the cluster to match.

Why this matters: Deployment automation ensures consistency across environments (dev, staging, and production all follow the same process). With versioned artifacts, you can roll back to a previous version if something breaks.

Example deploy context: A Kubernetes Deployment manifest specifies which image to run:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: agent-task-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: agent-task-service
  template:
    metadata:
      labels:
        app: agent-task-service
    spec:
      containers:
        - name: agent
          image: ghcr.io/panaversity/agent-task-service:1.0.0
          ports:
            - containerPort: 8000
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 10

Output:

deployment.apps/agent-task-service created
service/agent-task-service created
3/3 pods running

The service is live, handling requests.

Artifact at this stage: A running service in production (validated, versioned, auditable).

Continuous Integration vs Continuous Deployment

CI and CD are separate concepts, often confused:

Continuous Integration (CI)

Definition: Automatically integrate code changes into a shared repository, run tests, and report results.

Where it happens: Stages 1-3 (Trigger, Build, Test).

Purpose: Catch integration problems early. If your code works alone but breaks when combined with a teammate's changes, CI detects it immediately.

What "continuous" means: Every commit runs the pipeline, not weekly integration sessions.

Example CI flow:

  1. Your PR is pushed
  2. Tests run automatically
  3. Results appear on the PR within 2 minutes
  4. If tests pass → "Ready to merge"; if tests fail → "Fix these issues before merging"
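
As a preview of later lessons, a minimal CI job for this flow might look like the sketch below (the action versions and commands are illustrative assumptions, not the definitive setup):

name: ci
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4                 # fetch the PR's code
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install poetry && poetry install # install pinned dependencies
      - run: poetry run pytest                    # quality gate: a failure blocks the merge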

Continuous Deployment (CD)

Definition: Automatically deploy validated code to production after successful tests.

Where it happens: Stages 4-5 (Push, Deploy).

Purpose: Get working code to users immediately, without waiting for manual deployment windows.

What "continuous" means: Every merge to main deploys to production (in high-trust teams). More conservative teams deploy manually but use CD infrastructure (push a button instead of running commands).

Example CD flow:

  1. Tests pass on main
  2. Image published to registry
  3. Deployment manifest updated
  4. Kubernetes sees new image
  5. Pods restart with new code
  6. Service updated (users see new version)
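
A matching CD sketch, assuming GHCR and the image name used earlier (the manifest path and the commit-back step are hypothetical placeholders; the GitOps mechanics come later with ArgoCD):

name: cd
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      packages: write                             # allow pushing to GHCR with GITHUB_TOKEN
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/panaversity/agent-task-service:sha-${{ github.sha }}
      # Deploy (GitOps style): update the image tag in a hypothetical k8s/deployment.yaml,
      # commit the change, and let a controller such as ArgoCD reconcile the cluster.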

CI vs CD: The Distinction

Aspect     | CI                          | CD
Stages     | Trigger, Build, Test        | Push, Deploy
Focus      | "Does this code work?"      | "Is this code in production?"
Frequency  | Per commit                  | Per successful CI
Risk       | Low (nothing deployed yet)  | Higher (affects users)
Reversal   | Delete commit (rare)        | Rollback to previous version (common)

In practice: CI is automatic (tests always run). CD can be automatic (deploy immediately) or gated (require manual approval before deploying).
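
A gated CD can be sketched with a GitHub Actions environment: if the "production" environment is configured with required reviewers (a repository setting, not part of the file), the job pauses until a human approves it. The step shown is only a placeholder:

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # waits here for manual approval if reviewers are required
    steps:
      - run: echo "run the real deploy once approved"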

Artifacts and Their Lifecycle

A CI/CD pipeline produces and consumes artifacts—concrete outputs that move through stages.

What Is an Artifact?

An artifact is anything produced during the pipeline that becomes input for the next stage:

  • Code (Git commit): Source at pipeline start
  • Compiled output (Docker image): Built by Build stage
  • Test reports (JSON files): Produced by Test stage
  • Published image (in registry): Output of Push stage
  • Running service (in Kubernetes): Result of Deploy stage

The Artifact Lifecycle

Source Code (Git)
↓ [Build stage reads]
Docker Image (local)
↓ [Test stage uses]
Test Results (reports)
↓ [Push stage reads]
Published Image (registry)
↓ [Deploy stage pulls]
Running Pods (cluster)

Each stage reads artifacts from the previous stage and produces new artifacts.

Versioning Artifacts

Proper versioning prevents confusion: "Which code is running in production?"

Example: Your FastAPI agent on commit abc123:

  • Commit hash: abc123def456789... (unique, 40-char identifier)
  • Short hash: abc123 (first 7 chars, human-readable)
  • Semantic version: 1.2.3 (meaningful to users: major.minor.patch)
  • Registry tags: All of the above
# Three ways to refer to the same image
docker pull ghcr.io/panaversity/agent-task-service:abc123
docker pull ghcr.io/panaversity/agent-task-service:1.2.3
docker pull ghcr.io/panaversity/agent-task-service:latest # Points to 1.2.3

Traceability: From any running pod, you can ask "What commit is this?" The image tag tells you.

Quality Gates: Blocking Failures Before Production

A quality gate is a decision point in the pipeline: "Is this artifact good enough to proceed?"

Why Quality Gates Matter

Without gates, broken code reaches production:

Without gates: Commit → Build → (no test) → Push → Deploy → 🔥 Service down

With gates: Commit → Build → Test fails → Pipeline stops → Bug fixed → Retry

The Test Quality Gate

The most common gate is test execution. When tests fail, the pipeline stops:

tests/test_auth.py::test_invalid_token FAILED
AssertionError: Expected 401, got 200

Pipeline Status: FAILED ❌
Next stages blocked until this is fixed

No deployment proceeds with broken code.

Other Quality Gates

Beyond tests, pipelines can have:

  • Code quality gates: SonarQube checks code complexity, duplication
  • Security gates: SAST (static application security testing) scans for vulnerabilities
  • Coverage gates: "Tests must cover 80% of code"
  • Performance gates: "This change can't make the app 10% slower"
  • Manual approval gates: "Require human review before deploying"

Each gate is a pass/fail checkpoint. Fail any gate, and the pipeline stops.
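
As a concrete example, a coverage gate can be a single pipeline step that fails the job, and therefore the whole pipeline, when coverage drops below the threshold. A sketch assuming the pytest-cov plugin is installed:

- name: Coverage gate
  run: poetry run pytest --cov=app --cov-fail-under=80   # fails if coverage is below 80%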

Why Automation Matters: Five Benefits

1. Speed

Manual deployments take hours (build locally, test locally, coordinate with ops team, run deployment commands). Automated pipelines complete in minutes.

Example: Your agent changes take 30 minutes to deploy manually. Automated: 5 minutes from commit to running.

2. Consistency

Humans forget steps. An ops team runs a deployment differently on a Monday than on a Thursday. Automated pipelines run identically every time.

Example: Build always installs dependencies the same way. Deploy always waits for health checks. No variation, no surprises.

3. Auditability

Every deployment is logged: who committed, what changed, test results, deployment time, and the version deployed. This is essential for compliance (healthcare, finance, security).

Example: Audit log shows: "User alice deployed version 1.2.3 at 2025-12-23 10:42 UTC. All tests passed. Service health: green."

4. Rollback Capability

With versioned artifacts, rolling back is simple: point to the previous version.

Example: Version 1.2.4 has a bug. Roll back to 1.2.3 in seconds instead of spending hours on manual recovery.
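
In the GitOps style shown in later lessons, that rollback is just reverting the image tag in the Deployment manifest (only the changed fragment is sketched here):

spec:
  template:
    spec:
      containers:
        - name: agent
          image: ghcr.io/panaversity/agent-task-service:1.2.3   # was :1.2.4 (the buggy release)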

5. Safety Through Automation

The pipeline catches errors before humans make them. Syntax errors, test failures, security vulnerabilities—all caught automatically.

Example: A developer accidentally commits database credentials. CI security scanning detects this and stops the pipeline before the change ever reaches the registry.

The Value Proposition

Manual deployments: Slow, error-prone, hard to audit, hard to roll back.

CI/CD pipelines: Fast, reliable, auditable, reversible.

This is why every production system has CI/CD. It's not optional at scale—it's how you reliably ship code.

Try With AI

Ask Claude: "I have a Python FastAPI application with unit tests stored in a GitHub repository. What would a CI/CD pipeline look like for this project? Walk me through the stages—what happens at each one?"

As Claude responds, evaluate:

  • Did it identify all five stages (Trigger, Build, Test, Push, Deploy)?
  • Did it explain why each stage matters?
  • Did it suggest concrete tools (GitHub Actions for CI, Docker for images, registry for push)?
  • For your own project, what would you add to this pipeline? Security scanning? Performance tests? Code coverage requirements?

Spend 10 minutes sketching out your ideal pipeline for your own project:

  1. Trigger: When should the pipeline start? (Every commit? Every tag? Manual?)
  2. Build: What artifact should be produced? (Container image? Compiled binary?)
  3. Test: What quality gates matter? (Unit tests? Integration tests? Security scans?)
  4. Push: Where should artifacts be published? (Docker Hub? GHCR? Private registry?)
  5. Deploy: Where should code run? (Kubernetes? App Engine? Lambda?)

This is the foundation. Later lessons show how to implement these stages with GitHub Actions and ArgoCD.