Introduction
While Pods are the fundamental units in Kubernetes, you’ll rarely create them directly in production. Instead, you’ll use Deployments, a higher-level abstraction that provides declarative updates, scaling, and self-healing for your applications.
In this comprehensive guide, I’ll walk you through everything you need to know about Kubernetes Deployments, from basic creation to advanced rollout strategies and rollback procedures. This is essential knowledge for anyone managing production Kubernetes workloads.
What is a Kubernetes Deployment?
A Deployment is a Kubernetes object that manages a set of identical Pods, ensuring that the specified number of replicas are running at all times. Think of it as a supervisor that:
- Maintains desired state: Automatically replaces failed Pods
- Enables scaling: Easily scale up or down
- Manages updates: Roll out new versions with zero downtime
- Provides rollback: Revert to previous versions if issues arise
- Tracks history: Maintains revision history for auditing
Key Benefits:
- Self-healing: Automatically restarts failed containers
- Declarative updates: Define desired state, Kubernetes handles the rest
- Rolling updates: Update applications without downtime
- Rollback capability: Quick recovery from bad deployments
- Scalability: Horizontal scaling with a single command
Prerequisites
- kubectl installed and configured
- Access to a Kubernetes cluster (minikube, cloud provider, or on-premises)
- Basic understanding of Pods (see my previous post on Kubernetes Pods)
- Familiarity with YAML syntax
Deployment Architecture
Understanding the relationship between Deployments, ReplicaSets, and Pods:
Deployment
├── ReplicaSet (current version)
│   ├── Pod 1
│   ├── Pod 2
│   └── Pod 3
└── ReplicaSet (previous version - scaled to 0)
    └── (no pods)
How it works:
- You create a Deployment
- Deployment creates a ReplicaSet
- ReplicaSet creates and manages Pods
- When you update the Deployment, a new ReplicaSet is created
- Old ReplicaSet is scaled down, new one is scaled up
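The Deployment-to-ReplicaSet relationship above can be illustrated with a small sketch: a new ReplicaSet appears whenever the Pod template changes, because the ReplicaSet's name embeds a hash of that template. (Kubernetes computes its own pod-template-hash; the hashing below is purely illustrative.)

```python
import hashlib

def replicaset_name(deployment, pod_template):
    """Illustrative only: derive a ReplicaSet name from a hash of the
    Pod template, mimicking the pod-template-hash suffix Kubernetes uses."""
    digest = hashlib.sha1(repr(sorted(pod_template.items())).encode()).hexdigest()
    return f"{deployment}-{digest[:10]}"

rs_v1 = replicaset_name("nginx-deployment", {"image": "nginx:1.21-alpine"})
rs_v2 = replicaset_name("nginx-deployment", {"image": "nginx:1.22-alpine"})
print(rs_v1 != rs_v2)  # a changed template maps to a different ReplicaSet
```

This is why `kubectl get replicasets` shows one entry per Deployment revision, with older entries scaled to 0.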
Creating Your First Deployment
Method 1: Imperative (Quick Testing)
kubectl create deployment firstdeployment --image=nginx:latest --replicas=3
Verify creation:
kubectl get deployments
kubectl get pods
kubectl get replicasets
Method 2: Declarative (Production Recommended)
Create deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    environment: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "200m"
Deploy:
kubectl apply -f deployment.yaml
Understanding the YAML:
- replicas: Number of Pod copies to maintain
- selector.matchLabels: How Deployment finds its Pods
- template: Pod template (same as standalone Pod definition)
- template.metadata.labels: Must match selector.matchLabels
Scaling Deployments
One of the most powerful features of Deployments is easy scaling.
Scale Up
Imperative:
kubectl scale deployment nginx-deployment --replicas=5
Declarative:
Edit deployment.yaml, change replicas: 3 to replicas: 5, then:
kubectl apply -f deployment.yaml
Watch scaling in action:
kubectl get pods -w
Scale Down
kubectl scale deployment nginx-deployment --replicas=2
What happens during scaling:
- Scale Up: New Pods are created immediately
- Scale Down: Excess Pods are terminated gracefully
- Self-healing: If a Pod crashes, it’s automatically replaced
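The self-healing behavior boils down to a reconciliation loop: compare the desired replica count with the Pods actually running and act on the difference. A minimal sketch (hypothetical pod names):

```python
def reconcile(desired, running):
    """Sketch of the ReplicaSet control loop: diff desired replica count
    against running Pods and return the actions needed to converge."""
    if len(running) < desired:
        # scale up (or replace crashed Pods): create the missing ones
        return [("create", f"pod-{i}") for i in range(len(running), desired)]
    # scale down: terminate the excess Pods
    return [("delete", name) for name in running[desired:]]

print(reconcile(5, ["pod-0", "pod-1", "pod-2"]))  # scale up: create 2 Pods
print(reconcile(2, ["pod-0", "pod-1", "pod-2"]))  # scale down: delete 1 Pod
```

A crashed Pod simply shows up as a missing entry on the next loop, which is why replacement is automatic.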
Auto-scaling (HPA - Horizontal Pod Autoscaler)
For production workloads, consider auto-scaling based on metrics:
kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=80
This automatically scales between 3 and 10 replicas, targeting 80% average CPU utilization.
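Under the hood, the HPA computes a desired replica count as ceil(currentReplicas × currentMetric / targetMetric), clamped to your min/max bounds. A sketch of that arithmetic:

```python
import math

def hpa_desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct,
                         min_replicas=3, max_replicas=10):
    """Sketch of the HPA scaling formula:
    ceil(current * currentMetric / targetMetric), clamped to [min, max]."""
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))

print(hpa_desired_replicas(3, 160, 80))   # load doubled -> 6 replicas
print(hpa_desired_replicas(6, 40, 80))    # load halved -> back to 3
print(hpa_desired_replicas(10, 200, 80))  # clamped at max=10
```

The real controller adds tolerances and stabilization windows on top of this, but the core formula is the same.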
Deployment Update Strategies
Kubernetes supports two update strategies:
1. Recreate Strategy
Characteristics:
- Terminates all old Pods first
- Then creates new Pods
- Causes downtime
- Use when: Different versions can’t run simultaneously
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-deployment
spec:
  replicas: 5
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Update process:
kubectl apply -f recreate-deployment.yaml
kubectl set image deployment/recreate-deployment nginx=httpd:alpine
What happens:
- All 5 Pods terminated
- Brief downtime
- 5 new Pods created with new image
2. RollingUpdate Strategy (Default & Recommended)
Characteristics:
- Updates Pods gradually
- Zero downtime
- Old and new versions run simultaneously during update
- Default strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-deployment
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2  # Max 2 Pods can be unavailable during update
      maxSurge: 2        # Max 2 extra Pods can exist during update
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.21-alpine
        ports:
        - containerPort: 80
Understanding RollingUpdate parameters:
- maxUnavailable: maximum Pods that can be unavailable during the update
  - A value of 2 means at least 8 of the 10 Pods must stay running
  - Can be an absolute number or a percentage (e.g., "25%")
- maxSurge: maximum extra Pods that can exist during the update
  - A value of 2 means at most 12 Pods (10 + 2) can exist temporarily
  - Can be an absolute number or a percentage (e.g., "25%")
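When these values are given as percentages, Kubernetes converts them to absolute Pod counts: maxUnavailable rounds down and maxSurge rounds up. A sketch of the conversion:

```python
import math

def rolling_update_bounds(replicas, max_unavailable_pct, max_surge_pct):
    """Convert percentage-based RollingUpdate settings to absolute bounds:
    maxUnavailable rounds DOWN, maxSurge rounds UP (per the Deployment docs).
    Returns (minimum ready Pods, maximum total Pods) during the update."""
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    surge = math.ceil(replicas * max_surge_pct / 100)
    return replicas - unavailable, replicas + surge

print(rolling_update_bounds(10, 25, 25))  # -> (8, 13)
```

With 10 replicas and 25% for both, 2.5 rounds down to 2 unavailable (at least 8 ready) and up to 3 surge (at most 13 total).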
Update process:
kubectl apply -f rolling-deployment.yaml
kubectl set image deployment/rolling-deployment nginx=nginx:1.22-alpine --record
What happens:
- Creates 2 new Pods (maxSurge=2)
- Waits for them to be ready
- Terminates 2 old Pods (maxUnavailable=2)
- Repeats until all Pods updated
- Zero downtime throughout
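The wave-by-wave progression above can be simulated with a short sketch. The pacing here is hypothetical; the real controller advances on Pod readiness rather than fixed rounds:

```python
def rolling_update(replicas=10, max_surge=2, max_unavailable=2):
    """Toy simulation of a RollingUpdate: surge new Pods within the
    replicas + maxSurge ceiling, then retire old Pods while keeping
    at least replicas - maxUnavailable Pods around."""
    old, new, steps = replicas, 0, []
    while old > 0:
        # create new Pods, never exceeding replicas + max_surge in total
        create = min(replicas + max_surge - (old + new), replicas - new)
        new += create
        # retire old Pods, keeping at least replicas - max_unavailable
        retire = min(max_unavailable, old,
                     (old + new) - (replicas - max_unavailable))
        old -= retire
        steps.append((old, new))
    return steps

for old, new in rolling_update():
    print(f"old={old:2d} new={new:2d} total={old + new}")
```

Each round swaps two Pods, so ten replicas converge in five waves with availability never dropping below eight.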
Updating Deployments
Method 1: Imperative Update (Quick Changes)
Update image:
kubectl set image deployment/nginx-deployment nginx=nginx:1.22-alpine --record
Update resources:
kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=256Mi
Important: the --record flag saves the command in the revision history. Note that --record is deprecated in recent Kubernetes versions; on newer clusters, set the kubernetes.io/change-cause annotation on the Deployment instead.
Method 2: Edit Live Deployment
kubectl edit deployment nginx-deployment --record
This opens your default editor. Make changes, save, and exit.
Method 3: Declarative Update (Production Best Practice)
- Edit deployment.yaml
- Apply changes:
kubectl apply -f deployment.yaml --record
Monitoring Rollout Status
Watch rollout progress:
kubectl rollout status deployment/nginx-deployment -w
Check rollout history:
kubectl rollout history deployment/nginx-deployment
View specific revision:
kubectl rollout history deployment/nginx-deployment --revision=2
Pause rollout (for testing):
kubectl rollout pause deployment/nginx-deployment
Resume rollout:
kubectl rollout resume deployment/nginx-deployment
Rollback Strategies
One of the most powerful features of Deployments is the ability to roll back to previous versions.
Quick Rollback (Previous Version)
kubectl rollout undo deployment/nginx-deployment
This reverts to the immediately previous revision.
Rollback to Specific Revision
View history:
kubectl rollout history deployment/nginx-deployment
Output:
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=deployment.yaml --record=true
2         kubectl set image deployment/nginx-deployment nginx=nginx:1.22-alpine --record=true
3         kubectl set image deployment/nginx-deployment nginx=httpd:alpine --record=true
Rollback to revision 1:
kubectl rollout undo deployment/nginx-deployment --to-revision=1
Verify rollback:
kubectl rollout status deployment/nginx-deployment
kubectl describe deployment nginx-deployment
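One subtlety worth sketching: undo does not restore the old revision number. It re-applies the old Pod template as a new revision, so rolling back to revision 1 in the history above would produce revision 4. A toy model of that bookkeeping (hypothetical history entries):

```python
def rollout_undo(history, to_revision=None):
    """Sketch of Deployment revision bookkeeping: undo re-applies an old
    template as a NEW revision; the rolled-back-to entry is renumbered."""
    target = history[-2] if to_revision is None else next(
        h for h in history if h["revision"] == to_revision)
    new = {"revision": history[-1]["revision"] + 1, "image": target["image"]}
    return [h for h in history if h is not target] + [new]

history = [
    {"revision": 1, "image": "nginx:1.21-alpine"},
    {"revision": 2, "image": "nginx:1.22-alpine"},
    {"revision": 3, "image": "httpd:alpine"},
]
print(rollout_undo(history, to_revision=1)[-1])  # revision 4, nginx:1.21
```

This is why revision 1 disappears from `kubectl rollout history` after you roll back to it.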
Real-World Production Example
Here’s a production-ready deployment with best practices:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
    version: v1.0.0
    environment: production
    team: backend
spec:
  replicas: 5
  revisionHistoryLimit: 10  # Keep last 10 revisions
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most 1 of 5 Pods (20%) unavailable
      maxSurge: 1        # at most 1 extra Pod (20%) during updates
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
        version: v1.0.0
    spec:
      containers:
      - name: web-app
        image: nginx:1.21-alpine
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        livenessProbe:
          httpGet:
            path: /health  # your app must serve this endpoint (stock nginx does not)
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ready  # your app must serve this endpoint (stock nginx does not)
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 3
        env:
        - name: ENVIRONMENT
          value: "production"
        - name: LOG_LEVEL
          value: "info"
Deploy:
kubectl apply -f web-app-deployment.yaml --record
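The two probes in the manifest above serve different purposes, which this sketch summarizes: a failing liveness probe triggers a container restart, while a failing readiness probe only removes the Pod from Service endpoints until it recovers.

```python
def probe_outcome(kind, consecutive_failures, failure_threshold=3):
    """Sketch of probe semantics: liveness failures restart the container;
    readiness failures only pull the Pod out of Service endpoints."""
    if consecutive_failures < failure_threshold:
        return "no action"
    return "restart container" if kind == "liveness" else "remove from endpoints"

print(probe_outcome("liveness", 3))   # restart container
print(probe_outcome("readiness", 3))  # remove from endpoints
print(probe_outcome("readiness", 1))  # below failureThreshold: no action
```

Getting these mixed up is a common source of restart loops: use readiness for temporary overload, liveness only for genuinely wedged processes.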
Deployment Management Commands
Essential commands:
# Create deployment
kubectl create deployment myapp --image=nginx:latest --replicas=3
# Get deployments
kubectl get deployments
kubectl get deploy -o wide
# Describe deployment
kubectl describe deployment myapp
# Scale deployment
kubectl scale deployment myapp --replicas=5
# Update image
kubectl set image deployment/myapp nginx=nginx:1.22-alpine --record
# Edit deployment
kubectl edit deployment myapp --record
# Rollout status
kubectl rollout status deployment/myapp
# Rollout history
kubectl rollout history deployment/myapp
# Rollback
kubectl rollout undo deployment/myapp
kubectl rollout undo deployment/myapp --to-revision=2
# Pause/Resume
kubectl rollout pause deployment/myapp
kubectl rollout resume deployment/myapp
# Delete deployment
kubectl delete deployment myapp
Troubleshooting Deployments
Issue 1: Pods Not Starting
Symptoms:
kubectl get pods
# Shows: ImagePullBackOff, CrashLoopBackOff, or Pending
Diagnosis:
kubectl describe deployment myapp
kubectl describe pod <pod-name>
kubectl logs <pod-name>
Common causes:
- Wrong image name/tag
- Insufficient resources
- Failed health checks
- Configuration errors
Issue 2: Rollout Stuck
Symptoms:
kubectl rollout status deployment/myapp
# Shows: Waiting for deployment to finish...
Diagnosis:
kubectl get pods
kubectl describe deployment myapp
kubectl rollout history deployment/myapp
Solutions:
# Check if paused
kubectl rollout resume deployment/myapp
# Rollback if needed
kubectl rollout undo deployment/myapp
Issue 3: Old Pods Not Terminating
Diagnosis:
kubectl get replicasets
kubectl describe replicaset <old-rs-name>
Solution:
# Force delete old replicaset
kubectl delete replicaset <old-rs-name>
Best Practices for Production
1. Always Record the Change Cause
kubectl apply -f deployment.yaml --record
kubectl set image deployment/myapp nginx=nginx:1.22 --record
This maintains change history for auditing and rollbacks. Since --record is deprecated in recent Kubernetes versions, prefer setting the annotation directly:
kubectl annotate deployment/myapp kubernetes.io/change-cause="update image to nginx:1.22"
2. Set Resource Limits
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
3. Implement Health Checks
livenessProbe:
  httpGet:
    path: /health
    port: 80
readinessProbe:
  httpGet:
    path: /ready
    port: 80
4. Use Meaningful Labels
metadata:
  labels:
    app: myapp
    version: v1.2.0
    environment: production
    team: backend
5. Set Revision History Limit
spec:
  revisionHistoryLimit: 10  # Keep last 10 revisions
6. Configure Appropriate Update Strategy
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 25%
7. Use Namespaces
kubectl create namespace production
kubectl apply -f deployment.yaml -n production
8. Version Control Your YAML
- Store all deployment files in Git
- Use GitOps tools (ArgoCD, Flux)
- Implement CI/CD pipelines
Advanced Deployment Patterns
Blue-Green Deployment
Maintain two identical environments:
# Blue deployment (current)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: myapp
        image: myapp:v1.0
---
# Green deployment (new)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
      - name: myapp
        image: myapp:v2.0
Switch traffic by updating the Service's selector from version: blue to version: green.
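The selector switch works because a Service only forwards to Pods whose labels match every key in its selector. A sketch of that matching logic (hypothetical pod data):

```python
def route(service_selector, pods):
    """Sketch of Service routing: a Service forwards to Pods whose labels
    match EVERY key in its selector, so flipping 'version' switches traffic."""
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in service_selector.items())]

pods = [
    {"name": "blue-1",  "labels": {"app": "myapp", "version": "blue"}},
    {"name": "green-1", "labels": {"app": "myapp", "version": "green"}},
]
print(route({"app": "myapp", "version": "blue"}, pods))   # -> ['blue-1']
print(route({"app": "myapp", "version": "green"}, pods))  # -> ['green-1']
```

Dropping the version key from the selector would send traffic to both colors at once, which is exactly the canary setup below.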
Canary Deployment
Gradually roll out to subset of users:
# Stable deployment (90%)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
      - name: myapp
        image: myapp:v1.0
---
# Canary deployment (10%)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
      - name: myapp
        image: myapp:v2.0
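With both Deployments behind a single Service (selecting only app: myapp), traffic splits roughly in proportion to replica counts, since kube-proxy balances across all ready endpoints:

```python
def canary_share(stable_replicas, canary_replicas):
    """Sketch of the approximate traffic split when stable and canary Pods
    sit behind one Service: load spreads roughly evenly across ready Pods."""
    return canary_replicas / (stable_replicas + canary_replicas)

print(canary_share(9, 1))  # roughly 10% of requests hit the canary
```

To shift more traffic to the canary, scale its Deployment up (or the stable one down); finer-grained splits need an ingress controller or service mesh.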
Deployment vs Other Controllers
When to use Deployment:
- Stateless applications
- Web servers, APIs
- Microservices
- Applications that can be updated with rolling updates
When NOT to use Deployment:
- Stateful applications → Use StatefulSet
- Node-level services → Use DaemonSet
- Batch jobs → Use Job or CronJob
Practical Exercise
Let’s create a complete deployment workflow:
1. Create initial deployment:
cat <<EOF | kubectl apply -f - --record
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exercise-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: exercise
  template:
    metadata:
      labels:
        app: exercise
    spec:
      containers:
      - name: nginx
        image: nginx:1.21-alpine
        ports:
        - containerPort: 80
EOF
2. Scale up:
kubectl scale deployment exercise-app --replicas=5
kubectl get pods -w
3. Update image:
kubectl set image deployment/exercise-app nginx=nginx:1.22-alpine --record
kubectl rollout status deployment/exercise-app
4. Check history:
kubectl rollout history deployment/exercise-app
5. Rollback:
kubectl rollout undo deployment/exercise-app
kubectl rollout status deployment/exercise-app
6. Cleanup:
kubectl delete deployment exercise-app
Conclusion
Deployments are the workhorses of Kubernetes, providing robust, production-ready application management. Key takeaways:
- Use Deployments, not bare Pods in production
- RollingUpdate strategy provides zero-downtime updates
- Always record change causes (--record or the kubernetes.io/change-cause annotation) to maintain revision history
- Implement health checks for reliable deployments
- Set resource limits to prevent resource exhaustion
- Test rollbacks in staging before production
- Use declarative YAML for version control and reproducibility
Mastering Deployments is essential for running reliable, scalable applications on Kubernetes. Combined with Services (covered in the next post), you’ll have the foundation for production-ready Kubernetes workloads.
Happy deploying! 🚀