How I Cut Infrastructure Costs by 60% by Building a Fast-Feedback Development Platform with Containerization
When I joined Ericsson's R&D team as an intern, our development environment had a problem that every engineer on the team silently accepted as normal: spinning up a full local environment took over 30 minutes, consumed enormous cloud resources, and cost the team 40–60% more in infrastructure than it needed to.
Six months later, we had cut that cost by 60%, reduced the memory footprint per machine by 60%, and brought validation time down from 30 minutes to under 10 minutes by optimizing how resources were allocated.
The tools that made it possible? Docker and Kubernetes.
This is exactly how I did it.
🐳 What Is Docker — And Why Should You Care?
Before Docker, deploying software meant:
- "It works on my machine" — classic
- Different OS versions breaking builds
- Manual dependency installation on every server
- Hours wasted on environment setup
Docker solves this by packaging your application and everything it needs into a single portable unit called a container.
Think of it like this:
| Without Docker | With Docker |
| --- | --- |
| "Install Java 17, then Maven, then..." | `docker run my-app` |
| Works on my machine, breaks on the server | Runs identically everywhere |
| Manual environment setup per developer | One command, same environment |
| Dependency conflicts between services | Each container is isolated |
📦 Docker Basics — The Essential Commands
Your first Dockerfile:
```dockerfile
# Start from official Java 17 base image
FROM eclipse-temurin:17-jre-alpine

# Set working directory inside container
WORKDIR /app

# Copy the built jar file
COPY target/notification-service.jar app.jar

# Expose the port your app runs on
EXPOSE "port-number"

# Command to run when container starts
ENTRYPOINT ["java", "-jar", "app.jar"]
```
Build your image:
```shell
# Build the Docker image and tag it
docker build -t notification-service:1.0 .

# Verify it was created
docker images
```
Run your container:
```shell
# Run container, mapping the host port to the container port
docker run -p "port_number":"port_number" notification-service:1.0

# Run in background (detached mode)
docker run -d -p "port_number":"port_number" --name notification notification-service:1.0

# Check running containers
docker ps

# View logs
docker logs notification

# Stop container
docker stop notification
```
🔗 Docker Compose — Running Multiple Services Together
A real application has multiple services: your app, a database, Kafka, Redis. Docker Compose lets you define and run them all together.
Our notification service stack:
```yaml
# docker-compose.yml
version: '3.8'

services:
  # Spring Boot notification service
  notification-service:
    build: .
    ports:
      - "port_number:port_number"
    environment:
      - SPRING_PROFILES_ACTIVE=dev

  # PostgreSQL database
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: notifications
      POSTGRES_USER: "user_name"
      POSTGRES_PASSWORD: "password"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - notification-network

  # Apache Kafka
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on:
      - zookeeper
    ports:
      - "port_number:port_number"

  # Zookeeper (required by Kafka)
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0

  # Redis cache
  redis:
    image: redis:7-alpine
    ports:
      - "port_number:port_number"
    networks:
      - notification-network

networks:
  notification-network:
    driver: bridge

volumes:
  postgres-data:
```
Start everything:
```shell
# Start all services
docker-compose up -d

# Check everything is running
docker-compose ps

# View logs for a specific service
docker-compose logs -f notification-service

# Stop everything
docker-compose down

# Stop and remove volumes (fresh start)
docker-compose down -v
```
☸️ What Is Kubernetes — And When Do You Need It?
Docker Compose is great for local development. But in production you need:
- Auto-scaling — handle traffic spikes automatically
- Self-healing — restart crashed containers automatically
- Load balancing — distribute traffic across instances
- Rolling updates — deploy new versions with zero downtime
This is what Kubernetes does.
Kubernetes (K8s) is a container orchestration platform that manages your containers in production, deciding where they run, scaling them up and down, and keeping them healthy.
🏗️ Core Kubernetes Concepts
Pod
The smallest deployable unit in Kubernetes: one or more containers running together.
```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: notification-pod
spec:
  containers:
    - name: notification-service
      image: notification-service:1.0
      ports:
        - containerPort: "port_number"
```
Deployment
Manages multiple pod replicas and handles rolling updates.
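As a sketch of what that looks like in practice (names and the replica count here are illustrative, reusing the image built earlier):

```yaml
# deployment.yaml — illustrative sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notification-deployment
spec:
  replicas: 3                      # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: notification-service    # must match the pod template labels below
  template:
    metadata:
      labels:
        app: notification-service
    spec:
      containers:
        - name: notification-service
          image: notification-service:1.0
```

If a pod crashes or a node dies, the Deployment controller notices the replica count dropped below 3 and starts a replacement automatically.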
Service
Exposes your pods to network traffic — internal or external.
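A minimal Service manifest might look like this (the port numbers are placeholders I chose for illustration; the selector must match the labels on your pods):

```yaml
# service.yaml — illustrative sketch
apiVersion: v1
kind: Service
metadata:
  name: notification-service
spec:
  type: ClusterIP                  # internal-only; NodePort/LoadBalancer expose externally
  selector:
    app: notification-service      # routes traffic to pods carrying this label
  ports:
    - port: 80                     # port the Service listens on
      targetPort: 8080             # port the container actually serves on
```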
🛠️ How We Used Kubernetes-in-Docker (Kind) at Ericsson
Running full Kubernetes clusters in the cloud for every developer is expensive. We solved this using Kind (Kubernetes in Docker), a tool that runs a full Kubernetes cluster inside Docker containers on your local machine. Kind clusters are fast, lightweight, and disposable (they can even be multi-node), which makes them ideal for local Kubernetes testing and development.
Install Kind:
```shell
# On Linux/WSL
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Verify
kind version
```
Create a local cluster:
```shell
# Create cluster
kind create cluster --name kind

# Verify it's running
kubectl cluster-info --context kind-kind

# See nodes
kubectl get nodes
```
Deploy our notification stack locally:
```shell
# Apply all manifests
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Check deployments
kubectl get deployments

# Check pods
kubectl get pods

# Check logs
kubectl logs -f deployment/notification-deployment

# Scale up to 5 replicas
kubectl scale deployment notification-deployment --replicas=5
```
🎯 Helm — Kubernetes Package Manager
Managing dozens of YAML files manually gets messy fast. Helm is the package manager for Kubernetes: it templates and packages all your Kubernetes manifests into reusable charts.
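To make the "templating" idea concrete, here is a sketch of what a templated Deployment fragment inside a chart's `templates/` directory looks like; at install time Helm substitutes the `{{ ... }}` expressions with values from `values.yaml` (the field names mirror the `values.yaml` shown further down):

```yaml
# templates/deployment.yaml (fragment) — illustrative sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  # .Release.Name is the name you pass to `helm install`
  name: {{ .Release.Name }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
```

One template, many environments: the same chart deploys dev, staging, and prod just by swapping the values file.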
Install Helm:
```shell
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
Create a Helm chart:
```shell
helm create notification-chart
```
Chart structure:
```
notification-chart/
├── Chart.yaml           ← chart metadata
├── values.yaml          ← default configuration values
└── templates/
    ├── deployment.yaml  ← deployment template
    ├── service.yaml     ← service template
    └── configmap.yaml   ← config template
```
values.yaml — configure everything in one place:
```yaml
replicaCount: 3

image:
  repository: notification-service
  tag: "1.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: "port_number"
```
Deploy with Helm:
```shell
# Install chart
helm install notification ./notification-chart

# Upgrade with new values
helm upgrade notification ./notification-chart \
  --set replicaCount=5 \
  --set image.tag=2.0

# Check releases
helm list

# Rollback to previous version
helm rollback notification 1
```
This is exactly how we scaled down 10+ microservices at Ericsson, reducing memory footprint by 60% by tuning the resources.requests values in each chart's values.yaml.
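As a sketch of what that tuning looks like in a chart's `values.yaml` (the actual numbers were service-specific; the figures below are illustrative, not Ericsson's real values):

```yaml
# values.yaml (fragment) — illustrative numbers
resources:
  requests:
    memory: "256Mi"   # what the scheduler reserves per pod — oversizing this wastes cluster capacity
    cpu: "250m"
  limits:
    memory: "512Mi"   # hard cap; the pod is OOM-killed if it exceeds this
    cpu: "500m"
```

The cost win comes from the requests line: the scheduler packs pods onto nodes based on requests, so lowering inflated requests lets the same nodes host more pods.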
📊 The Results at Ericsson
Here's what we achieved after moving to a fully containerised, Kubernetes-orchestrated setup:
| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| Environment setup time | 30+ mins | Under 10 mins | 3x faster |
| Cloud infrastructure cost | Baseline | Reduced | 40–60% saving |
| Memory per developer machine | Baseline | Optimized | 60% reduction |
| Manual configuration steps | ~20 steps | 1 command | 95% reduction |
| Onboarding time for new developers | 2 days | 2 hours | 8x faster |
🐛 Mistakes I Made (So You Don't Have To)
1. Not setting resource limits
```yaml
# ❌ Wrong — no limits, one service can eat all memory
containers:
  - name: my-service
    image: my-service:1.0

# ✅ Correct — always set requests and limits
containers:
  - name: my-service
    image: my-service:1.0
    resources:
      requests:
        memory: "mem"
        cpu: "cpu"
      limits:
        memory: "mem"
        cpu: "cpu"
```
2. Storing secrets in environment variables directly
```yaml
# ❌ Wrong — secret visible in plain text
env:
  - name: DB_PASSWORD
    value: "mysecretpassword"

# ✅ Correct — use Kubernetes Secrets
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password
```
Create the secret:
```shell
kubectl create secret generic db-credentials \
  --from-literal=password=mysecretpassword
```
3. No health checks
```yaml
# ✅ Always add readiness and liveness probes
readinessProbe:
  httpGet:
    path: /actuator/health
    port: "port_number"
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /actuator/health
    port: "port_number"
  initialDelaySeconds: 60
  periodSeconds: 30
```
🚀 Getting Started Checklist
- Install Docker Desktop
- Write your first Dockerfile
- Build and run a container locally
- Write a docker-compose.yml for your full stack
- Install Kind and create a local cluster
- Write your first Deployment and Service YAML
- Install Helm and create a chart
- Deploy with Helm and test scaling
🔮 What's Next
Once you're comfortable with the basics:
- Kubernetes Ingress — expose services externally with routing rules
- Horizontal Pod Autoscaler — auto-scale based on CPU/memory
- Kubernetes Operators — automate complex stateful applications
- ArgoCD — GitOps continuous deployment for Kubernetes
- Istio — service mesh for advanced traffic management
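Of these, the Horizontal Pod Autoscaler is the quickest win once you have resource requests in place. A minimal sketch (the names and the 70% CPU target are illustrative choices, not a recommendation for every workload):

```yaml
# hpa.yaml — illustrative sketch
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: notification-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: notification-deployment   # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # add pods when average CPU exceeds 70% of requests
```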
Container orchestration is the backbone of modern cloud-native engineering. Every major cloud provider builds its managed offering around Kubernetes: AWS (EKS), Azure (AKS), Google Cloud (GKE). Learning it now puts you ahead of the curve.
Thanks for reading! I'm Zaina, a Software Engineer based in Perth, Australia, working with Java microservices, Apache Kafka, Docker and Kubernetes at Ericsson. Connect with me on LinkedIn or check out my portfolio.
Found this useful? Drop a ❤️ and share it with a fellow engineer just getting started with containers!