Networking in Kubernetes
Kubernetes networking lets pods, services, and users talk to each other. If you want to set up or troubleshoot apps properly, you need to understand the basics of how networking works in your cluster.
Service Types and Ingress Rules
Kubernetes has a few service types for exposing apps inside and outside the cluster. The most basic is ClusterIP, the default, which only allows access from inside the cluster.
If you want external access, NodePort services open a static port (30000-32767 by default) on every node. That’s fine for dev, but probably not what you want in production.
LoadBalancer services hook into cloud providers to spin up real load balancers that route traffic to your service. That’s usually the best bet for production traffic.
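For instance, a minimal Service manifest might look like the sketch below. The selector app: my-app and targetPort 8080 are assumptions about your app; the name app-service matches the Ingress example that follows.

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: ClusterIP      # swap to NodePort or LoadBalancer for external access
  selector:
    app: my-app        # must match your pods' labels
  ports:
  - port: 80           # port the Service exposes inside the cluster
    targetPort: 8080   # port the pods actually listen on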
Ingress resources act like smart routers for HTTP/HTTPS. They handle external access with URL-based routing, TLS termination, and name-based virtual hosting. Keep in mind that an Ingress resource does nothing on its own; you also need an Ingress controller (NGINX Ingress, Traefik, and so on) running in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
CoreDNS handles DNS in Kubernetes, so pods can reach services by name (app-service, or fully qualified as app-service.default.svc.cluster.local) instead of hard-coded IPs. Makes things a bit friendlier, don’t you think?
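If you ever want to sanity-check DNS resolution from inside the cluster, a throwaway pod does the trick. This assumes the app-service Service from the earlier sketch exists in your current namespace:

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup app-service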
Network Policies and Security
Network Policies work like firewalls inside Kubernetes: they control which pods can talk to which. By default, all pods can reach each other, which honestly isn’t great for production environments. One caveat: policies are only enforced if your CNI plugin (Calico, Cilium, and similar) supports them.
With Network Policies, you use labels and selectors to set rules for incoming (ingress) and outgoing (egress) traffic. This lets you stick to the “least privilege” approach—always a smart security move.
A basic Network Policy might look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api-server
    ports:
    - protocol: TCP
      port: 3306
This policy lets only pods labeled role: api-server reach the database pods (role: db) on TCP port 3306.
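If you want true least privilege, a policy like this is often paired with a namespace-wide default-deny. Here’s a common sketch (the name is arbitrary) that selects every pod and blocks any ingress traffic not explicitly allowed by another policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so nothing is allowed by default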
You can manage TLS certificates with Kubernetes Secrets. Ingress controllers use these to secure connections with HTTPS.
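As a rough sketch, assuming you already have a certificate and key for example.com on disk, you’d store them in a TLS Secret and then reference it from the Ingress:

kubectl create secret tls example-tls --cert=tls.crt --key=tls.key

Then add a tls block to the Ingress spec alongside the rules shown earlier:

spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls   # must match the Secret created above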
Storage and Stateful Applications
If your app needs to keep data around, persistent storage becomes crucial. Kubernetes actually gives you a bunch of ways to handle storage and keep state even if containers restart.
Persistent Volumes and Claims
Persistent Volumes (PVs) let you manage storage resources independently from pod lifecycles. To see all PVs in your cluster, run:
kubectl get pv
This command lists all persistent volumes and shows details like capacity, access modes, and status.
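For reference, a minimal PV might look like the sketch below. The hostPath backing is only sensible for a single-node test cluster; real clusters typically use cloud disks, NFS, or dynamic provisioning via a StorageClass (the name and path here are made up):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data   # directory on the node's filesystem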
Users request storage with Persistent Volume Claims (PVCs). To check what PVCs exist:
kubectl get pvc
Create a new PVC by defining it in YAML and applying it:
kubectl apply -f pvc-definition.yaml
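What goes into pvc-definition.yaml depends on your setup, but a minimal sketch (named my-claim to match the describe example below, and assuming a default StorageClass handles provisioning) looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce     # one node can mount the volume read-write
  resources:
    requests:
      storage: 1Gi    # how much storage the claim asks for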
Need more info about a particular PVC? Try:
kubectl describe pvc my-claim
This command displays capacity, access modes, events, and which volume is bound.
StatefulSets for Stateful Applications
StatefulSets handle the deployment of stateful apps like databases. Unlike Deployments, they give each pod a persistent identity.
To see all StatefulSets in your cluster:
kubectl get statefulsets
Create a StatefulSet using a YAML file:
kubectl apply -f statefulset-definition.yaml
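The contents of statefulset-definition.yaml depend on your app. Here’s a trimmed-down sketch for a MySQL StatefulSet (matching the examples below), assuming a headless Service named mysql and a Secret named mysql-secret already exist:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql             # headless Service that gives pods stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret   # assumed to exist already
              key: password
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:            # each replica gets its own PVC (data-mysql-0, data-mysql-1, ...)
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi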
If you need to troubleshoot, check out a specific StatefulSet:
kubectl describe statefulset mysql
StatefulSets create pods with predictable names (like mysql-0, mysql-1). They also give each pod stable storage and a stable network identity, which makes them ideal for clustered apps that need consistent hostnames.
To scale a StatefulSet up or down:
kubectl scale statefulset mysql --replicas=5
This command keeps pod ordering and uniqueness intact while scaling, helping maintain data integrity.