Monitoring and Performance
Monitoring in Kubernetes isn’t just helpful—it’s essential. It lets DevOps engineers spot issues early and keep resource usage in check. The right tools and observability habits make performance management much less painful.
Observability Best Practices
Keeping an eye on Kubernetes clusters takes a few different strategies. The kubectl top command (which needs the Metrics Server add-on installed in the cluster) gives you a quick look at resource consumption:
kubectl top nodes
kubectl top pods -n <namespace>
These commands show CPU and memory usage, so you can spot resource hogs. For a clearer picture, consider these steps:
- Set up resource requests and limits for every workload (see the sketch after this list)
- Configure horizontal pod autoscaling:
kubectl autoscale deployment <name> --min=2 --max=5 --cpu-percent=80
- Add custom metrics using Prometheus adapters
- Use structured logging with something like Fluentd or Loki
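As a reference point, here’s a minimal sketch of requests and limits on a Deployment; the names and values are placeholders, not recommendations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # placeholder image
        resources:
          requests:
            cpu: 100m       # the scheduler reserves this much
            memory: 128Mi
          limits:
            cpu: 500m       # CPU above this is throttled
            memory: 256Mi   # exceeding this gets the container OOM-killed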
Check cluster events regularly with kubectl get events --sort-by='.lastTimestamp'. This helps you catch recurring problems before they snowball.
Using Kubernetes Dashboard and Tools
The Kubernetes Dashboard gives you a visual way to monitor cluster health. Deploy it like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
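The Dashboard isn’t exposed outside the cluster by default. One common way to reach it locally is through kubectl proxy; the URL below is the path documented for the default install:
kubectl proxy
# Then open in a browser:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/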
For deeper monitoring, try these tools:
| Tool | Purpose | Key Feature |
| --- | --- | --- |
| Prometheus | Metrics collection | Time-series database |
| Grafana | Visualization | Customizable dashboards |
| Jaeger | Distributed tracing | Request flow tracking |
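If you’d rather not wire Prometheus and Grafana up by hand, the community kube-prometheus-stack Helm chart bundles them; this assumes Helm is installed, and the release and namespace names are just examples:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace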
Grab logs with kubectl logs <pod-name>. If you want to watch logs live, add the -f flag. For performance troubleshooting, check a pod’s resource configuration with:
kubectl describe pod <pod-name>
This command shows resource requests, limits, and recent events, which is handy for finding bottlenecks. Pair it with kubectl top pod <pod-name> to see what the pod is actually consuming.
Security and Compliance
Kubernetes security is a big deal. You need a thorough approach to protect clusters from threats. Solid authentication, tight authorization, and regular audits are the backbone of a secure setup.
Managing Users and Access
Implement Role-Based Access Control (RBAC) to lock down your Kubernetes cluster. RBAC lets admins decide who can access what, and what actions they can take.
Set up authentication with a strong method:
kubectl config set-credentials <username> --token=<token>
For authorization, create roles and bind them to users:
kubectl create role developer --verb=get,list,watch --resource=pods
kubectl create rolebinding dev-binding --role=developer --user=jane
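The same role and binding can also be written as YAML and applied with kubectl apply -f, which makes them easier to review and version; the default namespace is assumed here:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: default
rules:
- apiGroups: [""]               # core API group (pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-binding
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io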
Service accounts give pods their own identities:
kubectl create serviceaccount app-service-account
kubectl get serviceaccount
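To give a pod that identity, reference the service account in its spec; the pod and image names below are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  serviceAccountName: app-service-account   # created above
  containers:
  - name: app
    image: nginx:1.25                        # placeholder image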
Stick to the least privilege principle. Only give users and processes the access they truly need—no more, no less.
Security Context and Network Policies
Security Context sets privilege and access controls for pods and containers. Here’s how you might set it:
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  capabilities:
    drop: ["ALL"]
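In a full manifest this sits at the pod and container level; capabilities can only be dropped per container. A sketch with a placeholder image:
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:              # pod-level defaults
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    securityContext:            # container-level settings
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]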
Network Policies act as firewalls, controlling pod-to-pod traffic:
kubectl apply -f network-policy.yaml
kubectl get networkpolicies
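A network-policy.yaml might look roughly like this; it allows only pods labeled app=frontend to reach pods labeled app=api on port 8080, and the labels, namespace, and port are assumptions for illustration:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:            # the pods this policy protects
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:        # only traffic from these pods is allowed
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080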
Encrypt sensitive stuff using Kubernetes Secrets:
kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=secret
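A pod can then pull those values in as environment variables instead of hard-coding them; the pod and image names are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password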
Always use TLS for API server and service communication. Set up network policies to limit pod communication by namespace, label, or IP range.
Auditing and Compliance Checks
Auditing helps you keep Kubernetes secure. Turn on audit logging on the API server to track actions in your cluster. If the audit log is routed to the API server’s standard output (--audit-log-path=-), you can inspect it with:
# Check audit logs
kubectl logs kube-apiserver-<node-name> -n kube-system | grep audit
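Audit logging itself is enabled on the API server with an audit policy file passed via --audit-policy-file (plus a destination via --audit-log-path). A minimal policy sketch, where the resource choices are only examples:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record who touched Secrets and ConfigMaps, but not their contents
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Record full request/response bodies for RBAC changes
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]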
To scan for security issues, use:
# List Pod Security Policies (removed in Kubernetes 1.25; newer clusters use Pod Security Admission namespace labels instead)
kubectl get psp
# Verify RBAC settings
kubectl auth can-i create pods --namespace production
Tools like Kubernetes security scanners can point out configuration vulnerabilities.
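For example, kube-bench checks nodes against the CIS Kubernetes Benchmark and Trivy scans images for known CVEs; both are open-source tools that you’d install separately:
# Check the node against the CIS Kubernetes Benchmark
kube-bench run
# Scan a container image for known vulnerabilities
trivy image nginx:1.25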
Make these checks part of your routine:
- Validate RBAC permissions
- Check for exposed services
- Scan images for vulnerabilities
- Review network policies
- Ensure all communication uses TLS
Setting up compliance baselines helps your clusters meet industry and company standards.
Frequently Asked Questions
Kubernetes commands give DevOps engineers the tools to manage containerized apps. These core commands cover everyday needs, from pod management to deployment strategies.
What are the essential commands for Pod management in Kubernetes?
Pod management is where Kubernetes starts. Use kubectl get pods to list all pods in your current namespace.
If you want details about a specific pod, run kubectl describe pod <pod-name>. This shows pod specs and current state.
To delete pods, use kubectl delete pod <pod-name>. You can also remove all pods with a certain label using kubectl delete pods -l app=<app-name>.
Create pods directly with kubectl run <name> --image=<image> or, more commonly, from a manifest using kubectl create -f pod-definition.yaml.
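A minimal pod-definition.yaml could look like this, with placeholder names and image:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: app
    image: nginx:1.25     # placeholder image
    ports:
    - containerPort: 80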
How can you monitor the health and performance of your Kubernetes cluster?
Check cluster health with kubectl cluster-info. This command shows the endpoints of the control plane and core cluster services.
For node info, kubectl get nodes lists all nodes, and kubectl describe node <node-name> gives detailed capacity and allocation figures.
To monitor resource use, run kubectl top nodes and kubectl top pods. These show CPU and memory usage across the cluster.
Engineers also use kubectl get events to track cluster-wide events that might point to issues or changes.
What is the process to scale applications up or down using kubectl?
Scaling applications with kubectl feels pretty direct. You just use kubectl scale deployment <deployment-name> --replicas=<number> to change how many pods are running.
If you want pods to scale on their own, you can set up autoscaling. The command kubectl autoscale deployment <deployment-name> --min=2 --max=10 --cpu-percent=80 lets the system adjust pod counts based on CPU usage.
To see how things are scaling, run kubectl get hpa. That’ll show you current scaling metrics and targets.
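The same autoscaler can be kept in version control as a HorizontalPodAutoscaler manifest; this sketch uses the autoscaling/v2 API and a placeholder Deployment name:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web             # placeholder deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80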
Can you describe the steps for accessing logs and debugging applications in a Kubernetes environment?
Logs are your best friend when something’s off. Use kubectl logs <pod-name> to pull up standard output from a pod’s container.
If a container crashed, try kubectl logs <pod-name> --previous. That grabs logs from the last run, which is often where the clues hide.
Need to poke around inside a container? kubectl exec -it <pod-name> -- /bin/bash drops you right in for interactive debugging.
For network issues, kubectl port-forward <pod-name> <local-port>:<pod-port> sets up a tunnel to the pod. Sometimes, that’s the only way to see what’s really going on.
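For example, you might forward local port 8080 to a pod listening on 8080 and probe it with curl; the pod name and path here are made up for illustration:
kubectl port-forward pod/my-api-pod 8080:8080
# In a second terminal:
curl http://localhost:8080/healthz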
What commands would you use to manage Kubernetes Deployments and Rollouts?
Start by listing your deployments with kubectl get deployments. It’s handy for getting the lay of the land.
To make a new deployment, you can use kubectl create deployment <name> --image=<image>. Or, if you like YAML (who doesn’t, sometimes?), just run kubectl apply -f deployment.yaml.
Want to see how an update is rolling out? kubectl rollout status deployment/<deployment-name> keeps you posted on deployment progress.
If you ever need to look back, kubectl rollout history deployment/<deployment-name> shows all the previous revisions. It’s a lifesaver when you need to troubleshoot or just double-check what changed.
How can you update or roll back a Kubernetes application efficiently?
You can update applications with kubectl set image deployment/<deployment-name> <container-name>=<new-image>. This command swaps out the container image for something new.
When things get more complicated, engineers often reach for kubectl apply -f updated-deployment.yaml. That way, you can apply all the tweaks in your config file at once.
If an update backfires, just run kubectl rollout undo deployment/<deployment-name>. This quickly brings back the last working version, which is pretty handy.
Need to roll back to a particular version? Use kubectl rollout undo deployment/<deployment-name> --to-revision=<revision-number> and pick from your revision history.
It’s smart to record the reason for each change. The classic approach is kubectl apply -f deployment.yaml --record, though that flag is deprecated in newer kubectl releases; setting the kubernetes.io/change-cause annotation on the deployment does the same job. Either way, knowing who did what and why makes tracking down problems a whole lot easier.