Kubernetes Series - Part 3: Networking Essentials
4 min read
Series Navigation
- Part 1: Core Fundamentals
- Part 2: Workload Management
- Part 3: Networking Essentials (Current)
- Part 4: Storage and Persistence
- Part 5: Configuration and Secrets
- Part 6: Security and Access Control
- Part 7: Observability
- Part 8: Advanced Patterns
- Part 9: Production Best Practices
Introduction
After managing Kubernetes clusters in production for several years, I’ve learned that networking is often the most challenging aspect to get right. In this article, I’ll share practical insights from real-world experience implementing Kubernetes networking solutions.
Services
Services provide stable networking for pods. Here’s a production-ready example with common annotations we use:
apiVersion: v1
kind: Service
metadata:
  name: web-app
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: web-app
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800

Service Types and Use Cases
- ClusterIP
  - Internal service communication
  - Default type for most services

  spec:
    type: ClusterIP
    clusterIP: None  # Headless service (see the sketch after this list)

- NodePort
  - Development and testing
  - Direct node access needed

  spec:
    type: NodePort
    ports:
      - port: 80
        nodePort: 30080

- LoadBalancer
  - Production external access
  - Cloud provider integration

  spec:
    type: LoadBalancer
    loadBalancerSourceRanges:
      - 10.0.0.0/8
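A note on the clusterIP: None line above: a headless Service allocates no virtual IP, and cluster DNS resolves the Service name straight to the individual pod IPs. A minimal sketch; the web-headless name and ports are illustrative, not from our manifests:

apiVersion: v1
kind: Service
metadata:
  name: web-headless      # hypothetical name for illustration
spec:
  clusterIP: None         # headless: no virtual IP, DNS returns pod IPs directly
  selector:
    app: web-app
  ports:
    - port: 8080
      targetPort: 8080

This is the usual pairing for StatefulSets, where each pod needs a stable, individually addressable DNS name.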
Ingress Controllers
In production, we use the NGINX Ingress Controller. Here’s our base configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: tls-secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80

Advanced Ingress Patterns
Here’s how we handle advanced scenarios:
- Rate Limiting:

  metadata:
    annotations:
      nginx.ingress.kubernetes.io/limit-rps: "10"
      nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"

- Authentication:

  metadata:
    annotations:
      nginx.ingress.kubernetes.io/auth-type: basic
      nginx.ingress.kubernetes.io/auth-secret: basic-auth
      nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"

- Canary Deployments:

  metadata:
    annotations:
      nginx.ingress.kubernetes.io/canary: "true"
      nginx.ingress.kubernetes.io/canary-weight: "20"

Ingress Best Practices
- SSL/TLS Configuration
  - Always use HTTPS in production
  - Implement automatic certificate management
  - Configure proper SSL policies
- Path Management
  - Use precise path matching
  - Configure proper redirects
  - Handle trailing slashes
- Load Balancing
  - Configure session affinity when needed (see the annotation sketch after this list)
  - Set appropriate timeouts
  - Monitor backend health
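On the session affinity and timeout points, the NGINX Ingress Controller handles both through per-Ingress annotations. A sketch of the relevant ones; the values are placeholders to adjust per service, not recommendations:

metadata:
  annotations:
    # cookie-based affinity pins a client to one backend pod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
    # upstream proxy timeouts, in seconds
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"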
Network Policies
Security through network isolation is crucial. Here’s our default deny policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
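One caveat with denying all egress: pods lose DNS resolution, so almost nothing works until lookups are re-allowed. A common companion policy, sketched here assuming CoreDNS runs in kube-system and carries the usual k8s-app: kube-dns label (adjust the selectors to your distribution):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}                    # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns      # assumed CoreDNS pod label
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53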
And a more specific policy for microservices:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
spec:
  podSelector:
    matchLabels:
      app: api-service
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
        - podSelector:
            matchLabels:
              app: web-frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: database
      ports:
        - protocol: TCP
          port: 5432

Network Policy Best Practices
- Default Deny
  - Start with denying all traffic
  - Explicitly allow required communication
  - Document exceptions
- Namespace Isolation
  - Use namespace labels for policy targeting (see the sketch after this list)
  - Implement environment separation
  - Control cross-namespace traffic
- Egress Control
  - Limit external endpoints
  - Monitor egress traffic
  - Use DNS policies
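For the namespace isolation point, a small policy that admits traffic only from pods in the same namespace is a useful building block; anything arriving from another namespace stays blocked. A minimal sketch:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}            # selects every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # a bare podSelector only matches pods in the same namespace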
DNS and Service Discovery
Here’s the CoreDNS configuration we use in production:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
    example.com:53 {
        errors
        cache 30
        forward . 10.0.0.53
    }

DNS Troubleshooting Tips
- Common Issues
  - DNS resolution delays
  - Cache problems
  - Zone transfer failures
- Resolution Steps
  - Check CoreDNS pods
  - Verify service DNS (a throwaway debug pod helps here; see the sketch after this list)
  - Monitor DNS metrics
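For the resolution steps, the quickest check is a throwaway pod with DNS tooling inside the cluster. A sketch based on the dnsutils pod from the upstream Kubernetes DNS debugging guide:

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      command: ["sleep", "infinity"]   # keep the pod around for interactive checks

From there, kubectl exec -ti dnsutils -- nslookup kubernetes.default confirms whether cluster DNS answers at all, and checking /etc/resolv.conf inside the pod shows which nameserver it is actually using.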
Production Checklist
✅ Service Configuration
- Appropriate service type
- Resource limits set
- Health checks configured
- Monitoring enabled
✅ Ingress Setup
- TLS configured
- Rate limiting
- Authentication
- Path routing
✅ Network Policies
- Default deny policy
- Explicit allow rules
- Namespace isolation
- Egress control
✅ DNS Management
- CoreDNS optimization
- Custom domains
- Caching configuration
- Monitoring setup
Common Networking Issues
- Service Discovery
  - DNS resolution delays
  - Endpoint updates
  - Service mesh integration
- Load Balancing
  - Session affinity
  - Health check configuration (readiness probes drive Service endpoints; see the sketch after this list)
  - SSL termination
- Network Policy
  - Policy ordering
  - Namespace isolation
  - Egress control
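Many of the load-balancing symptoms come back to health checks: a Service only routes traffic to pods whose readiness probe passes, so a flapping probe shows up as endpoints disappearing and reappearing. A minimal container-level sketch, with illustrative path, port, and timings:

# excerpt from a Deployment's pod template (placeholder image and endpoint)
containers:
  - name: web-app
    image: example.com/web-app:1.0
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3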
Real-world Example
Here’s a complete networking setup we use in production:
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '8080'
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: web-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-app-policy
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              environment: production
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              environment: production
      ports:
        - protocol: TCP
          port: 5432

Conclusion
Kubernetes networking requires careful planning and implementation. Key takeaways:
- Use appropriate service types
- Implement secure ingress configurations
- Enforce network policies
- Optimize DNS settings
In the next part, we’ll explore storage and persistence in Kubernetes, where I’ll share practical insights about managing stateful workloads.