Issue: Load Balancer service stuck at assigning external IP
- A Kubernetes LoadBalancer service stuck with its external IP in the Pending state is a common issue in standalone (bare-metal) clusters.
- This is expected behaviour rather than an issue.
- When using a cloud provider such as GCP, AWS or Azure, the cloud provider itself assigns the external IP for the LoadBalancer service, but charges extra for each IP. A standalone cluster has no such integration, so the service stays in Pending.
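For reference, the symptom looks roughly like this (the service name, IPs and ports below are illustrative, not taken from this cluster):

kubectl get svc -n lb
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
demo-app   LoadBalancer   10.103.25.10   <pending>     80:31234/TCP   5m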
Solution:
- MetalLB can be used to solve this issue. MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters, using standard routing protocols.
- More info on MetalLB can be found here
- MetalLB will be configured inside the Kubernetes cluster we are working on.
Prerequisites:
- MetalLB can be deployed with three simple commands, but before we start the deployment we need to:
- identify free IPs for MetalLB to assign to the Kubernetes LoadBalancer service
- check which mode kube-proxy is running in
- In my case, my cluster's nodes are on the subnet 10.39.251.128/27, and I have identified 3 free IPs in this subnet (10.39.251.130-10.39.251.132). Similarly, for your setup, identify the free IPs in your subnet and keep them handy.
- If kube-proxy is running in IPVS mode, we need to enable strict ARP mode:
[root@host-10-39-251-137 ~]# kubectl get configmap -n kube-system kube-proxy -o yaml | grep mode
    mode: "ipvs"
[root@host-10-39-251-137 ~]# kubectl get configmap -n kube-system kube-proxy -o yaml | grep strictARP
      strictARP: false
[root@host-10-39-251-137 ~]# kubectl edit configmap -n kube-system kube-proxy
[root@host-10-39-251-137 ~]# kubectl get configmap -n kube-system kube-proxy -o yaml | grep strictARP
      strictARP: true
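If you prefer a non-interactive change over kubectl edit, the MetalLB installation docs suggest piping the ConfigMap through sed; a sketch of that approach (verify the resulting YAML before applying on a production cluster):

kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system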
MetalLB deployment steps
- The Kubernetes declarative method, using the manifests provided by the MetalLB team, is the suggested way to deploy MetalLB. At the time this article was written, metallb v0.12.1 was the latest stable version.
- Official MetalLB deployment and documentation can be found here
- Create namespace
[root@host-controller ~]# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
namespace/metallb-system created
- Deploy required manifest files
[root@host-controller ~]# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
role.rbac.authorization.k8s.io/controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
rolebinding.rbac.authorization.k8s.io/controller created
daemonset.apps/speaker created
deployment.apps/controller created
- Once the manifests are deployed, we can see pods spawning in the metallb-system namespace. However, the MetalLB speaker pods, which are deployed as part of a DaemonSet, will be in CreateContainerConfigError.
- This is because we have not yet supplied, as a ConfigMap in the namespace, the external IPs we identified for the load balancer.
[root@host-controller ~]# kubectl get all -n metallb-system
NAME                              READY   STATUS                       RESTARTS   AGE
pod/controller-7476b58756-74cmw   0/1     ContainerCreating            0          20s
pod/speaker-lqvp6                 0/1     ContainerCreating            0          20s
pod/speaker-rxclp                 0/1     CreateContainerConfigError   0          20s
pod/speaker-xbfsn                 0/1     CreateContainerConfigError   0          20s
[root@host-controller ~]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.39.251.130-10.39.251.132
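Applying the ConfigMap is a single command; the confirmation line is what kubectl normally prints for a newly created object:

kubectl apply -f configmap.yaml
configmap/config created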
- Once the ConfigMap is applied, we can see that the controller and speaker pods are running.
[root@host-controller ~]# kubectl get all -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-7476b58756-74cmw   1/1     Running   0          3m20s
pod/speaker-lqvp6                 1/1     Running   0          3m20s
pod/speaker-rxclp                 1/1     Running   0          3m20s
pod/speaker-xbfsn                 1/1     Running   0          3m20s
Verification of the LoadBalancer service with MetalLB
- Let's try to expose the nginx deployment via a LoadBalancer service; this is the one that was stuck in the Pending state waiting for an external IP. I'm using an existing nginx deployment which I brought up earlier; details of the same can be found here.
[root@host-controller~]# kubectl expose deploy ngnix-data-hub -n lb --target-port=80 --port=9843 --type LoadBalancer
service/ngnix-data-hub exposed
[root@host-controller~]# kubectl get svc -n lb
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
ngnix-data-hub   LoadBalancer   10.103.251.20   10.39.251.130   9843:32474/TCP   8s
- We can see that the external IP has now been assigned, and it is the first IP from the range we passed to the metallb-system namespace via the ConfigMap.
[root@host-10-39-251-137 ~]# curl 10.39.251.130:9843
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
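As a side note, if a service should receive a specific address from the pool rather than the next free one, MetalLB v0.12 honours spec.loadBalancerIP (a pool can also be chosen with the metallb.universe.tf/address-pool annotation). A minimal sketch, with an assumed selector label and an illustrative IP that must lie inside the configured pool:

apiVersion: v1
kind: Service
metadata:
  name: ngnix-data-hub            # reusing the deployment name from above for illustration
  namespace: lb
spec:
  type: LoadBalancer
  loadBalancerIP: 10.39.251.131   # must fall within the MetalLB address pool
  selector:
    app: ngnix-data-hub           # assumed pod label; match your deployment's labels
  ports:
  - port: 9843
    targetPort: 80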