
By default, Kubernetes provisions one load balancer for every microservice exposed through a LoadBalancer service. The problem is that the cost multiplies every time a new service is added. In this blog, we look at how to add microservices without significantly increasing load balancer costs in Kubernetes.


Kubernetes has many components that help with application deployments. One such component is Kubernetes Services.

Before getting into how to use just one load balancer for multiple apps, let's first understand Kubernetes Services.

What are Kubernetes Services?

Services in Kubernetes provide a stable network endpoint to access a group of pods, allowing for seamless communication between different components of a distributed application.

In simpler words, a service in Kubernetes is an abstraction layer that allows you to enable communication between various components in a cluster.

Pods are basic units that run containers. But the number of pods running within a cluster can change dynamically due to scaling, failures, or other factors.

K8s Services abstract away the underlying pod instances and thus, provide a consistent way to access the functionality they offer.

K8s services can be of the following types:

  1. ClusterIP: The default type; exposes the service on an internal cluster IP and forwards traffic to any pod matching the selector on the target port.

  2. Headless: Has no cluster IP; DNS returns the individual pod IPs, which is useful for stateful applications like databases.

  3. NodePort: Exposes the service externally on a static port on every worker node's IP address.

  4. LoadBalancer: Builds on NodePort and provisions an external load balancer from the cloud service provider (CSP).

  5. ExternalName: Acts as a DNS alias for an external service, allowing workloads inside the cluster to reach services outside it.

Before getting to the steps of how to expose multiple apps using just one load balancer in Kubernetes, let's look at the service type "LoadBalancer".

Also Read: How to Create & Manage Kubernetes Cluster using Kubeadm?

What is Service Type "LoadBalancer" in Kubernetes?

The service type "LoadBalancer" in Kubernetes is used to expose an application or service externally by provisioning a load balancer provided by the underlying infrastructure or cloud provider.

This type of service is useful when you want to make your application accessible to clients outside of the Kubernetes cluster.

When you create a service of type "LoadBalancer", Kubernetes requests a load balancer from the infrastructure provider.

The infrastructure provider provisions an external load balancer, which can be an external IP address or a hostname, and configures it to distribute traffic to the backend pods associated with the service.
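As a sketch, a minimal LoadBalancer service manifest might look like the following (the name `my-app1` and the ports are placeholder values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app1            # placeholder name
spec:
  type: LoadBalancer       # asks the cloud provider for an external load balancer
  selector:
    app: my-app1           # routes to pods labeled app: my-app1
  ports:
    - name: http
      protocol: TCP
      port: 80             # port exposed by the load balancer
      targetPort: 8080     # port the container listens on
```

Applying this on a cloud cluster (e.g. EKS, GKE, AKS) provisions one external load balancer just for this service, which is exactly the cost problem discussed below.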

Pros & Cons of Using Service Type “LoadBalancer”

Using the service type "LoadBalancer" in Kubernetes has its pros and cons. Let's look at the benefits and disadvantages of using the service type "LoadBalancer" in Kubernetes.

Benefits of Service Type “LoadBalancer” in Kubernetes

  1. Routes traffic across the different pods, making the application more available and reliable within the cluster.

  2. Detects pod failures and automatically redirects traffic to the healthy pods in the cluster.

  3. Enables efficient utilization of resources by scaling the backend pods up or down according to demand.

  4. Allows you to attach a static IP address and DNS (Domain Name System) name to a set of pods; notably, the IP address persists even if a pod dies.

  5. Accepts almost any kind of traffic, such as HTTP, TCP, UDP, gRPC, and WebSockets.

A pod without a service has a dynamic IP address. So if the pod dies, so does the IP address.

Also Read: How to Install & Use Prometheus Operator?

Cons of Service Type “LoadBalancer" in Kubernetes

The biggest downside of the service type "LoadBalancer" is that you need to create a separate LoadBalancer service for each application you want to expose.

In short, Kubernetes provisions a load balancer for each of these services, and a single load balancer typically costs upwards of 30 USD a month.

If you have 'n' microservices, you will need 'n' load balancers, which will cost you at least 30n USD per month.

Besides, as load balancing is managed by external cloud providers (like GCP, AWS, and Azure), it becomes difficult to readjust and configure the setup.

There is also a lack of optimization options, as the cost depends on the cloud provider and the specific load balancing solution you choose.

This can become a significant cost problem at scale: at 100,000 microservices, for example, the load balancers alone would cost around 3 million USD per month.

But, don't worry. You can use one Load balancer for multiple apps in Kubernetes.

Here's how. You can solve this problem by using the service type “ClusterIP” and an Nginx ingress controller.

Let's get more into it by understanding the service type "ClusterIP".

Also Read: Everything You Need to Operate with Kubernetes Namespaces

What is Service Type "ClusterIP" in Kubernetes?

It is the default service type in Kubernetes and is generally used for internal traffic.

With the service type "ClusterIP", traffic gets distributed to any of the pods matching the service's selector.

Service type "ClusterIP" is generally used for:

  • Debugging

  • Internal traffic

  • Testing

  • Internal Dashboards

Limitations of Using K8s Service Type: “ClusterIP”

Here are a few drawbacks of "ClusterIP".

  1. Provides access only within the cluster; to expose the service externally, you need a NodePort service or an Ingress controller such as NGINX.
  2. To discover the service outside the cluster, you need additional tools like Istio.

So, we now know that internal access within the cluster can be done via "ClusterIP" but the drawback is - "ClusterIP" services are not accessible from outside the cluster.

Here comes the savior - Ingress NGINX Controller!

Introducing Ingress NGINX Controller

Ingress is the most powerful method to expose services. It lets you do host-based and path-based routing to backend services.

Different types of Ingress Controllers like Nginx, Kong, and Istio exist.

With Ingress, you only need to maintain a single load balancer, which is considerably cheaper than creating a LoadBalancer service for each app.

Also Read: Differences between Consul vs Istio vs Linkerd

How to Use One Load Balancer for Multiple Apps in Kubernetes?

The short version is: use the Kubernetes service type "ClusterIP" with the NGINX Ingress Controller.

Step 1: Install and set up the NGINX Ingress controller in your Kubernetes cluster.
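As a sketch, one common way to install the controller is via Helm, using the chart published by the Kubernetes ingress-nginx project (this assumes you have Helm and kubectl configured against your cluster):

```shell
# Install the NGINX Ingress controller into its own namespace via Helm
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Verify the controller pod is running
kubectl get pods --namespace ingress-nginx
```

On a cloud cluster, this deployment creates the single LoadBalancer service (and hence the single cloud load balancer) that will front all your apps.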

Step 2: Define your microservices as separate Kubernetes services, but with the type "ClusterIP" instead of the type "LoadBalancer". You can see a YAML manifest of a ClusterIP service below.

apiVersion: v1
kind: Service
metadata:
  name: my-app1
spec:
  type: ClusterIP
  selector:
    app: my-app1
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080

Step 3: Create an Ingress resource that maps the desired routing rules to your microservices.
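As a sketch, an Ingress resource that routes two hypothetical ClusterIP services (`my-app1` from Step 2 and a second service `my-app2`) by path might look like this (the hostname and paths are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-apps-ingress          # placeholder name
spec:
  ingressClassName: nginx        # handled by the NGINX Ingress controller
  rules:
    - host: example.com          # placeholder hostname
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: my-app1    # ClusterIP service from Step 2
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: my-app2    # a second hypothetical ClusterIP service
                port:
                  number: 80
```

Both paths are served through the one load balancer that fronts the ingress controller, so adding more apps only means adding more rules, not more load balancers.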

Step 4: Configure the NGINX Ingress controller to use an AWS Elastic Load Balancer (ELB) by specifying the appropriate annotations on the controller's LoadBalancer service (for example, via the Helm chart's values).
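As an example of such an annotation (assuming AWS and the ingress-nginx Helm chart), asking AWS for a Network Load Balancer for the controller's service might look like this in the chart's values:

```yaml
# values.yaml fragment for the ingress-nginx Helm chart (AWS assumed)
controller:
  service:
    annotations:
      # Ask AWS to provision a Network Load Balancer for the controller
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```

The exact annotations available depend on your cloud provider and its load balancer controller, so check your provider's documentation for the options you need.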

Step 5: The NGINX Ingress controller will automatically configure the ELB and handle the routing based on your defined rules.

NGINX provides advanced routing and load balancing, making it a popular choice for Kubernetes deployments.

By using the NGINX Ingress controller with the K8s service type "ClusterIP", you can deploy multiple applications behind one load balancer in Kubernetes, making it a far more cost-effective option for multiple apps.



Written by Priyansh Khodiyar

Priyansh is the founder of UnYAML and a software engineer with a passion for writing. He has extensive experience writing about and working with DevOps tools and technologies, APMs, Kubernetes APIs, and more, and loves to share his knowledge with others.
