Kubernetes Ingress Controller

Reference: https://kodekloud.com/blog/kubernetes-ingress/

Kubernetes is one of the fastest growing technologies, and it's being adopted by many technology-driven organizations. So knowing how it works and how to use it is becoming increasingly important.

Kubernetes has a lot of moving parts that serve different purposes. In this article we’re going to discuss one such part, called Ingress.

Before we dive into this subject, it's worth starting with a little background on how networking in Kubernetes works.

Networking in Kubernetes

Networking has always been about delivering some traffic from a source to a destination – usually through a set of routing rules – and Kubernetes networking is not very different, except that the source or destination of the traffic is typically a Pod.

The Pod is the smallest unit of workload in Kubernetes. It is the object that actually contains the application (typically one microservice of an application). And because Pods are ephemeral by nature, they can be destroyed and created at any time. This introduces a new networking challenge: the constantly changing IP addresses of the Pods.

Every time a new pod gets created, it is assigned a new IP address. A client that communicates with a pod will have a hard time tracking this constantly moving target. Where will it send network traffic next time the IP of this pod changes? How will it know what the new IP is?


To solve this issue, Kubernetes uses an object called a “service”.

The service provides a stable IP address (virtual IP address) for clients that want to connect to Pods. It “hides” the actual IP addresses of the pods behind it and forwards the incoming connections to them.


Now it's up to this service object to track the changes in the IP addresses of the backend Pods and to detect which Pods are currently available to receive traffic. The client no longer has to worry about these changes, as it only sees and connects to the IP of the service.

Types of Service

There are currently three main types of service in Kubernetes.

  1. ClusterIP service
  2. NodePort service
  3. LoadBalancer service

Each type addresses a specific need in networking, but might have its own challenges and limitations.

1. ClusterIP Service

A ClusterIP service is only used inside a Kubernetes cluster. It cannot be reached from outside the cluster and it’s not routable on the external network. Only objects running inside the Kubernetes cluster can connect to this type of service.

This type of service is used for communication between microservices running inside the Pods.
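For illustration, here is a minimal sketch of a ClusterIP service manifest. The service name, the label selector, and the ports are hypothetical placeholders, not values from this article:

apiVersion: v1
kind: Service
metadata:
  name: backend-svc          # hypothetical service name
spec:
  type: ClusterIP            # the default type, so this line could be omitted
  selector:
    app: backend             # forwards to Pods labeled app=backend (hypothetical label)
  ports:
  - port: 80                 # stable port on the service's virtual IP
    targetPort: 8080         # port the container inside the Pod listens on

Pods inside the cluster can now reach the backend at the service's ClusterIP (or its DNS name) on port 80, no matter how often the Pod IPs change.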


But applications running inside Kubernetes probably won't be of much use if clients can't connect to them from outside, right? So how does Kubernetes enable external connectivity?

2. NodePort Service

The NodePort is the simplest way to allow external access to Pods running on Kubernetes. It allocates one of the available ports on the worker node (the server actually hosting the Pods). Traffic coming in on this specific port is sent to the service, and from there it can finally reach a Pod. So we basically create a door on our worker node. External clients can now enter through it: they contact the IP address of the node and come in through the door opened at a specific port number.

It is worth mentioning that when exposing a service using a NodePort, this port is opened on ALL the worker nodes. Traffic hitting ANY of the worker nodes on this port will be redirected to the Pods as we stated earlier.


Also note that the NodePort builds on the ClusterIP service. This means that when you create a NodePort there’s also a ClusterIP created. Traffic forwarded to the NodePort is actually redirected to the ClusterIP first and then to the Pods.

So traffic path is: Client 🡪 Worker Node IP 🡪 NodePort 🡪 ClusterIP Service 🡪 Pod
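As a sketch, a NodePort manifest only differs from the ClusterIP one in its type and the extra nodePort field (again, the names and ports here are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport     # hypothetical service name
spec:
  type: NodePort
  selector:
    app: backend             # hypothetical Pod label
  ports:
  - port: 80                 # port of the ClusterIP service created under the hood
    targetPort: 8080         # port the container inside the Pod listens on
    nodePort: 30080          # the "door" opened on every worker node (default range 30000-32767)

If nodePort is omitted, Kubernetes picks a free port from the range automatically.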

Although NodePort is a relatively simple way to enable external access to the cluster, it has some limitations.

  1. Clients need to know the IP of the worker node they will connect to. And as nodes are added to and removed from the cluster, this becomes challenging.
  2. It requires opening external access to a port on the nodes for each service. With a large number of services running inside the cluster, this quickly becomes a security issue. Also, the number of services that can be exposed is limited by the number of free ports available on the nodes.

3. LoadBalancer Service

To overcome some of the limitations of the NodePort service, another solution is provided by Kubernetes. It makes use of an external load balancer (typically provisioned on a cloud platform). This load balancer uses a single virtual IP and a specific port. Any connection received on this IP with the specified port is forwarded to one of the nodes in the cluster, which then forwards it to the Pod.

Now the clients don’t have to worry about the specific IP of each Node. They can use a single IP, corresponding to the load balancer. Then it’s up to the load balancer to contact the correct IP and port belonging to some cluster node.


In the same way that the NodePort builds on the ClusterIP, the LoadBalancer service builds on the NodePort. This means that for every LoadBalancer service, a NodePort service is also created. The load balancer forwards the traffic to it, and from there it travels all the way to the Pod.

So the path now is : Client 🡪 Loadbalancer 🡪 Worker Node IP 🡪 NodePort 🡪 ClusterIP Service 🡪 Pod
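A minimal sketch of such a service, with hypothetical names and ports:

apiVersion: v1
kind: Service
metadata:
  name: backend-lb           # hypothetical service name
spec:
  type: LoadBalancer         # asks the cloud provider to provision an external load balancer
  selector:
    app: backend             # hypothetical Pod label
  ports:
  - port: 80                 # port exposed on the load balancer's external IP
    targetPort: 8080         # port the container inside the Pod listens on

On a supported cloud platform, the provisioned load balancer's address shows up in the service's status (kubectl get service backend-lb displays it under EXTERNAL-IP).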

Each of the previous service types had its limitations, and the LoadBalancer is no different.

The limitation of the LoadBalancer service arises from the fact that you’ll need a separate load balancer IP created for every such service you want to expose to the outside world.

So each time you create a service with type LoadBalancer, the cloud provider creates a new cloud load balancer with a different IP address. And it will use this IP as the frontend of your service, exposed to the clients.

This of course makes the cost of using the LoadBalancer service type very high, especially when you want to expose a lot of services. You typically get charged for each load balancer created.

Another disadvantage is that the LoadBalancer service forwards traffic based on Layer 4 information in the TCP/IP stack. This means it only inspects and distributes traffic based on IP addresses and ports. It can't distribute traffic based on the application content encapsulated at Layer 7, such as HTTP hostnames or URL paths.

And this is where the Ingress comes into play.

Kubernetes Ingress

Ingress is a Kubernetes API resource. It allows external traffic to be routed to the services inside the cluster. Routing is based on some rules that are set as part of the ingress configuration.

Although the ingress is just another way of exposing services outside the cluster, it is not itself considered a service type. It is a separate API object type.

The way it works at a high level is that you first deploy what is called an Ingress controller.

This controller is the actual engine that executes the rules of the ingress, understands them, and then decides what to do with the traffic.

You can think of an ingress controller like a layer 7 load balancer which has some forwarding rules. And these rules are passed to this load balancer through an ingress object. Yes, we need both an ingress controller, and an ingress resource/object in Kubernetes. Later on we’ll see why.

There are quite a few ingress controller implementations out there, but among the most popular are NGINX and HAProxy.

Now let’s get into the steps of how this works.

As we mentioned, the first step to enable ingress routing is to install the ingress controller. It is typically deployed as pods inside the cluster through a Deployment or a DaemonSet object. You can find an example of installing the NGINX ingress controller here:

https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/

Some ingress controllers also support being deployed outside the cluster, but this requires additional configuration for enabling routing through BGP and BIRD.

You can check this article for more information: https://www.haproxy.com/blog/run-the-haproxy-kubernetes-ingress-controller-outside-of-your-kubernetes-cluster/ .

But for now let’s stick with the traditional example of the inside-cluster deployment.

After the controller has been deployed, it is time to create the ingress resource. This ingress resource updates the controller with the rules to be applied to the traffic.

Like any other Kubernetes object, it is created and configured using a manifest file.

Example ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "*.foo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: service2
            port:
              number: 80

YAML file source: https://kubernetes.io/docs/concepts/services-networking/ingress/

Inside the ingress we can define:

  • The host to apply the rules to, e.g., “example.com” or “other-example.org”.
  • A specific path, e.g., “example.com/path/to/shop”.
  • Whether to use HTTP or HTTPS.
  • The service and the port to route the traffic to when it matches these rules.

Here we want traffic destined for foo.bar.com/bar to be routed to the service named service1 on port 80.

So when applying this ingress, if a request hits the ingress controller with destination host foo.bar.com and path /bar, the controller will forward this traffic to the service running on the cluster with name service1 on port 80. Then it’s the service that forwards this traffic to the pods behind it, as usual.
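The HTTPS case is handled through a tls section in the same manifest. A minimal sketch, assuming the certificate and key for foo.bar.com are stored in a Kubernetes Secret named foo-bar-tls (a hypothetical name):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-tls
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-bar-tls  # hypothetical Secret of type kubernetes.io/tls holding tls.crt and tls.key
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80

The ingress controller terminates TLS using that certificate and then routes the decrypted traffic according to the rules.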

You can see now how much flexibility ingress rules give us. Instead of relying only on an IP and a port to load balance the traffic, the controller can load balance based on content in the HTTP headers. We can create smarter, more advanced load balancing rules.

The ingress controller makes it possible to have dozens of services running inside your cluster while redirecting traffic to each one using only a single external-facing IP.

But wait! Didn't we mention previously that the ingress controller pods are deployed inside the cluster? So how does the traffic reach the ingress controller in the first place?

Well, simply through a LoadBalancer service.

The idea is that you want to expose your ingress controller itself to the outside network. This way, it can receive external traffic (just as we discussed in the LoadBalancer part of the article).

But this time the difference is that you only expose a single service through the LoadBalancer, which is the ingress controller service. Then it’s up to this service to route the traffic to other backend services in the cluster.

And this way, you can use a single load balancer IP to forward traffic to all your services. And it’s the responsibility of the ingress controller to decide where to direct traffic. Based on the rules it has, and how the content matches those rules, it will decide to which service it should forward that traffic.

So the Path now is : Client 🡪 LoadBalancer 🡪 Ingress controller 🡪 Other services 🡪 Pod

Let’s summarize this flow with a simple example.

You have two services running in your cluster. The backend pods for the first and second service serve traffic for app.test.com and app.prod.com, respectively. Both services receive traffic on port 8080 and forward it to their pods.

Now you want to expose these services outside the cluster to allow external clients to reach them. So you are about to configure an ingress.

First you deploy an ingress controller into your cluster, which is basically a set of pods as part of a deployment. These pods also have their own cluster service – let’s call it ingress-service.

Now you start creating your ingress object. You add two rules inside this ingress: one to forward traffic destined for the app.test.com host to service 1 on port 8080, and another to forward traffic destined for app.prod.com to service 2 on port 8080, as sketched below.
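Here is what that ingress might look like, using service1 and service2 as hypothetical names for the two services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress         # hypothetical name
spec:
  rules:
  - host: "app.test.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service1   # hypothetical name of the first service
            port:
              number: 8080
  - host: "app.prod.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service2   # hypothetical name of the second service
            port:
              number: 8080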

The ingress controller continuously monitors the cluster for newly created or updated ingress objects. Once it finds an ingress rule, it configures itself accordingly and starts applying the rule to the traffic.

In our example the ingress controller will detect the ingress rules created for the app.test.com and app.prod.com hosts and start applying them.

So whenever it sees traffic with a destination host app.test.com it will redirect it to service 1 on port 8080. And the same applies for service 2 with host app.prod.com.

The remaining part now is that you want your ingress controller itself to be able to receive traffic from the external clients to match it against these rules.

So you expose the ingress-service itself as a service of type LoadBalancer, receiving traffic on port 80 and forwarding it to the ingress controller pods.
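A minimal sketch of that service, assuming the controller pods carry the hypothetical label app: ingress-controller and listen on port 80:

apiVersion: v1
kind: Service
metadata:
  name: ingress-service      # the name we gave it earlier
spec:
  type: LoadBalancer
  selector:
    app: ingress-controller  # hypothetical label on the ingress controller pods
  ports:
  - port: 80                 # port the external load balancer receives traffic on
    targetPort: 80           # port the controller pods listen on (assumed)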

What happens now is that the cloud provider provisions a new external load balancer with a public IP address, configured to forward traffic to the ingress-service.

So when traffic reaches the load balancer at <Loadbalancer-IP>:80, it is redirected to the ingress-service. Next, the ingress controller checks the contents of the traffic, applies a matching rule if one is found, and forwards the traffic to service 1 or service 2. Finally, this traffic reaches the Pods.

Of course, you typically use a DNS service to resolve hostnames to IP addresses. In this scenario you want your app.test.com and app.prod.com to point to the load balancer IP. So traffic will be received on the load balancer and then follows the above flow.

Hopefully, this clears up any questions you might have about Kubernetes Ingress. If something is still unclear, we have an awesome course for absolute Kubernetes beginners. If you intend to become a master, check out this Kubernetes learning path that will teach you everything you might want to know about this technology.