Welcome to our tutorial on Kubernetes NGINX Ingress! In this tutorial, we'll explore the NGINX Ingress controller and discuss 13 useful configuration options that can enhance your Kubernetes deployments.
Also, you don't need to be an expert to follow along. Let's dive in and discover how NGINX Ingress can optimize your Kubernetes applications!
What is Ingress?
Ingress is a Kubernetes resource that provides a way to configure an HTTP or HTTPS load balancer for routing external traffic to services within a Kubernetes cluster.
It acts as an entry point or gateway for external clients to access the services running inside the cluster.
To use Ingress, you must have an Ingress controller deployed in your Kubernetes cluster.
Basically, it is responsible for implementing the actual load balancing and routing logic based on the Ingress resource configuration.
There are actually several Ingress controllers available, such as the NGINX Ingress Controller (used in this tutorial), Traefik, and HAProxy Ingress; compare their pros and cons and choose what suits your business needs.
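To make this concrete, here is a minimal Ingress resource sketch. The hostname and Service name (example.com, web-svc) are hypothetical placeholders, not values from this tutorial:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress            # hypothetical name
spec:
  ingressClassName: nginx          # tells the cluster which controller handles this Ingress
  rules:
  - host: example.com              # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc          # hypothetical Service in the same namespace
            port:
              number: 80
```

The Ingress object itself is just configuration; no traffic is routed until an Ingress controller reads it and programs a load balancer accordingly.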
Also Read: Consul vs Linkerd vs Istio - A Service Meshes Comparison
What is Ingress Controller?
An Ingress Controller operates in a cluster and configures an HTTP load balancer in accordance with Ingress resources.
The load balancer may be a software load balancer operating within the cluster or an external hardware or cloud load balancer.
Different implementations of the Ingress Controller are required for various load balancers.
When using NGINX, the load balancer and Ingress controller are both deployed in a pod.
Now, let's see some practical configuration options of NGINX Ingress briefly.
Host-Based Routing: You can route traffic based on the host header of incoming requests.
Path-Based Routing: It supports routing based on the path of incoming requests.
SSL Termination: It enables you to handle SSL/TLS encryption for your applications. It can terminate SSL connections, decrypt the traffic, and forward it to the appropriate backend services.
Load Balancing: It offers built-in load-balancing capabilities. You can distribute traffic evenly across multiple backend services.
Rate Limiting: It allows you to set rate limits on incoming requests.
Authentication and Authorization: You can secure your applications using NGINX Ingress by implementing authentication and authorization mechanisms such as OAuth and LDAP.
Request Rewrites: It enables you to modify incoming request URLs using rewrite rules.
Custom Error Pages: You have the flexibility to customize error pages returned by NGINX Ingress.
Connection and Request Timeouts: It allows you to configure timeouts for connections and requests.
Secure Headers: It enables you to add security-related HTTP headers to your responses.
Session Affinity: It supports session affinity, also known as sticky sessions.
Health Checks: It provides health check functionality to monitor the status of your backend services.
Global Rate Limiting: It allows you to set global rate limits to control the overall traffic coming into your cluster.
Also Read: Understanding Liveness Probes in Kubernetes - A Tutorial
The Ingress Controller at a High Level
There are basically two ways to customize NGINX:
1. ConfigMap
Using a ConfigMap is a common method for customizing NGINX in Kubernetes environments. With a ConfigMap, you can define global configurations for NGINX that are applied to all instances of NGINX in your cluster.
This allows you to set options such as worker processes, worker connections, timeouts, and more. By modifying the ConfigMap, you can update NGINX configurations without redeploying the application.
It basically allows you to decouple configuration artifacts from image content to keep containerized applications portable.
The ConfigMap API resource stores configuration data as key-value pairs; this data provides the configuration for the nginx-controller.
Here's an example (note: insecure protocols like SSLv2 should be avoided in practice, so modern TLS versions are shown here):
data:
  map-hash-bucket-size: "128"
  ssl-protocols: "TLSv1.2 TLSv1.3"
The keys and values in a ConfigMap can only be strings. This means that if we want a boolean value, we need to quote it, like "true" or "false". The same goes for numbers, like "100".
Some common configuration options are:
- add-headers
- allow-backend-server-header
- allow-snippet-annotations
- annotation-value-word-blocklist
- hide-headers
- access-log-params
- access-log-path
- http-access-log-path
- stream-access-log-path
- enable-access-log-for-default-backend
- error-log-path
- enable-modsecurity
- enable-owasp-modsecurity-crs
Read more about these configurations here.
2. Annotations
Next, annotations are used specifically in the context of Kubernetes Ingress resources. As we already learned above, an Ingress is a Kubernetes API object that manages external access to the services running within a cluster.
Now, by using annotations, we can define specific NGINX configurations for a particular Ingress rule. This approach allows you to fine-tune NGINX behavior for your application (one or more) or endpoint(s) exposed through the Ingress.
In short, use this if you want a specific configuration for a particular Ingress rule.
Here, too, the key-value pairs can only be strings. Other types, like boolean or numeric values, must be quoted, i.e. "true", "false", "100".
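As a small sketch of the annotation approach, the following applies the ingress-nginx proxy-body-size option to a single Ingress; the resource name and size value are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uploads-ingress                              # hypothetical name
  annotations:
    # applies only to this Ingress, unlike a ConfigMap setting,
    # which would apply to every Ingress handled by the controller
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
spec:
  ...
```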
We have discussed annotations in more detail later in this article.
Also, you can read more about these annotations in the docs.
Also Read: How to Install & Setup Kubernetes Dashboard?
Revisiting Kubernetes NGINX Ingress
In the world of software and web applications (and AI as well), we often have multiple services running simultaneously, sometimes eating up each other's resources if not properly configured.
These services could be things like your website, a MongoDB database, or an API.
So here Kubernetes is a tool that helps manage and orchestrate these services, making sure they run well and your services don't go down often.
Now, imagine you have several of these services running in your Kubernetes cluster, and you want to make them accessible to users from the internet.
What would you do?
This is where NGINX Ingress comes into play.
NGINX Ingress is like a traffic controller for your Kubernetes cluster. It acts as an entry point for external traffic coming into your cluster.
It also helps route requests to the appropriate services based on things like the URL or domain name (load balancers come into the picture later).
Think of it this way: when someone tries to access your website or use your application, their request goes through NGINX Ingress first.
NGINX Ingress looks at the incoming request and checks its rules (that you defined in some .yaml file) to determine which service should handle the request.
For example, let's say you have two services running namely a web application and an API.
The web application should handle requests like "www.unyamlrocks.com" while the API should handle requests like "api.unyamlrocks.com".
Got the difference?
NGINX Ingress can be configured to route requests with "www.unyamlrocks.com" to the web application service and requests with "api.unyamlrocks.com" to the API service.
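Sketching the www/api split described above, such a routing rule might look like this; the Service names (webapp-svc, api-svc) are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site-ingress                  # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: www.unyamlrocks.com         # web application traffic
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc          # hypothetical web app Service
            port:
              number: 80
  - host: api.unyamlrocks.com         # API traffic
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc             # hypothetical API Service
            port:
              number: 80
```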
It also provides additional features like SSL termination (which enables secure communication over HTTPS), load balancing (which distributes incoming traffic among multiple instances of your services), and request routing based on different rules and conditions.
Also Read: How to Cleanup Docker Resources?
Top 13 Useful Configuration Options for Kubernetes NGINX Ingress
1. NGINX Ingress Timeout Settings
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "30"
spec:
  ...
In the above example, the proxy-read-timeout and proxy-send-timeout annotations are set to "30".
These values determine the maximum time the NGINX proxy will wait for a response from the upstream server (read timeout) and the maximum time it will wait while sending a request to the upstream server (send timeout). Pretty neat.
In NGINX itself, the proxy connect timeout defaults to 60 seconds and cannot exceed 75 seconds.
Note that these ingress-nginx annotations take plain numbers interpreted as seconds, so use "30" for a 30-second timeout rather than "30s".
2. NGINX Ingress gRPC Support
To support a gRPC application with the NGINX Ingress Controller (the nginx.org implementation from NGINX, Inc.), you need to add the nginx.org/grpc-services annotation to your Ingress resource definition.
Prerequisites
- HTTP/2 must be enabled. See the http2 key in the ConfigMap.
- Ingress resources for gRPC applications must include TLS termination.
Syntax
nginx.org/grpc-services: "service1[,service2,...]"
Example
In the following example, we load balance three applications, one of which is using gRPC:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc-svc"
spec:
  ...
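Since the prerequisites call for TLS termination, a fuller sketch might combine the annotation with a tls section. The hostname, secret name, and port below are hypothetical placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc-svc"   # marks grpc-svc as a gRPC backend
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grpc.example.com                    # hypothetical host
    secretName: grpc-tls-secret           # hypothetical TLS secret
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grpc-svc
            port:
              number: 50051               # typical gRPC port, for illustration
```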
3. Default Backend in NGINX Ingress
The default backend in NGINX Ingress is a fallback for requests that do not match any defined Ingress rules.
It ensures that all incoming requests are handled, even if they don't have a specific Ingress configuration.
The default backend exposes two URLs with the following behaviors:
- /healthz: This URL is used for health checks. It typically returns an HTTP 200 status code, indicating that the backend is healthy and ready to handle requests.
- / (root path): When a request doesn't match any Ingress rules, the default backend returns a 404 Not Found status code for the root path (/).
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/default-backend: <svc name>
spec:
  ...
4. Support for HTTP Basic Authentication
Here's an example that adds basic authentication to an Ingress rule using a secret. (The nginx.org annotations below apply to the NGINX Inc. Ingress Controller.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-master
  annotations:
    nginx.org/mergeable-ingress-type: "master"
    nginx.org/basic-auth-secret: "tea-passwd"
    nginx.org/basic-auth-realm: "Tea"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
5. Support for JSON Web Tokens (JWTs)
NGINX Plus supports validating JWTs with ngx_http_auth_jwt_module.
The Ingress Controller provides the following four annotations for configuring JWT validation:
- nginx.com/jwt-key (required)
- nginx.com/jwt-realm: "realm" (optional)
- nginx.com/jwt-token: "token" (optional)
- nginx.com/jwt-login-url: "url" (optional)
In the following example, we enable JWT validation for the cafe-ingress Ingress for all paths using the same key cafe-jwk.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.com/jwt-key: "cafe-jwk"
    nginx.com/jwt-realm: "Cafe App"
    nginx.com/jwt-token: "$cookie_auth_token"
    nginx.com/jwt-login-url: "https://login.example.com"
spec:
  ...
6. NGINX Ingress - WWW Redirect
To enable a WWW redirect for NGINX Ingress, you can use the nginx.ingress.kubernetes.io/from-to-www-redirect annotation with the value "true".
Here's an example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  ...
7. NGINX Ingress SSL Redirect
To enable an SSL redirect for NGINX Ingress, you can use the nginx.ingress.kubernetes.io/ssl-redirect annotation with the value "true".
Here's an example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/preserve-trailing-slash: "true"
spec:
  ...
Also Read: Understanding Kubectl Config Set Context
8. NGINX Ingress - CORS
To enable CORS (Cross-Origin Resource Sharing) for NGINX Ingress, you can use the nginx.ingress.kubernetes.io/enable-cors annotation with the value "true".
Here's an example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO"
    nginx.ingress.kubernetes.io/cors-expose-headers: "*, X-CustomResponseHeader"
    nginx.ingress.kubernetes.io/cors-max-age: "600"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "false"
spec:
  ...
9. NGINX Ingress - Rate Limiting
NGINX Ingress supports rate limiting through the nginx.ingress.kubernetes.io/limit-rps annotation, which limits requests per second.
Here's an example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-rpm: "100"
    nginx.ingress.kubernetes.io/limit-connections: "9"
spec:
  ...
10. Custom Max Body Size
When a request's body exceeds the allowed client request body size, NGINX returns a 413 (Request Entity Too Large) error to the client. The underlying NGINX parameter, client_max_body_size, defaults to 1m (1 megabyte).
With the Ingress NGINX Controller, proxy-body-size can be configured globally for all Ingress rules via the ConfigMap, or per Ingress via an annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ...
11. NGINX Ingress - Whitelist Source Range
Here, we specify the allowed client IP source ranges via the nginx.ingress.kubernetes.io/whitelist-source-range annotation.
The value is a comma-separated list of CIDRs (Classless Inter-Domain Routing blocks).
Let's look at an example showcasing this.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.0/24,10.0.0.0/16"
spec:
  ...
12. Enable Access Log
Access logs are enabled by default, but in some circumstances it may be necessary to disable them for a specific Ingress. Use the following NGINX Ingress annotation example. (The log path itself is configured globally through the ConfigMap's access-log-path key, not per Ingress.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "false"
spec:
  ...
13. Backend Protocol
The backend protocol annotation specifies how NGINX should communicate with the backend service. (It replaces the secure-backends annotation used in earlier versions.) HTTP, HTTPS, GRPC, GRPCS, and FCGI are all acceptable values.
HTTP is the default. Let's look at an example of this NGINX Ingress configuration option.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ...
Replace example-ingress with the actual name of your Ingress resource.
Custom Annotations - NGINX Ingress
Custom annotations enable you to quickly extend the Ingress resource to support many advanced features of NGINX, such as rate limiting, caching, etc.
Let's create a set of custom annotations to support rate-limiting:
- custom.nginx.org/rate-limiting - enables rate limiting.
- custom.nginx.org/rate-limiting-rate - configures the rate, with a default of 1r/s.
- custom.nginx.org/rate-limiting-burst - configures the maximum burst size of requests, with a default of 3.
Step 1: Customize the Template
Customize the template for Ingress resources to include the logic to handle and apply the annotations.
Create a ConfigMap file with the customized template (nginx-config.yaml).
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  ingress-template: |
    ...
    # handling custom.nginx.org/rate-limiting and custom.nginx.org/rate-limiting-rate
    {{if index $.Ingress.Annotations "custom.nginx.org/rate-limiting"}}
    {{$rate := index $.Ingress.Annotations "custom.nginx.org/rate-limiting-rate"}}
    limit_req_zone $binary_remote_addr zone={{$.Ingress.Namespace}}-{{$.Ingress.Name}}:10m rate={{if $rate}}{{$rate}}{{else}}1r/s{{end}};
    {{end}}
    ...
    {{range $server := .Servers}}
    server {
        ...
        {{range $location := $server.Locations}}
        location {{$location.Path}} {
            ...
            # handling custom.nginx.org/rate-limiting and custom.nginx.org/rate-limiting-burst
            {{if index $.Ingress.Annotations "custom.nginx.org/rate-limiting"}}
            {{$burst := index $.Ingress.Annotations "custom.nginx.org/rate-limiting-burst"}}
            limit_req zone={{$.Ingress.Namespace}}-{{$.Ingress.Name}} burst={{if $burst}}{{$burst}}{{else}}3{{end}} nodelay;
            {{end}}
The customization above consists of two parts:
- handling the custom.nginx.org/rate-limiting and custom.nginx.org/rate-limiting-rate annotations in the http context.
- handling the custom.nginx.org/rate-limiting and custom.nginx.org/rate-limiting-burst annotations in the location context.
For brevity, the parts of the template that are not important for this example are replaced with ...
Apply the customized template:
kubectl apply -f nginx-config.yaml
Also Read: A Complete List of Kubectl Commands
Step 2: Use Custom Annotations in an Ingress Resource
Create a file with the following Ingress resource (cafe-ingress.yaml) and use the custom annotations to enable rate-limiting:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    custom.nginx.org/rate-limiting: "on"
    custom.nginx.org/rate-limiting-rate: "5r/s"
    custom.nginx.org/rate-limiting-burst: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "cafe.example.com"
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
Finally, apply the Ingress resource:
kubectl apply -f cafe-ingress.yaml
Final Words on NGINX Ingress Configuration Options
Glad you made it this far into this tutorial. We've explored 13 useful configuration options available in NGINX Ingress for Kubernetes.
By leveraging these options, you can significantly enhance your application deployment and management in Kubernetes. We really hope this tutorial has provided you with a clear and approachable introduction to NGINX Ingress and its capabilities.
Now, it's time for you to try them out and unlock the full power of NGINX Ingress in your Kubernetes environment!
Happy Ingressing!