
NGINX Ingress with AWS ALB

What happens to traffic when terminating SSL at the AWS Application Load Balancer?

When terminating SSL/TLS (Secure Sockets Layer/Transport Layer Security) at the AWS Application Load Balancer (ALB), incoming HTTPS traffic is decrypted at the ALB before being forwarded to the backend targets, typically as plain HTTP (the ALB can also re-encrypt traffic to targets that listen on HTTPS). This process is also known as SSL termination or SSL offloading.
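As a concrete illustration, when the ALB is created from Kubernetes by the AWS Load Balancer Controller, termination at the ALB is usually expressed as Ingress annotations. The following is a minimal sketch under that assumption; the certificate ARN, host name, and service name are placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-at-the-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Terminate TLS on the ALB listener with an ACM certificate (placeholder ARN)
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:123456789012:certificate/EXAMPLE
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # Forward the decrypted traffic to the targets as plain HTTP
    alb.ingress.kubernetes.io/backend-protocol: HTTP
spec:
  ingressClassName: alb
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80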

The benefits of SSL termination at the ALB include:

  1. Reduced Load on Backend Targets: By offloading the SSL processing to the ALB, backend targets receive decrypted traffic, which reduces their computational load.

  2. Centralized Certificate Management: Managing SSL certificates for multiple backend targets can be challenging. Termination at the ALB allows for centralized certificate management, which simplifies certificate deployment and management.

  3. Enhanced Security: The ALB can perform various security-related functions, such as access control, web application firewall (WAF), and SSL termination. By terminating SSL at the ALB, security-related tasks can be performed before traffic reaches the backend targets.

However, there are also some drawbacks to SSL termination at the ALB, including:

  1. Reduced Visibility: Because traffic is decrypted at the ALB, backend targets never see the original TLS session (for example the negotiated protocol version, cipher, or any client certificate), which can make TLS-related issues harder to troubleshoot.

  2. Increased Latency: Performing the TLS handshake and decryption at the ALB adds a small amount of processing latency, although this is usually negligible except at very high traffic volumes.

What if the service has its own certificate?

If the backend service has its own SSL/TLS certificate, the ALB can still terminate the client’s TLS session and then open a new TLS connection to the target, re-encrypting the traffic on its way to the backend (sometimes called SSL bridging or end-to-end encryption). Note that the ALB does not validate the target’s certificate, so self-signed certificates work for this hop. This is different from SSL/TLS passthrough, in which the load balancer never decrypts the traffic and the backend service keeps full control over its SSL/TLS configuration.
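With the AWS Load Balancer Controller, re-encryption to targets that present their own certificate is mostly a matter of switching the backend protocol annotation; the snippet below assumes the pods serve TLS themselves and uses the controller’s documented annotation name.

  annotations:
    # The ALB still terminates the client's TLS session, then opens a new
    # TLS connection to the target; the target's certificate is not validated.
    alb.ingress.kubernetes.io/backend-protocol: HTTPS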

What if the service doesn’t use HTTPS?

If the application does not use HTTPS at all, that is, clients connect to the load balancer over plain HTTP, there is no SSL/TLS traffic to terminate. The ALB can still be used to distribute traffic to the backend service, but it performs no SSL/TLS-related functions and simply forwards the traffic as plain HTTP. (If only the backend lacks HTTPS while clients still connect over HTTPS, that is exactly the SSL-offloading scenario described above: the ALB terminates TLS and forwards plain HTTP to the targets.)

TLS Passthrough Overview

TLS Passthrough is a configuration in which SSL/TLS is terminated at the backend targets instead of at the load balancer. With TLS Passthrough, the load balancer forwards the encrypted SSL/TLS traffic to the backend targets without decrypting it, so the targets handle the SSL/TLS decryption and processing themselves. On AWS this requires a load balancer that can forward raw TCP, such as a Network Load Balancer (NLB); an Application Load Balancer (ALB) cannot do passthrough, because its HTTPS listeners always terminate TLS.

TLS Passthrough is useful when the backend targets require access to the SSL/TLS information for processing, such as for client certificate authentication or session resumption. By forwarding the encrypted SSL/TLS traffic to the backend targets, they have full access to the SSL/TLS information and can perform SSL/TLS-related functions as needed.

Configuring TLS Passthrough means configuring the load balancer to forward SSL/TLS traffic to the backend targets without decrypting it. In practice this requires a listener that speaks plain TCP (for example on an NLB) rather than an HTTPS listener, since an HTTPS listener always terminates TLS at the load balancer. Additionally, the backend targets must be configured to handle the SSL/TLS traffic themselves, including installing SSL/TLS certificates and configuring the web server software to serve TLS.
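In a Kubernetes cluster on AWS, a common way to obtain passthrough behaviour is to expose the TLS-capable workload through a Service of type LoadBalancer that provisions an NLB with a plain TCP listener. The following is a minimal sketch assuming the AWS Load Balancer Controller manages the NLB; names and ports are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: tls-passthrough-svc
  annotations:
    # Request an NLB managed by the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: my-tls-app
  ports:
  - name: https
    protocol: TCP      # plain TCP listener: the NLB forwards the encrypted bytes as-is
    port: 443
    targetPort: 8443   # the pods terminate TLS themselves on this port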

It is important to note that using TLS Passthrough adds complexity to the configuration and deployment of the backend targets, as they are responsible for SSL/TLS decryption and certificate management. It also means the load balancer cannot inspect the traffic, so layer-7 features such as path-based routing, WAF inspection, or cookie-based session stickiness are unavailable. TLS Passthrough is therefore usually only worthwhile when the backend targets genuinely need access to the SSL/TLS session, for example for client certificate authentication.

Can you have a single LB configured for TLS passthrough for one backend target but terminate at the load balancer for another?

Not with an ALB alone. An ALB HTTPS listener always terminates TLS, so a single ALB cannot terminate for one target group while passing still-encrypted traffic through to another. There are, however, a couple of ways to get this mixed behaviour on AWS.

With a Network Load Balancer (NLB), you can attach a TLS listener (which terminates using a certificate, typically from ACM) on one port and a plain TCP listener (which passes the encrypted bytes through) on another port, each forwarding to its own target group. The trade-off is that an NLB operates at layer 4, so the choice between termination and passthrough is made per listener port, not by URL path or host header.

Alternatively, you can terminate at the ALB for most services and put a layer-7 proxy that supports passthrough behind a separate load balancer for the rest. The NGINX Ingress controller, for example, supports SNI-based passthrough (the --enable-ssl-passthrough controller flag together with the nginx.ingress.kubernetes.io/ssl-passthrough annotation), which lets it route encrypted connections by host name without decrypting them.

Whichever approach you choose, mixing terminated and passthrough traffic adds configuration and operational complexity, since you are effectively running two TLS models side by side. The benefit is flexibility: services that need end-to-end TLS or client certificate authentication keep full control of their TLS sessions, while the remaining services still get centralized certificate management at the load balancer.
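For the NGINX Ingress option, a passthrough host can be declared as below. This is a sketch that assumes the controller was started with --enable-ssl-passthrough; the host and service names are placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-service
  annotations:
    # NGINX inspects only the TLS SNI and forwards the still-encrypted stream
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: secure.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mtls-backend
            port:
              number: 8443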

How does traffic get routed to different Kubernetes services using path-based routing with an AWS ALB and NGINX Ingress?

When using an AWS Application Load Balancer (ALB) and an NGINX Ingress controller for Kubernetes, path-based routing can be used to route traffic to different Kubernetes services based on the URL path.

Path-based routing allows you to map different URL paths to different Kubernetes services, providing a way to implement microservices architectures and multi-tenancy scenarios.

Here’s how traffic gets routed to different Kubernetes services using path-based routing with an AWS ALB and NGINX Ingress:

  1. The client sends a request to the ALB, specifying a URL path in the request.

  2. The ALB forwards the request to the NGINX Ingress controller pods, which are registered as targets of the ALB.

  3. The NGINX Ingress controller evaluates the URL path in the request and compares it to the path rules defined in the Ingress resource.

  4. If there is a match between the URL path and a path rule in the Ingress resource, the NGINX Ingress controller routes the request to the Kubernetes service associated with that path rule.

  5. If there is no match between the URL path and any path rule in the Ingress resource, the NGINX Ingress controller returns an HTTP 404 Not Found error to the client.

For example, suppose you have two Kubernetes services named “service-1” and “service-2” and you want to route traffic to these services based on the URL path. You can create an Ingress resource with two path rules, one for each service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /service-1
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              name: http
      - path: /service-2
        pathType: Prefix
        backend:
          service:
            name: service-2
            port:
              name: http

With this Ingress resource, any request with a URL path starting with “/service-1” will be routed to the “service-1” Kubernetes service, and any request with a URL path starting with “/service-2” will be routed to the “service-2” Kubernetes service.

What is the difference between host-based, path-based, and header-based routing for HTTP traffic?

  1. Host-based routing: Host-based routing is a method of routing traffic based on the domain name specified in the request’s “Host” header. When using host-based routing, different domain names can be mapped to different backend services. For example, you can route traffic for “example.com” to one backend service and traffic for “api.example.com” to a different backend service. Host-based routing is useful when you want to host multiple applications on the same IP address and port (see the sketch after this list).

  2. Path-based routing: Path-based routing is a method of routing traffic based on the URL path specified in the request. When using path-based routing, different URL paths can be mapped to different backend services. For example, you can route traffic for “/api/v1” to one backend service and traffic for “/web” to a different backend service. Path-based routing is useful when you want to implement microservices architectures or multi-tenant scenarios.

  3. Header-based routing: Header-based routing is a method of routing traffic based on the value of a specific HTTP header in the request. When using header-based routing, different header values can be mapped to different backend services. For example, you can route traffic for requests that include a specific custom header value to one backend service and traffic for requests that include a different custom header value to a different backend service. Header-based routing is useful when you want to implement more complex routing scenarios based on custom header values.
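As a concrete illustration of host-based routing with the same Ingress API used earlier, the sketch below maps two host names to two services; all names are placeholders. Header-based routing, by contrast, is not expressible in a plain Ingress resource and usually requires controller-specific annotations or a service mesh.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing
spec:
  ingressClassName: nginx
  rules:
  - host: example.com            # traffic for example.com ...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-frontend   # ... goes to one backend service
            port:
              number: 80
  - host: api.example.com        # traffic for api.example.com ...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-backend    # ... goes to a different one
            port:
              number: 80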

Comparing Istio + AWS LB & NGINX Ingress + AWS LB

Both Istio and NGINX Ingress can be used as ingress controllers for Kubernetes clusters, and both can be used with an AWS Application Load Balancer (ALB) to route traffic to different backend services.

Here are some differences between using Istio and AWS LB vs. NGINX Ingress and AWS LB:

  1. Traffic management: Istio provides advanced traffic management features such as intelligent routing, circuit breaking, load balancing, and service mesh observability. It also has built-in support for mTLS (Mutual Transport Layer Security) and can provide advanced security features like JWT (JSON Web Token) authentication and authorization (a short example follows this list). NGINX Ingress, on the other hand, provides basic traffic management features such as path-based routing and SSL termination.

  2. Learning curve: Istio has a steeper learning curve than NGINX Ingress, as it provides a more complex set of features and requires a higher level of expertise to configure and manage. NGINX Ingress, on the other hand, is easier to set up and configure and has a more straightforward configuration model.

  3. Performance: Istio provides a high level of observability and control over network traffic, but it also comes with additional overhead due to its service mesh architecture. NGINX Ingress, on the other hand, is lightweight and provides faster performance, but with fewer features.

  4. Open source vs commercial: Istio is an open source project backed by Google and other companies, while “NGINX Ingress” can refer either to the community ingress-nginx project maintained by the Kubernetes community or to NGINX Inc.’s own ingress controller, which also has a commercial edition based on NGINX Plus.
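To give a feel for the “intelligent routing” mentioned under traffic management above, here is a minimal Istio VirtualService sketch that sends requests carrying a particular header to a canary subset; the service, header, and subset names are placeholders and assume matching DestinationRule subsets exist.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-routing
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        x-canary:
          exact: "true"        # requests with this header go to the canary subset
    route:
    - destination:
        host: reviews
        subset: canary
  - route:
    - destination:
        host: reviews
        subset: stable         # all other requests go to the stable subset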

If you require advanced traffic management and security features, and are willing to invest the time and effort to learn and configure a more complex system, Istio with AWS LB may be a good choice. On the other hand, if you need a lightweight and straightforward ingress controller that is easy to set up and configure, NGINX Ingress with AWS LB may be a better fit.


Last update: 10 May 2023
Created: 10 May 2023