Ambassador Labs

Code, ship, and run apps for Kubernetes faster and easier than ever — powered by Ambassador’s industry-leading developer experience.


Implementation Guide

How to Implement HTTP/3 Support with Ambassador Edge Stack 3.0

Edge Stack 3 includes native HTTP/3 support powered by Envoy Proxy

Tenshin Higashi · Published in Ambassador Labs · 12 min read · Jun 29, 2022


Ambassador Labs is excited to announce support for HTTP/3 for our Emissary-ingress and Edge Stack products! With minimal configuration, you can now get all the benefits of HTTP/3 for your application running in Kubernetes. This article provides a 5-minute getting started guide for anyone keen to get hands-on with HTTP/3.

The Hypertext Transfer Protocol, more commonly known as HTTP, is a protocol used to exchange data on the world wide web. HTTP/3 is the latest evolution of this protocol, and it has been designed to reduce latency and increase resilience when compared to the existing HTTP/1 and HTTP/2, especially over lossy networks such as mobile connections and those seen in IoT and emerging market use cases.

HTTP/3 is already supported by 73% of browsers in use and over 25% of websites. This article explores some of the history of the protocol, and we’ve also conducted preliminary benchmark tests to compare HTTP/3 against previous versions of the protocol.

What is HTTP/3?

HTTP/3 is the next major version of the Hypertext Transfer Protocol that uses QUIC and UDP (rather than TCP) to provide a low-latency and resilient network connection for the World Wide Web.

In order to understand why HTTP/3 is so important, it helps to understand where it came from. HTTP/1.0 was originally standardized in 1996 and required a separate Transmission Control Protocol (TCP) connection to be made to the same server for every request. In 1997 the protocol evolved to HTTP/1.1, which introduced keep-alive connections that allowed a TCP connection to be reused for multiple requests. Removing the requirement to establish a new connection for every request reduced latency. However, this still wasn’t ideal: despite sharing a connection, requests still had to be executed one by one.

HTTP/2, released in 2015, reduced latency further compared to HTTP/1.1 by introducing streams, which allow multiple requests to be transmitted over the same connection at the same time. However, HTTP/2 still has flaws: for example, all requests are equally impacted by packet loss even if only one request loses data, an issue known as head-of-line blocking. It soon became obvious that one of the major flaws of HTTP to this point was that it still used TCP as its foundation.

Compared with HTTP/2, HTTP/3 brings many benefits, the main one being that it replaces TCP with QUIC, which is built on top of the User Datagram Protocol (UDP). QUIC streams share the same connection, yet are delivered independently. This allows for much more flexibility compared to TCP and eliminates the head-of-line blocking issue.
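The head-of-line difference is easy to see in a toy model. The sketch below is purely illustrative (made-up packet labels, no real QUIC or TCP involved): six packets carry data for three streams A, B, and C, and packet B:0 is lost in transit. On TCP’s single ordered byte stream, nothing after the loss is delivered until retransmission; with independent QUIC streams, only stream B is held back.

```shell
packets="A:0 B:0 C:0 A:1 B:1 C:1"
lost="B:0"

# TCP (HTTP/2): a single ordered byte stream, so delivery halts at the gap.
tcp_delivered=""
for p in $packets; do
  [ "$p" = "$lost" ] && break
  tcp_delivered="$tcp_delivered $p"
done

# QUIC (HTTP/3): streams are independent, so only the lossy stream is blocked.
quic_delivered=""
blocked=""
for p in $packets; do
  stream=${p%%:*}
  if [ "$p" = "$lost" ]; then
    blocked="$blocked $stream"
    continue
  fi
  case " $blocked " in
    *" $stream "*) ;;                          # this stream has a gap: hold back
    *) quic_delivered="$quic_delivered $p" ;;
  esac
done

echo "TCP delivered: $(echo $tcp_delivered)"
echo "QUIC delivered: $(echo $quic_delivered)"
```

In this toy run TCP delivers only A:0 before stalling, while QUIC still delivers every packet for streams A and C.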

HTTP/3 support with Ambassador Edge Stack and Emissary-ingress

Although many organizations are just beginning to roll out HTTP/3, we wanted to make sure that Ambassador Labs’ Edge Stack and Emissary-ingress products supported the new version of the protocol as soon as possible. With the finalization of the HTTP/3 RFC announced in June, the timing is perfect!

There are a lot of unique considerations to be made when it comes to HTTP/3 support. We’ve tested different ways to set up Edge Stack and Emissary-ingress for HTTP/3 and have converged on the methods listed below. As new methods develop, we’ll update the documentation with any changes.

Edge Stack and Emissary-ingress use the alt-svc response header to advertise to clients that they support HTTP/3. When a client sees the alt-svc header, it can then upgrade to HTTP/3 and connect to Edge Stack or Emissary-ingress over the QUIC protocol.

The QUIC protocol also requires Transport Layer Security (TLS) version 1.3 to communicate. Without TLS 1.3, the client falls back to HTTP/2 or HTTP/1.1, which support the older TLS versions. Because of this restriction, some clients also require that certificates be fully valid, which causes problems when you use self-signed certs. For example, the Chrome web browser will not upgrade the connection to HTTP/3 unless a valid certificate is present.
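To make the upgrade mechanism concrete, here is a rough shell sketch of the client-side check (illustrative only; real clients parse Alt-Svc per RFC 7838). It extracts the h3 authority from the default header value that Edge Stack and Emissary-ingress inject:

```shell
# The default Alt-Svc value injected by Edge Stack / Emissary-ingress.
ALT_SVC='h3=":443"; ma=86400, h3-29=":443"; ma=86400'

# Split the comma-separated alternatives and keep the authority advertised
# for the final "h3" protocol ID (draft IDs like h3-29 are ignored here).
h3_authority=$(printf '%s\n' "$ALT_SVC" | tr ',' '\n' \
  | sed -n 's/^[[:space:]]*h3="\([^"]*\)".*/\1/p')

echo "HTTP/3 advertised at: ${h3_authority:-none}"
```

A client that finds a non-empty h3 entry can retry the request over QUIC on the advertised port; an empty result means it should stay on HTTP/2 or HTTP/1.1.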

Setting up HTTP/3 with Edge Stack and Emissary-ingress

There are four main steps to configure HTTP/3 support on Edge Stack and Emissary-ingress:

  1. Configure Listener resources.
  2. Configure a Host.
  3. Make sure you have a valid certificate.
  4. Configure an external load balancer.

Step 1 — Configuring your Listener resources

To make Edge Stack and Emissary-ingress listen for HTTP/3 connections over the QUIC network protocol, you need to configure a Listener with TLS, HTTP, and UDP configured within protocolStack.

The order of the elements within the protocolStack is important and should be configured as ["TLS", "HTTP", "UDP"].

The Listener configured for HTTP/3 can be bound to the same address and port (0.0.0.0:8443) as the Listener that supports HTTP/2 and HTTP/1.1. This is not required, but it allows Edge Stack and Emissary-ingress to inject the default alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400 header into the responses returned over the TCP connection with no additional configuration needed.

# This is a standard Listener that leverages TCP to serve HTTP/2 and HTTP/1.1 traffic.
# It is bound to the same address and port (0.0.0.0:8443) as the UDP listener.
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-listener-8443
  namespace: emissary
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
# This is a Listener that leverages UDP and HTTP to serve QUIC and HTTP/3 traffic.
# NOTE: Raw UDP traffic is not supported. UDP and HTTP must be used together.
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-listener-8443-udp
  namespace: emissary
spec:
  port: 8443
  # Order is important here. UDP must be the last item, and HTTP is required.
  protocolStack:
    - TLS
    - HTTP
    - UDP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL

Step 2 — Configuring the Host

Because the QUIC network requires TLS, the certificate needs to be valid so that the client can upgrade a connection to HTTP/3. See the Host documentation for more information on how to configure TLS for a Host.

An example of the Host setup looks like this:

apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: my-domain-host
spec:
  hostname: "emissary-ingress.isawesome.com"
  # ACME isn't required; it is shown here as one way to manage a valid TLS cert.
  acmeProvider:
    email: emissary@emissary.io
    authority: https://acme-v02.api.letsencrypt.org/directory
  tls:
    # QUIC requires TLS 1.3. It is also recommended whenever clients support it,
    # so that you stay current with the latest security.
    min_tls_version: v1.3
    # Either protocol can be upgraded, but it is usually best to prefer HTTP/2 when possible.
    alpn_protocols: h2,http/1.1

Step 3 — Certificate verification

Clients can only upgrade to an HTTP/3 connection over TLS 1.3 with a valid certificate. If the client won’t upgrade to HTTP/3, verify that you have a valid TLS certificate rather than a self-signed one. For example:

apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: my-domain-host
spec:
  hostname: "emissary-ingress.isawesome.com"
  # ACME isn't required; it is shown here as one way to manage a valid TLS cert.
  acmeProvider:
    email: emissary@emissary.io
    authority: https://acme-v02.api.letsencrypt.org/directory
  tls:
    # QUIC requires TLS 1.3. It is also recommended whenever clients support it,
    # so that you stay current with the latest security.
    min_tls_version: v1.3
    # Either protocol can be upgraded, but it is usually best to prefer HTTP/2 when possible.
    alpn_protocols: h2,http/1.1
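A quick way to spot this problem is to compare a certificate’s subject and issuer: when they match, the certificate is almost certainly self-signed. The sketch below generates a throwaway self-signed cert with openssl purely to demonstrate the check; in practice you would run the same comparison against the certificate your Host actually serves.

```shell
# Demonstration only: create a throwaway self-signed certificate, then apply
# the same subject-vs-issuer check you would run on your real certificate.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -days 1 -nodes -subj "/CN=demo.example.com" 2>/dev/null

subject=$(openssl x509 -in /tmp/demo-cert.pem -noout -subject)
issuer=$(openssl x509 -in /tmp/demo-cert.pem -noout -issuer)

if [ "${subject#subject=}" = "${issuer#issuer=}" ]; then
  echo "self-signed certificate: clients will not upgrade to HTTP/3"
else
  echo "issuer differs from subject: likely CA-signed"
fi
```

To check a live endpoint instead, you could save its leaf certificate first, for example with `openssl s_client -connect <host>:443 </dev/null | openssl x509`.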

Step 4 — External load balancers setup

​The two most common service types to expose traffic outside of a Kubernetes cluster are:

  • LoadBalancer service type: The load balancer controller generates and manages the cloud provider-specific external load balancer.
  • NodePort service type: The platform administrator has to manually set up things like load balancers, firewall rules, and health checks. Even when you use NodePort, it is best practice to place a load balancer in front of the nodes.

The desired setup would be to configure a single service of type LoadBalancer, but this comes with some current restrictions.

First, you need a recent version of Kubernetes with the MixedProtocolLBService feature enabled. Second, your cloud service provider needs to support the creation of an external load balancer with mixed protocol types (TCP/UDP) and port reuse. Support for Kubernetes feature flags may vary between cloud service providers, so refer to your provider’s documentation to see if they support this scenario.

For context, a simple LoadBalancer configuration looks like the following:

# note: extra fields such as labels and selectors removed for clarity
apiVersion: v1
kind: Service
metadata:
  name: emissary-ingress
  namespace: emissary
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
    - name: http3
      port: 443
      targetPort: 8443
      protocol: UDP
  type: LoadBalancer

External cloud provider configuration

The final stage of the setup process for Google Kubernetes Engine (GKE) and ​​Azure Kubernetes Service (AKS) requires you to do the following:

  1. Reserve a public static IP address.
  2. Create two services of type LoadBalancer; one for TCP and one for UDP.
  3. Assign the loadBalancer IP of the two services to the public static IP address.

The load balancer, as described above, should look like the following:

# note: selectors and labels removed for clarity
apiVersion: v1
kind: Service
metadata:
  name: emissary-ingress
  namespace: emissary
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx # Enter your public static IP address here.
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: emissary-ingress-udp
  namespace: emissary
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx # Enter your public static IP address here.
  ports:
    - name: http3
      port: 443 # HTTP/3 requires you use 443 for the client-facing port.
      targetPort: 8443
      protocol: UDP

Based on the above example, your cloud provider generates two LoadBalancers, one for UDP and the other for TCP.

A note for AKS users: You may need to make sure that the Managed Identity or Service Principal has permissions to assign the IP address to the newly created LoadBalancers. See Azure Docs — Managed Identity for more information.

Bonus: An alternate external load balancer setup

Another option that doesn’t require you to pay for additional load balancers is to use a NodePort service as follows:

# note: extra fields such as labels and selectors removed for clarity
apiVersion: v1
kind: Service
metadata:
  name: emissary-ingress
  namespace: emissary
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
    - name: http3
      port: 443
      targetPort: 8443
      protocol: UDP

This approach exposes the traffic on a static port for each node in the cluster. Once you’ve created this service, you need to perform the following steps to finalize your setup:

  1. Create an external load balancer that sends UDP and TCP traffic to the nodes. (External load balancer configurations vary between cloud service providers. Refer to your provider’s documentation for more information.)
  2. Forward the client-facing ports to the NodePorts (80:30080 and 443:30443).
  3. Configure firewall/security-group rules to allow traffic between the load balancer and the cluster nodes.
  4. Configure health checks between the load balancer and the nodes behind the NodePort service.

Amazon Elastic Kubernetes Service HTTP/3 configuration

Create a network load balancer (NLB)

The virtual private cloud (VPC) for your load balancer needs one public subnet in each availability zone where you have targets.

SUBNET_IDS=(<your-subnet1-id> <your-subnet2-id> <your-subnet3-id>)
aws elbv2 create-load-balancer \
  --name ${CLUSTER_NAME}-nlb \
  --type network \
  --subnets ${SUBNET_IDS[@]}

Create a NodePort service

Now create a NodePort service for the Emissary-ingress installation with two entries on port 443: one for TCP and one for UDP traffic.

# note: extra fields such as labels and selectors removed for clarity
apiVersion: v1
kind: Service
metadata:
  name: emissary-ingress
  namespace: emissary
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
      nodePort: 30443
    - name: http3
      port: 443
      targetPort: 8443
      protocol: UDP
      nodePort: 30443

Create target groups

Run the following command with the variables for your VPC ID and cluster name:

VPC_ID=<your-vpc-id>
CLUSTER_NAME=<your-cluster-name>
aws elbv2 create-target-group --name ${CLUSTER_NAME}-tcp-tg \
  --protocol TCP --port 30080 --vpc-id ${VPC_ID} \
  --health-check-protocol TCP \
  --health-check-port 30080 \
  --target-type instance
aws elbv2 create-target-group --name ${CLUSTER_NAME}-tcp-udp-tg \
  --protocol TCP_UDP --port 30443 --vpc-id ${VPC_ID} \
  --health-check-protocol TCP \
  --health-check-port 30443 \
  --target-type instance

Register your instances

Next, register your cluster’s instances with the target groups, using the instance IDs and the target groups’ Amazon Resource Names (ARNs).

To get your cluster’s instance IDs, enter the following command:

aws ec2 describe-instances \
  --filters Name=tag:eks:cluster-name,Values=${CLUSTER_NAME} \
  --query 'Reservations[*].Instances[*].InstanceId' \
  --output text

To get your ARNs, enter the following command:

TCP_TG_NAME=${CLUSTER_NAME}-tcp-tg
TCP_UDP_TG_NAME=${CLUSTER_NAME}-tcp-udp-tg
aws elbv2 describe-target-groups \
  --query 'TargetGroups[?TargetGroupName==`'${TCP_TG_NAME}'`].TargetGroupArn' \
  --output text
aws elbv2 describe-target-groups \
  --query 'TargetGroups[?TargetGroupName==`'${TCP_UDP_TG_NAME}'`].TargetGroupArn' \
  --output text

Now register the instances with the target groups and load balancer using the instance IDs and ARNs you retrieved.

INSTANCE_IDS=(<Id=i-07826…> <Id=i-082fd…>) # instance IDs from the describe-instances command above
REGION=<your-region>
TG_NAME=<your-tg-name>
TCP_TG_ARN=arn:aws:elasticloadbalancing:${REGION}:079…..:targetgroup/${TG_NAME}/…
TCP_UDP_TG_ARN=arn:aws:elasticloadbalancing:${REGION}:079…..:targetgroup/${TG_NAME}/…
aws elbv2 register-targets --target-group-arn ${TCP_TG_ARN} --targets ${INSTANCE_IDS[@]}
aws elbv2 register-targets --target-group-arn ${TCP_UDP_TG_ARN} --targets ${INSTANCE_IDS[@]}

Create listeners in AWS

Now create the NLB listeners that forward TCP and UDP traffic to the target groups you registered.

To get the load balancer’s ARN, enter the following command:

aws elbv2 describe-load-balancers --name ${CLUSTER_NAME}-nlb \
  --query 'LoadBalancers[0].LoadBalancerArn' \
  --output text

Create a TCP listener on port 80 that forwards to the target group ${TCP_TG_ARN}.

aws elbv2 create-listener --load-balancer-arn ${LB_ARN} \
  --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=${TCP_TG_ARN}

Create a TCP_UDP listener on port 443 that forwards to the target group ${TCP_UDP_TG_ARN}.

aws elbv2 create-listener --load-balancer-arn ${LB_ARN} \
  --protocol TCP_UDP --port 443 \
  --default-actions Type=forward,TargetGroupArn=${TCP_UDP_TG_ARN}

Update the security groups

Now you need to update your security groups to receive traffic. This security group covers all node groups attached to the EKS cluster:

aws eks describe-cluster --name ${CLUSTER_NAME} | grep clusterSecurityGroupId

Then authorize the cluster security group to allow internet traffic:

for x in ${CLUSTER_SG}; do \
  aws ec2 authorize-security-group-ingress --group-id $x --protocol tcp --port 30080 --cidr 0.0.0.0/0; \
  aws ec2 authorize-security-group-ingress --group-id $x --protocol tcp --port 30443 --cidr 0.0.0.0/0; \
  aws ec2 authorize-security-group-ingress --group-id $x --protocol udp --port 30443 --cidr 0.0.0.0/0; \
done

Get the DNS name for the load balancers

Enter the following command to get the DNS name for your load balancers and create a CNAME record at your domain provider:

aws elbv2 describe-load-balancers --name ${CLUSTER_NAME}-nlb \
  --query 'LoadBalancers[0].DNSName' \
  --output text

Create Listener resources

Now you need to create the Listener resources for Emissary-ingress. The first Listener in the example below handles traffic for HTTP/1.1 and HTTP/2, while the second Listener handles all HTTP/3 traffic.

kubectl apply -f - <<EOF
# This is a standard Listener that leverages TCP to serve HTTP/2 and HTTP/1.1 traffic.
# It is bound to the same address and port (0.0.0.0:8443) as the UDP listener.
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-https-listener
  namespace: emissary
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
# This is a Listener that leverages UDP and HTTP to serve HTTP/3 traffic.
# NOTE: Raw UDP traffic is not supported. UDP and HTTP must be used together.
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: emissary-ingress-https-listener-udp
  namespace: emissary
spec:
  port: 8443
  # Order is important here. UDP must be the last item. HTTP is required.
  protocolStack:
    - TLS
    - HTTP
    - UDP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF

Create a Host resource

Create a Host resource for your domain name.

kubectl apply -f - <<EOF
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: emissary-ingress-aws-host
  namespace: emissary
spec:
  hostname: <your-hostname>
  acmeProvider:
    authority: none
  tlsSecret:
    name: tls-cert # The QUIC network protocol requires TLS with a valid certificate.
  tls:
    min_tls_version: v1.3
    max_tls_version: v1.3
    alpn_protocols: h2,http/1.1
EOF

Apply the quote service and a Mapping to test the HTTP/3 configuration

Finally, deploy the quote service and expose it with an Emissary-ingress Mapping.

kubectl apply -f https://app.getambassador.io/yaml/v2-docs/2.3.1/quickstart/qotm.yaml
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
  docs:
    path: "/.ambassador-internal/openapi-docs"
EOF

Now verify the connection (a curl build with HTTP/3 support can force the protocol with the --http3 flag):

$ curl -i --http3 https://<your-hostname>/backend/

The response headers confirm that your domain is now being served over HTTP/3.

Conclusion

The new HTTP/3 protocol provides a lot of benefits over the previous generations of HTTP/1 and HTTP/2, including reduced latency and increased resilience when communicating over lossy networks. A key reason for these benefits is that HTTP/3 utilizes the QUIC protocol over UDP rather than TCP as used by previous generations of HTTP.

As HTTP/3 becomes more and more widely adopted across the web, it is imperative that organizations get ready for and adopt this new protocol. As shown in this article, the latest versions of Emissary-ingress and Ambassador Edge Stack require minimal configuration to enable HTTP/3, and so you can now get all the benefits of this new protocol for your application running in Kubernetes.

Learn More with our Implementing HTTP/3 Workshop

If you want to get hands-on, guided experience implementing HTTP/3 with Ambassador Edge Stack, join me for our upcoming workshop, “Implementing HTTP/3.”

Many thanks to Lance Austin and Johnny Kartheiser for their review of and contributions to this article.
