Session affinity, load balancing controls, gRPC-Web, and Ambassador 0.52

Today’s cloud native applications are composed of multiple heterogeneous (micro)services, communicating with clients and each other in a wide variety of protocols and over a wide range of topologies. We’ve seen this first hand with Ambassador, which is being deployed in increasingly diverse workloads and environments.
Our goal with Ambassador Edge Stack is to make it the best cloud-native API Gateway on the planet. To that end, we’re excited to announce Ambassador 0.52, which adds the following new capabilities:
- Support for the gRPC-Web protocol. gRPC-Web is a protocol based on native gRPC that is designed for browser-to-server communication. Thanks to Gert van Dijk and Rotem Tamir.
- Advanced load balancing. Ambassador Edge Stack can now natively route traffic directly to physical IP addresses instead of DNS hostnames.
- Session affinity (“sticky session”) support. Ambassador can associate all HTTP requests coming from an end-user (based on a cookie, HTTP header, or source IP) with a specific Kubernetes pod.
Due to some architectural shifts, our support for session affinity and advanced load balancing is early access. Read on for details.
gRPC-Web support
Today’s cloud services are exposed through a multitude of protocols. Ambassador supports virtually every popular L7 protocol: HTTP, HTTP/2, gRPC, WebSockets, and now gRPC-Web. And if you’re using a protocol that isn’t directly supported, Ambassador also supports native TCP routing.
gRPC-Web extends many of the benefits of gRPC to front-end developers: high performance, bi-directional streaming, and broad library support. However, because of browser limitations, gRPC-Web is not directly compatible with gRPC. Instead, a server-side proxy is used to translate between the gRPC-Web protocol and native gRPC over HTTP/2.
Thanks to Envoy’s support for gRPC-Web, Ambassador now supports gRPC-Web as well with the enable_grpc_web: True annotation. Note that this is a global setting.
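For illustration, here is a minimal sketch of what that global setting might look like in the Ambassador Module; the layout follows the conventions used elsewhere in this post, but check the Core Configuration docs for the exact format:
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
  # Translate gRPC-Web requests from browsers into native gRPC for upstream services
  enable_grpc_web: True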
Advanced Load Balancing
Ambassador has always provided a wide variety of options for routing: routing based on host, HTTP method, HTTP header, regular expressions, and so forth. We’ve learned that having flexible, fine-grained control over routing is critical for addressing various use cases. However, Ambassador provided operators with limited control over how requests were routed to endpoints. Historically, Ambassador would route requests directly to Kubernetes services, distributing the requests between different pods. This approach has worked well, as it’s simple to reason about and test: a curl to a Kubernetes service goes through the same path as an Ambassador request.
Kubernetes networking
In a typical Kubernetes cluster, requests sent to a Kubernetes service are routed by kube-proxy. Somewhat confusingly, kube-proxy isn’t a proxy in the classic sense but a process that implements a virtual IP for a service via iptables rules. This architecture adds complexity to routing: not only do you introduce a small amount of latency, but iptables isn’t designed for routing, so your load balancing strategy is limited to round-robin.
While the implementation is complex, this approach has one overwhelming advantage for Ambassador users: simplicity. Service discovery and load balancing are delegated to Kubernetes, and testing the routing with common tools such as curl is straightforward.
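As a concrete example (the service name and ports below are illustrative), a plain Kubernetes Service gets a virtual ClusterIP, and kube-proxy programs iptables rules on every node so that traffic sent to that virtual IP is spread across the Service’s backing pods:
apiVersion: v1
kind: Service
metadata:
  name: qotm
spec:
  selector:
    app: qotm
  # kube-proxy implements the ClusterIP for this Service with iptables rules,
  # so callers (including Ambassador, by default) never address pod IPs directly.
  ports:
  - port: 80
    targetPort: 5000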
Load balancing in 0.52
In Ambassador 0.52, we’re introducing a new set of controls for load balancing. These controls are opt-in, so if you don’t change anything, you’ll get the tried-and-true load balancing behavior. However, if you set the AMBASSADOR_ENABLE_ENDPOINTS environment variable, you’ll enable the new controls. Specifically:
- Ambassador will watch all Kubernetes endpoints for state changes instead of just Kubernetes services.
- Ambassador can then be configured to use different load balancing algorithms and route directly to Kubernetes endpoints, bypassing kube-proxy.
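For reference, here is a sketch of setting that environment variable on the Ambassador container in your Deployment (standard Kubernetes fields; the exact value convention is an assumption, so consult the docs):
# Container fragment from the Ambassador Deployment (illustrative)
containers:
- name: ambassador
  env:
  - name: AMBASSADOR_ENABLE_ENDPOINTS
    value: "true"  # opt in; without this, the existing service-based routing is unchanged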
Here’s a sample mapping where we add the load_balancer annotation:
apiVersion: ambassador/v1
kind: Mapping
name: qotm_mapping
prefix: /qotm/
service: qotm
load_balancer:
  policy: round_robin
Note that the default load balancing policy can be set globally with annotations in the Ambassador module.
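For example, a global default could be sketched in the Ambassador Module like this (the placement of load_balancer inside the Module config is an assumption; see the Core Configuration documentation):
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
  # Default policy for any Mapping that doesn't set its own load_balancer
  load_balancer:
    policy: round_robin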
Session Affinity
Besides the default round_robin policy, Ambassador 0.52 now supports session affinity (aka “sticky sessions”) through the ring_hash policy. You must specify how to uniquely identify the client for routing. Supported techniques include an arbitrary HTTP header, a cookie, or the actual source IP address.
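As an illustration, a Mapping that hashes on a cookie might look like the sketch below; the cookie stanza, cookie name, and TTL are assumptions made for this example, so check the load balancing docs for the exact fields:
apiVersion: ambassador/v1
kind: Mapping
name: qotm_sticky_mapping
prefix: /qotm/
service: qotm
load_balancer:
  policy: ring_hash
  # Hash on a cookie so repeat requests from the same client reach the same pod
  cookie:
    name: sticky-cookie
    ttl: 60s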
Advanced load balancing: early access
We’re releasing advanced load balancing in 0.52 for broader testing and feedback. We’re particularly interested in the effect that enabling this feature has on different workloads and Kubernetes cluster configurations: since clusters typically have many more endpoints than services, watching endpoints results in an increased workload for your Kubernetes API server. We’re extremely interested in your feedback, positive or negative, on this feature.
Other changes in 0.52
We’re also shipping a number of customer-reported bug fixes and enhancements.
- Ambassador now supports bridging between HTTP/1.1 requests and gRPC backend services.
- The extauth filter now adds a tracing header when using the HTTP API (the gRPC API already adds a tracing header).
- Allow extauth to create a header that wasn’t there before (#1313).
- You can now use the Lua filter to embed simple scripts as part of a mapping. Thank you to Liam Costello for the PR.
- Startup performance improvements.
- Switch to the C YAML parser instead of the Python implementation for improved parsing performance (#1294, #1318).
- Add xff_num_trusted_hops configuration. This is important if you’re using a CDN such as CloudFlare and depend on the X-Forwarded-For header for use cases such as rate limiting (see the sketch after this list).
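Here is a sketch of that Module setting; the value 1 is just an example for a single trusted proxy hop in front of Ambassador:
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
  # Trust one hop of X-Forwarded-For entries added by the CDN/proxy in front of Ambassador
  xff_num_trusted_hops: 1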
The updated Core Configuration documentation covers the new options above (the Lua filter, the gRPC HTTP/1.1 bridge, and more).
Upcoming 0.60 breaking change
Ambassador 0.60 will, by default, listen for cleartext HTTP on port 8080 (rather than 80) and for HTTPS on port 8443 (rather than 443), to simplify running Ambassador without root privileges. If you are relying on the default port numbering in your installation, you will need to change your configuration; Ambassador 0.52 will attempt to warn you of this in the diagnostic service.
Installing 0.52
0.52 is available with the Docker tag quay.io/datawire/ambassador:0.52.0. Update your existing deployment manifest with this tag and run kubectl apply to install 0.52 into your cluster.
You can also install via Helm:
helm install stable/ambassador
Upgrading to Ambassador 0.52
Ambassador relies on Kubernetes deployments for updates. To update Ambassador, change your Kubernetes manifest to point to quay.io/datawire/ambassador:0.52.0 and run kubectl apply on the updated manifest. Kubernetes will perform a rolling update to 0.52.
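The relevant fragment of the Deployment manifest would look roughly like this (container name and surrounding fields depend on your installation):
# Deployment fragment (illustrative): point the Ambassador container at the 0.52.0 image
containers:
- name: ambassador
  image: quay.io/datawire/ambassador:0.52.0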
If Ambassador Labs is working well for you, we’d love to hear about it. Drop us a line in the comments below, or @ambassadorlabs on Twitter.