User-defined Webhooks in Puppet Relay with Knative and Ambassador API Gateway

What is Puppet Relay?
Relay is an event-driven automation platform that makes wrangling diverse operational environments easy. Relay executes workflows, which consist of multiple related steps, to perform actions like opening Jira tickets, merging pull requests, or even deploying an application to a Kubernetes cluster.
Relay is built on containerization and leverages Tekton to execute workflows. Each step in a Relay workflow runs an OCI-compatible container image. Unlike conventional workflow automation tools, Relay lets you build completely custom steps; you’re not restricted to our curated steps or even to a particular programming language.
When we set out to implement triggers to run workflows from event data automatically, we wanted to ensure you’d have similar flexibility within a workflow to receive external data. So we decided to provide three initial options: schedule triggers, which run your workflows on a specified interval; push triggers, which allow your services to send data to Relay directly; and webhook triggers, which integrate with lots of external services that can push event data to arbitrary HTTP endpoints.
Webhook triggers presented the biggest technical challenge to implement, as every service provides slightly different payloads representing their events. Keeping with our container-based approach, we let you define webhook triggers by running your web server in a container you provide to us. This post describes how we implemented our webhook trigger handling using Knative Serving and the Ambassador Edge Stack API Gateway.
Installing Knative Serving
We chose Knative Serving as a reverse proxy for webhook triggers. We could have configured a Kubernetes deployment for each webhook trigger, but we felt the resource burden on our cluster would be too high to have every customer pod running all the time.
Knative Serving’s unique model uses an activator and autoscaler to dynamically provision pods when it receives requests. A rough state machine for a Knative service looks like:
- When a service is created or updated, a revision is created initially in an inactive state. This corresponds to a Kubernetes deployment scaled to zero pods. In this inactive state, HTTP requests are routed to the Knative activator.
- When an HTTP request is received for the service, the Knative activator queues the request and switches the current revision to an active state. Knative then scales the deployment up and switches the service to route to the deployment’s pods. Once the service is pointing at the deployment, the queued HTTP request is dispatched.
- The Knative autoscaler monitors inbound request rate and scales the deployment as needed.
- If the service doesn’t receive any HTTP requests after a configured timeout, the deployment is scaled back to zero, the revision is marked as inactive, and the service is switched back to the activator.
- Repeat from the beginning!
For a cached container image, the whole activation process generally takes less than 2 seconds, quickly enough for our webhook handling use case. And for webhook triggers that only receive events every few minutes or less frequently, it saves us considerable cluster resources.
Installing Knative Serving is straightforward. You need their CRDs and core components like the activator. Here we’ll use version 0.13.0, but check their installation instructions for the latest version.
$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.13.0/serving-crds.yaml
$ kubectl apply -f https://github.com/knative/serving/releases/download/v0.13.0/serving-core.yaml
Then you need to pick a networking layer, or gateway, to route requests.
Choosing a Knative Serving Gateway
Like most Knative Serving users, we started by evaluating Istio, a popular service mesh and ingress gateway offering for Kubernetes. However, Istio’s focus on connecting microservices didn’t really support our use case:
- We didn’t need the service mesh component of Istio at all, only its Envoy gateway. A complete Istio installation on our cluster consumed a lot of resources, most of which were ultimately going to waste.
- Because we’re configuring webhook triggers dynamically from our database, we put our own reverse proxy in front of Knative Serving. It is difficult (although not impossible) to change the behavior of Istio to run as an internal-facing service instead of a public gateway.
Most other networking layer options for Knative Serving were positioned more strongly as ingress gateways, focusing mainly on exposing services directly to the internet.
Ultimately we settled on Ambassador because its lightweight single-container model made deploying it for our internal use case easy.
Installing Ambassador and Configuring Knative to Use it
For Relay, we use a custom Helm chart to set up the Ambassador API Gateway. We have a single deployment, an optional horizontal pod autoscaler, a service account with corresponding role bindings, and finally a ClusterIP service to use as the target networking layer for Knative.
Our deployment YAML is largely the same as the one from the Ambassador installation instructions in ambassador-rbac.yaml. However, we must explicitly enable Knative support:
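For the Ambassador version we used, Knative support is toggled with the AMBASSADOR_KNATIVE_SUPPORT environment variable on the Ambassador container. A sketch of the relevant fragment of the deployment's pod template (the image tag is illustrative; check your installed version):

```yaml
# Fragment of the Ambassador deployment's pod template
containers:
  - name: ambassador
    image: quay.io/datawire/ambassador:1.4.2  # illustrative version tag
    env:
      # Required for Ambassador to watch Knative ingress resources
      - name: AMBASSADOR_KNATIVE_SUPPORT
        value: "true"
      # Standard downward-API wiring from ambassador-rbac.yaml
      - name: AMBASSADOR_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
```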
Likewise, our service account and role bindings are similar to those in ambassador-rbac.yaml, but use {{ .Release.Namespace }} instead of default.
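For example, a sketch of how the binding's subject namespace is templated in a Helm chart (resource names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ambassador
subjects:
  - kind: ServiceAccount
    name: ambassador
    # Bind to the chart's release namespace rather than hard-coding "default"
    namespace: {{ .Release.Namespace }}
```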
For the gateway service, note especially:
- The app.kubernetes.io/component label must be present and set exactly to the value ambassador-service, or Ambassador won’t pick it up to set the Knative service target correctly.
- We use type: ClusterIP to make the service cluster-local. This instance of Ambassador won’t be reachable from the internet.
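Putting those requirements together, the gateway service looks roughly like this (port numbers and the selector are Ambassador's defaults; adjust for your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  labels:
    # Ambassador only targets Knative at a service carrying exactly this label
    app.kubernetes.io/component: ambassador-service
spec:
  # ClusterIP keeps this gateway reachable only from inside the cluster
  type: ClusterIP
  selector:
    service: ambassador
  ports:
    - name: http
      port: 80
      targetPort: 8080
```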
As a reference, you can find our entire Ambassador Helm chart on GitHub.
Finally, we need to configure Knative to use cluster-local routing and Ambassador as its default gateway. Simply apply this manifest to your cluster using kubectl apply:
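The manifest patches two Knative Serving ConfigMaps: config-network selects Ambassador's ingress class, and config-domain makes svc.cluster.local the default domain so services resolve only inside the cluster. A sketch for Knative 0.13 (verify the keys against your Knative version's documentation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Route Knative ingress through Ambassador instead of Istio
  ingress.class: ambassador.ingress.networking.knative.dev
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # Make cluster-local addresses the default domain for services
  svc.cluster.local: ""
```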
Deploying, Testing, and Managing Internal-Only Knative Services
Now you can create a cluster-local Knative service. Use kubectl apply on a manifest like this:
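For instance, a minimal cluster-local service using Knative's sample Go hello-world image (the service name and image are just examples):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-test-service
  labels:
    # Marks the service cluster-local so Knative gives it no public route
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: World
```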
Within a few seconds, Ambassador will process the service and set up a mapping for it. Assuming you installed Ambassador to the ambassador namespace, you’ll see this service when you run kubectl get svc:
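Knative exposes a cluster-local service as a Kubernetes ExternalName service pointing at the gateway, so the output should look something like this (exact columns and ages will differ):

```
$ kubectl get svc my-test-service
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP                               PORT(S)   AGE
my-test-service   ExternalName   <none>       ambassador.ambassador.svc.cluster.local   80/TCP    1m
```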
If you don’t see the service pointed at your Ambassador deployment, inspect the Knative service and ingress using kubectl describe ksvc my-test-service and kubectl describe king my-test-service. The status conditions and events should provide useful hints to remediate any problems.
Now we can try sending a request to the service by running a one-off pod:
$ kubectl run \
--generator=run-pod/v1 internal-requester \
--image=alpine --rm -ti --attach --restart=Never \
-- wget -q -O - http://my-test-service.default.svc.cluster.local
Hello World!
pod "internal-requester" deleted
Yay! Your service works. You’ll also see a pod running to handle the request you just made. If you don’t make any more requests, that pod will be terminated automatically within a few minutes.
You can view all the mappings Ambassador has configured for your Knative services by forwarding the admin endpoint of your deployment:
$ kubectl port-forward -n ambassador deployment/ambassador 8877
Forwarding from 127.0.0.1:8877 -> 8877
Forwarding from [::1]:8877 -> 8877
Then navigate to http://localhost:8877/ambassador/v0/diag/.
For Relay, we manage our Knative services by writing out a higher-level CRD that our operator processes. This lets us perform lifecycle management operations more efficiently. For example, we create and clean up webhook triggers and workflow runs in batches we call tenants. We get a ton of value from the combination of Tekton, Knative, Ambassador, and our own operator, with relatively little cluster resource overhead.
Deployment Scenarios for Ambassador and Knative
The developer use cases for Knative fall into three categories:
- Replace glue/aggregation functions with Knative
Function as a Service (FaaS) offerings have become popular as a way to deploy and run services that “glue” functionality together. The main challenge for development teams is that the workflow for deploying cloud-based FaaS differs from the workflow for Kubernetes. If you’ve already invested in training engineers to work with Kubernetes, retraining them for a separate FaaS platform costs extra time and money.
- Build smaller microservices as functions
Some simple functions are event-driven, and provisioning (or running) an entire microservice/app framework for them seems unnecessary. Knative provides “just enough” framework to deploy and manage the lifecycle of a very simple microservice or “nanoservice” using the primitives provided within modern language stacks.
- Deploy high-volume functions cost-effectively
Pay-as-you-go serverless offerings can be very cost-effective for certain use cases, but for longer-running or high-volume functions they become less practical. Running Knative on your own hardware, or even on Kubernetes deployed on cloud VMs, makes execution costs easier to predict when you know a service will receive high-volume traffic.
Summary
Running on-demand microservices in your infrastructure is easier now than ever before. With the help of Knative and Ambassador, you can drastically reduce costs and resource utilization while maintaining clean separations of APIs across your environment.
At the frontier of cloud-native experience, Knative also unlocks some very exciting opportunities that haven’t been practical in conventional deployment environments. In this post, we explored low-trust user-defined workloads using custom containers as one example, but there are many more!
Next Steps
- Install the Ambassador Edge Stack.
- Install Knative with Ambassador.
- Contact the Ambassador team to learn more about using Ambassador with Knative.
- Sign up for Relay to try webhook triggers yourself.