Guest Author

Using Edge Stack & Cloud Native Architecture in Building Python Apps

Muhammed Ali
Published in Ambassador Labs
Jul 27, 2023


Cloud native architecture has become increasingly popular in the development of software applications. With the growing demand for faster, more efficient, and scalable software solutions, more and more developers and companies have transitioned from monoliths to microservices.

Edge Stack is an API gateway and ingress controller designed for microservices, built on top of the open source Kubernetes container orchestration platform. It provides a complete set of tools for managing and securing cloud native apps across public, private, and hybrid clouds.

This article explores how Edge Stack, together with cloud native principles and DevOps practices, can be effectively utilized to build Python applications. The journey begins by deploying the Python app to Kubernetes, a critical part of cloud native systems, and then securely exposing the application using Edge Stack’s capabilities.

Prerequisites

To follow this tutorial, you should have the following:

  1. A running Kubernetes cluster
  2. An Ambassador Cloud account

How do we know the app uses cloud native architecture?

We can confirm that this Python application utilizes cloud native architecture based on the following characteristics:

  • The app is containerized, making it portable and easily deployable across cloud native systems using Kubernetes.
  • It is orchestrated using Kubernetes, demonstrating the cloud native approach to application management.
  • The app is a small service, decoupled from the ingress, showcasing the flexibility to deploy it anywhere.
  • A load balancer is used to serve any number of similar apps, ensuring scalability and resilience in a cloud native environment.

Connect to Ambassador Cloud

In this section, we will install Edge Stack into our Kubernetes cluster and then connect the cluster to Ambassador Cloud.

To do this, run the commands below:

kubectl apply -f https://app.getambassador.io/yaml/edge-stack/3.5.2/aes-crds.yaml && \
kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
kubectl apply -f https://app.getambassador.io/yaml/edge-stack/3.5.2/aes.yaml && \
kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes

The commands above install Edge Stack into your cluster. Once this has been completed successfully, log into Ambassador Cloud, go to the navigation bar and click on Services.

On the services page, click on the “+ SERVICE” button at the top right corner, select the cluster you want to connect to Ambassador Cloud, and then click on the “GENERATE A CLOUD TOKEN” button.

Once you do this, you will be provided with a set of commands to run in your CLI. Running these commands connects your Kubernetes cluster to Ambassador Cloud and enables you to visualize your cluster’s information in Ambassador Cloud. This allows for real-time monitoring and centralized management of your Kubernetes environment.

Run the app on Kubernetes locally

A major part of cloud native architecture is the orchestration of the application on Kubernetes. Running a microservice on Kubernetes is often treated as synonymous with running a cloud native app, since it aligns with cloud native application architecture.

Now we can go ahead and apply the application’s Deployment and Service so Ambassador Cloud can see them.

I’ve built a sample Python application and pushed it to GitHub and Docker Hub. For context, this application has three endpoints (a minimal sketch of the app follows the list):

  1. /users with the GET method, which returns a list of all users in the system.
  2. /users/<int:user_id> with GET method, which returns the details of a specific user based on the ID provided in the URL parameter.
  3. /users with POST method, which allows new users to be added to the system by sending a JSON payload in the request body.
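
To make these endpoints concrete, here is a minimal Flask sketch of what such a service might look like. This is an illustrative approximation, not the actual code in the khabdrick/users repository; it assumes Flask is installed and uses an in-memory list in place of a real data store.

# app.py: an illustrative sketch of the users service (hypothetical, not the
# actual khabdrick/users code). Assumes Flask is installed.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# In-memory stand-in for a real data store.
users = [
    {"id": 1, "name": "John Doe", "age": 30, "location": "New York"},
    {"id": 2, "name": "Jane Doe", "age": 25, "location": "San Francisco"},
]

@app.route("/users", methods=["GET"])
def list_users():
    # Return a list of all users in the system.
    return jsonify(users)

@app.route("/users/<int:user_id>", methods=["GET"])
def get_user(user_id):
    # Return the details of a specific user, or 404 if the ID is unknown.
    user = next((u for u in users if u["id"] == user_id), None)
    if user is None:
        abort(404)
    return jsonify(user)

@app.route("/users", methods=["POST"])
def create_user():
    # Add a new user from the JSON payload in the request body.
    payload = request.get_json()
    payload["id"] = max(u["id"] for u in users) + 1
    users.append(payload)
    return jsonify(payload), 201

if __name__ == "__main__":
    # Listen on 0.0.0.0:8080 so the container port matches the Deployment below.
    app.run(host="0.0.0.0", port=8080)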

The application image is published to Docker Hub as khabdrick/users:v1; you can pull it locally with the docker pull khabdrick/users:v1 command to inspect it, and Kubernetes will pull it into the cluster automatically when we create and apply a Deployment and Service using the YAML configuration file below.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
        - name: users
          image: khabdrick/users:v1
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: users
  annotations:
    a8r.io/description: "users app service"
    a8r.io/owner: "No owner"
    a8r.io/chat: "#ambassador"
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: users

For context, the deployment is named “users” and is configured to run one replica of the khabdrick/users:v1 container image from Docker Hub. It is set up with a RollingUpdate strategy, which means that any changes made to the deployment will be rolled out gradually across the replicas to ensure minimal disruption. Also, the container is configured to listen on port 8080, which is exposed to the Kubernetes cluster via the Pod’s IP address.

The service is also named “users”. It is defined to expose the “users” deployment within the Kubernetes cluster. The service is configured to listen on port 80 and route incoming traffic to port 8080 on the pods selected by the label selector “app: users”. Any traffic received on port 80 will be automatically forwarded to the Pod running the “users” container.

Now that you understand the content of the YAML configuration, save it as flask.yaml and apply it to your Kubernetes cluster:

kubectl apply -f flask.yaml -n ambassador

After applying the YAML configuration file to your Kubernetes cluster, you will see it reflected on Ambassador Cloud’s services page.

You can access the application running on the cluster by running the kubectl port-forward <pod_name> 8080:8080 -n ambassador command (replace <pod_name> with the name of the users Pod). Notice that the application is now reachable on localhost at port 8080.
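
As a quick sanity check from Python, you can query the forwarded port directly. This is an optional sketch that assumes the requests library is installed and the port-forward above is still running.

# check_users.py: hit the port-forwarded service from Python.
# Assumes the `requests` library is installed and the port-forward is running.
import requests

resp = requests.get("http://localhost:8080/users", timeout=5)
resp.raise_for_status()
for user in resp.json():
    print(user["id"], user["name"], user["location"])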

Routing traffic from the cluster’s edge

This part of the cloud native architecture involves handling incoming requests and directing them to the appropriate services within the Kubernetes cluster.

To get this done, we have to set up Listener and Mapping CRDs on our cluster. The Listener CRD informs Edge Stack about the specific port it needs to listen on, while the Mapping CRD instructs Edge Stack on how to direct incoming requests based on the host and URL path from the edge of the cluster to the relevant Kubernetes services.

To set up the Listener, apply the YAML configuration below to your Kubernetes cluster. The Listener instance is named edge-stack-listener-8080, and it listens on port 8080 using the HTTPS protocol with the X-Forwarded-Proto (XFP) security model. The hostBinding section specifies that it will bind to Hosts from all namespaces.

kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: edge-stack-listener-8080
  namespace: ambassador
spec:
  port: 8080
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF

For the Mapping, traffic arriving at the edge of the cluster is routed to the users Service based on a URL prefix, in this case “/”. You can create the Mapping by running the following command:

kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: users
  namespace: ambassador
spec:
  hostname: "*"
  prefix: /
  service: users
  docs:
    path: "/.ambassador-internal/openapi-docs"
EOF

Now you can go to your “users” Service on Ambassador Cloud. Under the ambassador namespace, you should see the new mapping you just created.

Now the steps for developing the app using cloud native architecture are more or less complete. You can confidently call this a cloud native app.

Run the app from Edge Stack’s load balancer

At the moment, the application sits behind Edge Stack’s load balancer. Edge Stack exposes a load balancer address through its edge-stack Service. To access the application, retrieve this address by running the following command:

export LB_ENDPOINT=$(kubectl -n ambassador get svc edge-stack \
-o "go-template={{range .status.loadBalancer.ingress}}{{or .ip .hostname}}{{end}}")

Now, you can access the /users endpoint with the following cURL command. You will notice that your application is now reachable over HTTPS at the load balancer endpoint:

curl -Lki https://$LB_ENDPOINT/users

This will return:

HTTP/1.1 200 OK
server: envoy
date: Sun, 09 Apr 2023 08:49:22 GMT
content-type: application/json
content-length: 184
x-envoy-upstream-service-time: 2
[
  {
    "age": 30,
    "id": 1,
    "location": "New York",
    "name": "John Doe"
  },
  {
    "age": 25,
    "id": 2,
    "location": "San Francisco",
    "name": "Jane Doe"
  }
]
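
The /users endpoint also accepts POST requests, as described earlier. Here is a rough sketch of adding a user through the load balancer from Python; it assumes the requests library is installed and the LB_ENDPOINT variable is exported as above, and verify=False simply plays the role of curl’s -k flag for the demo’s self-signed certificate.

# add_user.py: POST a new user through Edge Stack's load balancer.
# Assumes `requests` is installed and LB_ENDPOINT is exported as shown above.
import os
import requests

lb_endpoint = os.environ["LB_ENDPOINT"]
new_user = {"name": "Sam Smith", "age": 41, "location": "Chicago"}

# verify=False mirrors curl's -k flag; acceptable only for the demo's self-signed cert.
resp = requests.post(
    f"https://{lb_endpoint}/users",
    json=new_user,
    verify=False,
    timeout=5,
)
print(resp.status_code, resp.json())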

Conclusion

This article provides a good starting point for developers who want to utilize cloud native architecture in building Python apps with Edge Stack.

By embracing cloud native architecture and leveraging Edge Stack, software developers can build highly scalable and efficient Python apps. Cloud native technologies, combined with cloud infrastructure and DevOps practices, empower developers to design and deploy resilient and flexible applications in a cloud environment.

To learn more about Edge Stack, visit their official website.


Technical Writer with experience in building awesome stuff with Django, Python, JavaScript, React, Kubernetes, etc. || Cloud-Native enthusiast.