Microservice Service Discovery: API Gateway or Service Mesh?

Anita Ihuman
Published in Ambassador Labs
Apr 4, 2022 · 8 min read

When managing cloud-native connectivity and communication, a recurring question arises about which technology is better suited to handling how microservice-based applications interact with each other: “Should I start with an API gateway or use a service mesh?”

When we talk about both technologies, we are referring to the end user’s experience of making a successful API call within an environment. Ultimately, these technologies can be seen as two sides of the same coin, except that they differ in how they operate. It is essential to understand the underlying differences and similarities between the two when it comes to software communication.

In this article, you will learn about service discovery in microservices and when you should use a service mesh or an API gateway.

Introduction

In a microservice architecture, for clients to communicate with the backend, they need a service such as a service mesh or API gateway to relay API requests. This technology receives a client’s request and transports it to the backend. Because these backend services are dynamic, their network locations (hosts and ports) may change from time to time for different reasons (nodes fail, new nodes are added to the network, etc.).

The API gateway, however, doesn’t know by itself how to locate the particular backend service a client requests, so it delegates that lookup to another component: service discovery. The API gateway asks the service discovery software (e.g., ZooKeeper, HashiCorp Consul, Eureka, SkyDNS) where it can locate a given backend service by sending the service’s name. Once the service discovery software provides the necessary information, the gateway forwards the request to that address.

Before we dive in, let’s quickly talk about the service registry, as most of the service discovery concepts are based on it.

What is a service registry?

The service registry is a database that holds the data structures describing network service instances. Clients and other infrastructure consult it to find out where application-level communication should be directed.

Every service registers itself with the service registry, providing the details of where it can be located: host, port, node name, and any other service-specific metadata.

If the service registry is unavailable, connecting to the microservices might be difficult or impossible. That’s why the service registry is expected to be available and up to date at all times, so clients can always get accurate information on network locations.
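
To make the idea concrete, here is a minimal in-memory sketch of a service registry in Python. The `ServiceRegistry` class, its field names, and the `cart-service` example are purely illustrative assumptions, not the API of any real registry product such as Consul or Eureka.

```python
import time


class ServiceRegistry:
    """A minimal in-memory service registry (illustrative only)."""

    def __init__(self):
        # Maps a service name to the list of registered instance records.
        self._instances = {}

    def register(self, name, host, port, metadata=None):
        """Called by a service instance on startup to announce where it lives."""
        record = {
            "host": host,
            "port": port,
            "metadata": metadata or {},
            "registered_at": time.time(),
        }
        self._instances.setdefault(name, []).append(record)

    def lookup(self, name):
        """Called by clients (or an API gateway) to find all known instances of a service."""
        return self._instances.get(name, [])


registry = ServiceRegistry()
registry.register("cart-service", "10.0.1.12", 8080, {"node": "node-3", "version": "1.4.2"})
print(registry.lookup("cart-service"))
```

Real registries add what this sketch leaves out: health checks, heartbeats, and deregistration when an instance disappears.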

What is service discovery?

A microservices architecture is made up of smaller applications that constantly need to communicate with each other, typically through REST APIs.

Service discovery (also known as service location discovery) is how applications and microservices automatically locate and communicate with each other. It is a fundamental pattern in service architecture that keeps track of where every microservice can be found. Microservices register their details with the discovery server, making it easy for other services to reach them.

Types of service discovery

There are two types of service discovery patterns you should know about — server-side discovery and client-side discovery.

Let’s break them down in detail. 👇🏾

  • Server-side discovery: This pattern lets client applications request a service via a load balancer. The load balancer queries the service registry and then routes the client’s request to an available instance. Think of server-side discovery like the receptionist (load balancer) who answers when you phone an organisation: the receptionist asks for the details of the person you wish to speak with and redirects your call to that person.
  • Client-side discovery: With client-side discovery, the client is responsible for finding available service instances by querying the service registry directly. It then uses a load-balancing algorithm to select one of the available instances and sends the request. This is similar to how we interact with search engines: you search for a topic, the search engine (playing the role of the service registry) returns a list of URLs, and you select the result that best answers your request. A minimal code sketch of this pattern appears right after this list.
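
As promised above, here is a minimal Python sketch of client-side discovery. The `registry` snapshot, the service name, and the `discover` helper are hypothetical stand-ins; in practice the instance list would come from a registry such as Consul, Eureka, or ZooKeeper, and would be refreshed as instances come and go.

```python
import random

# Hypothetical snapshot of what a service registry might return for "checkout-service".
registry = {
    "checkout-service": [
        {"host": "10.0.1.21", "port": 8080},
        {"host": "10.0.1.22", "port": 8080},
    ]
}


def discover(service_name):
    """Client-side discovery: query the registry, then load-balance across instances."""
    instances = registry.get(service_name, [])
    if not instances:
        raise LookupError(f"no known instances of {service_name}")
    # Simple client-side load balancing: pick an instance at random.
    chosen = random.choice(instances)
    return f"http://{chosen['host']}:{chosen['port']}"


print(discover("checkout-service"))  # e.g. http://10.0.1.21:8080
```

The random pick stands in for whatever load-balancing strategy the client library actually uses, such as round-robin or weighted selection.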

Now that we understand what service discovery is, let’s look at what API gateway and service mesh are, and which is preferred for your microservice architecture.

What is an API gateway?

An API gateway is a management service that accepts API requests from clients, directs the requests to the correct backend services, aggregates the results retrieved, and returns a synchronous response to the client.

To better understand the meaning of an API gateway, consider an e-commerce site where users invoke requests to different microservices such as the shopping cart, checkout, or user profile. Many of these requests trigger API calls to more than one microservice, and because of the vast number of API calls made to the backend, an API gateway acts as a middle layer between the clients and the services, letting the client retrieve all product details with a single request.

Developers could encode API gateway features within the application itself to execute such tasks, without using an API gateway. However, that would be a tedious task for developers to take on, and it also poses the security risk of exposing internal APIs to unauthorized access.

In essence, an API gateway helps simplify communication management: handling API requests, routing, composition, and load balancing across multiple instances of a microservice. It can also perform log tracing and aggregation without disrupting how API requests are handled.
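
As a rough illustration of that routing and composition, here is a hedged Python sketch of what an API gateway does conceptually. The route table, the `cart_service` and `profile_service` stand-ins, and the `checkout_page` aggregation are all assumptions made for this example; a production gateway forwards real HTTP requests to addresses obtained from service discovery rather than calling functions in-process.

```python
# Stand-ins for real backend microservices. In a real deployment the gateway
# would forward HTTP requests to addresses obtained from service discovery.
def cart_service(user_id):
    return {"items": [{"sku": "A1", "qty": 2}]}


def profile_service(user_id):
    return {"name": "Ada", "tier": "gold"}


# Hypothetical route table: URL prefix -> backend handler.
ROUTES = {
    "/cart": cart_service,
    "/profile": profile_service,
}


def handle_request(path, user_id):
    """Routing: send a single request to the backend that owns the path prefix."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend(user_id)
    return {"error": "not found"}


def checkout_page(user_id):
    """Composition: one client request fans out to several backends, and the
    gateway returns a single aggregated response."""
    return {
        "profile": handle_request("/profile", user_id),
        "cart": handle_request("/cart", user_id),
    }


print(checkout_page(user_id=42))
```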

Examples of API gateways in the cloud native ecosystem include: Ambassador Edge Stack, Apigee, Amazon API Gateway, MuleSoft, Kong, Tyk.io, Nginx, etc.

What is a service mesh?

A service mesh is an infrastructure layer that handles how the internal services within an application communicate. It adds microservice discovery, load balancing, encryption, authentication, observability, and reliability features to “cloud-native” applications, making them secure, reliable, and fast.

Fundamentally, service meshes allow developers to create robust enterprise applications by handling management and communication between multiple microservices.

It is usually implemented by deploying a proxy instance, called a sidecar, alongside each service instance. These proxies handle inter-service communication and act as the point where the service mesh’s features are introduced.

Returning to the e-commerce illustration used earlier, let’s imagine the user proceeds to check out their order from the shopping cart. In this case, the microservice retrieving the shopping cart data for checkout will need to communicate with the microservice that holds user account data to confirm the user’s identity. This is where the service mesh comes into play! It handles the communication between these two microservices, ensuring the user’s details are confirmed correctly against the database.
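
The Python sketch below illustrates the sidecar idea only conceptually: the application calls its local proxy, and the proxy transparently adds retries and latency reporting before forwarding the call. The `SidecarProxy` class and its behaviour are assumptions for illustration; real meshes such as Istio or Linkerd inject a separate proxy process (for example, Envoy) next to each service instance and add mutual TLS, richer telemetry, and traffic policy there rather than in application code.

```python
import time


def call_user_service(request):
    """Stand-in for the network call to the user-account microservice."""
    return {"user_id": request["user_id"], "verified": True}


class SidecarProxy:
    """Illustrative sidecar: the checkout service talks only to this local proxy,
    which retries failures and records latency before forwarding upstream."""

    def __init__(self, upstream, max_retries=2):
        self.upstream = upstream
        self.max_retries = max_retries

    def forward(self, request):
        for attempt in range(self.max_retries + 1):
            started = time.time()
            try:
                response = self.upstream(request)
                # In a real mesh this latency would be exported as a metric.
                print(f"latency={time.time() - started:.4f}s attempt={attempt}")
                return response
            except ConnectionError:
                continue  # Retry transparently; the application never sees the failure.
        raise ConnectionError("upstream unavailable after retries")


# The checkout service only knows about its local sidecar, not the user service's address.
sidecar = SidecarProxy(upstream=call_user_service)
print(sidecar.forward({"user_id": 42}))
```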

Just like API gateway features, service mesh features can also be hardcoded into an application. However, this would be a tedious job for developers, as they would need to modify application code or configuration every time network addresses change.

Examples of service meshes in the cloud native ecosystem include: Linkerd, Kuma, Consul, Istio, etc.

Similarities and differences between a service mesh and API gateway

Whether to implement an API gateway or a service mesh for enterprise-level application development is a recurring question amongst developers.

This section of the article will help you understand the differences and similarities between them and help you decide which to go with.

Similarities between an API gateway & a service mesh:

  • Resilience: With either or both technologies in place, your cloud-native application can recover quickly from difficulties or failures it encounters.
  • Traffic management: Without a service mesh or API gateway in place, the traffic from API calls made by clients would be difficult to manage, which eventually delays request processing and response times.
  • Client-side discovery: Both API gateways and service meshes can work with client-side discovery, where the client is responsible for requesting and selecting available network services.
  • Service discovery: Both technologies facilitate how applications and microservices automatically locate and communicate with each other.
  • System observability: Both technologies can monitor the services that clients access and keep logs of which clients have accessed specific services. This helps track the health of each API call made to the microservices.

Differences between an API gateway & a Service mesh

  • Capabilities: An API gateway serves as an edge service and performs tasks tied to your microservices’ business logic, like request transformation, complex routing, or payload handling, while a service mesh addresses only a subset of inter-service communication problems.
  • External vs. internal communication: A major distinction between these technologies is where they operate. The API gateway operates at the application level, while the service mesh operates at the infrastructure level. An API gateway stands between the user and the internal application logic, while the service mesh stands between the internal microservices. As discussed above, API gateways focus on business logic, while service meshes deal with service-to-service communication.
  • Monitoring and observability: API gateways can help you track the overall health of an application by measuring metrics that identify flawed APIs. Service mesh metrics, meanwhile, help teams identify issues with the various microservices and components that make up an application’s backend rather than the program as a whole, which makes the mesh useful for determining the cause of specific application performance issues.
  • Tooling and support: API gateways work with almost any application or architecture, monolithic or microservice-based. A service mesh is typically designed to work only in specific environments, such as Kubernetes. API gateways also tend to offer automated security policies and features that are easy to get started with, while service meshes often involve complex configuration and a steep learning curve.
  • Maturity: API gateways are the more established technology, and their popularity has produced many vendors. In comparison, the service mesh is a newer open source technology with relatively few vendors today.

Can an API gateway & service mesh co-exist?

Both technologies have a lot in common, but their most significant difference lies in how they operate. The API gateway is a centralized control plane that works at the application level, managing edge traffic from clients to services. The service mesh operates at the infrastructure level, sitting alongside the microservices that make up the application and managing internal service-to-service communication.

When combined with a service mesh, the API gateway can operate as a mediator. This can improve delivery security and speed, helping ensure application uptime and resiliency while keeping your applications easily consumable. It will, in turn, bring additional functionality to your application stack.

Conclusion

API gateways and service meshes overlap in several ways, and when these technologies are combined, you get a great end-to-end communication experience.

To maximize the agility of your application and minimize the effort developers spend on managing communications, you may need both a service mesh and an API gateway for your application.

Simplified Kubernetes management with Ambassador Edge Stack API Gateway

Routing traffic into your Kubernetes cluster requires modern traffic management. That’s why we built Ambassador Edge Stack, which includes a modern Kubernetes ingress controller and supports a broad range of protocols, including HTTP/3, gRPC, gRPC-Web, and TLS termination.

Ambassador Edge Stack provides traffic management controls for resource availability. Try Ambassador Edge Stack today or learn more about Ambassador Edge Stack.
