Using API Gateways to Facilitate Your Transition from Monolith to Microservices

Daniel Bryant
Ambassador Labs
Mar 22, 2018



As a consultant, I bump into many engineering teams migrating from monolith to microservices. “So what?” you may say; while I understand that this migration pattern is almost becoming a cliché, several aspects of a migration often get forgotten. I’m keen to talk about one of these topics today — the role of an edge gateway or API gateway.

Migration of Monolith to Microservices

Typically at the start of a migration, the obvious topics are given plenty of attention: domain modeling via defining Domain-Driven Design inspired “bounded contexts”, the creation of continuous delivery pipelines, automated infrastructure provisioning, enhanced monitoring and logging, and sprinkling in some shiny new technology (Docker, Kubernetes, and perhaps currently a service mesh or two?). However, the less obvious aspects can cause a lot of pain if they are ignored. One of these is how to orchestrate the evolution of the system and the migration of the existing user traffic: although you want to refactor the existing application architecture and potentially bring in new technology, you do not want to disrupt your end-users.

As I wrote in a previous article “Continuous Delivery: How Can an API Gateway Help (or Hinder)”, patterns like the “dancing skeleton” can greatly help in proving the end-to-end viability of new applications and infrastructure. However, the vast majority of underlying customer interaction is funneled via a single point within your system — the ingress or edge gateway — and therefore, to enable experimentation and evolution of the existing systems, you will need to focus considerable time and effort here.

Every (User) Journey Begins at the Edge

I’m obviously not the first person to talk about the need for an effective edge solution when moving from monolith to microservices. In fact, in Phil Calcado’s proposed extension of Martin Fowler’s original Microservices Prerequisites article — Calcado’s Microservices Prerequisites — his fifth prerequisite is “easy access to the edge”. Drawing on his experience, Phil notes that many organisations’ first foray into deploying a new microservice alongside their monolith consists of simply exposing the service directly to the internet. This can work well for a single (simple) service, but the approach tends not to scale, and it can also force the calling clients to jump through hoops regarding authorization or aggregation of data.

It is possible to use the existing monolithic application as a gateway. If you have complex and highly-coupled authorization and authentication code, this can be the only viable solution until the security components are refactored out into a new module or service.

This approach has obvious downsides, including the requirement that you must “update” the monolith with any new routing information (which can involve a full redeploy) and the fact that all traffic must pass through the monolith. This latter issue can be particularly costly if you are deploying your microservices to a separate new fabric or platform (such as Kubernetes), as now any request that comes into your application has to be routed through the old stack before it even touches the new one.

You may already be using an edge gateway or reverse proxy — for example, NGINX or HAProxy — as these can provide many advantages when working with any backend architecture. Features typically include transparent routing to multiple backend components, header rewriting, and TLS termination, and these cross-cutting concerns are handled regardless of how the requests are ultimately served. The question to ask in this scenario is whether you want to keep using this gateway for your microservices implementation and, if you do, whether it should be used in the same way.

From VMs to Containers (via Orchestration)

As I mentioned before, many engineering teams also decide to migrate to new infrastructure at the same time as changing the application architecture. The benefits and challenges of doing this are heavily context-dependent, but I see many teams migrating away from VMs and pure Infrastructure as a Service (IaaS) to containers and Kubernetes.

Assuming that you decide to package your shiny new microservices within containers and deploy these into Kubernetes, what challenges do you face regarding handling traffic at the edge? In essence, there are three choices, one of which you have already read about:

  • Use the existing monolithic application as an edge gateway that routes traffic to either the monolith or the new services. Any routing logic can be implemented here (because all requests travel via the monolith), and calls to authn/authz can be made in-process.
  • Deploy and operate an edge gateway in your existing infrastructure that routes traffic based on URIs and headers to either the monolith or the new services. Authn and authz are typically handled by calling out to the monolith or a refactored security service.
  • Deploy and operate an edge gateway in your new Kubernetes infrastructure that routes traffic based on URIs and headers to either the monolith or the new services. Authn and authz are typically handled by calling out to a refactored security service running in Kubernetes (see the sketch after this list).

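As a concrete illustration of the third option, here is a minimal sketch of path-based routing using a standard Kubernetes Ingress resource. The service names (`orders`, `monolith`), paths, and ports are purely illustrative, and the exact manifest will depend on which ingress controller you deploy:

```yaml
# Route /orders to the new microservice; everything else
# falls through to the (containerised or proxied) monolith.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge-gateway-routing
spec:
  rules:
    - http:
        paths:
          # New functionality, served by a microservice
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
          # Default: everything else goes to the monolith
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monolith
                port:
                  number: 80
```
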
The choice of where to deploy and operate your edge gateway involves tradeoffs: keeping routing in the monolith minimises new moving parts but couples all traffic to the old stack, while running the gateway within your new Kubernetes platform depends on a refactored security service being available but avoids routing every request through the old infrastructure first.

Once you have chosen how to implement the edge gateway, the next decision you will have to make is how to evolve your system. Broadly speaking, you can either try to “strangle” the monolith as-is, or you can put the “monolith-in-a-box” and start chipping away at it there.

Strangling the Monolith

Martin Fowler has written a great article about the principles of the Strangler Application pattern, and even though the writing is over ten years old, the same guidelines apply when migrating functionality out of a monolith into smaller services. At its core, the pattern describes extracting functionality from the monolith in the form of services that interact with the monolith via RPC or REST-like “seams”, or via messaging and events.

Over time, functionality (and associated code) within the monolith is retired, which leads to the new microservices “strangling” the existing codebase. The main downside of this pattern is that you will still have to maintain your existing infrastructure alongside any new platform you deploy your microservices to for as long as the monolith remains in service.

One of the first companies to talk in-depth about using this pattern with microservices was Groupon, with “I-Tier: Dismantling the Monolith”. There are many lessons to be learned from their work, but today we definitely don’t need to write a custom NGINX module, as Groupon originally did with “Grout”. Modern open source API gateways like Ambassador and Traefik now exist, which provide this functionality using simple declarative configuration.
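
For example, with Ambassador the equivalent of Grout’s routing logic is a small declarative resource rather than a custom proxy module. This is a minimal sketch; the prefix and service name are illustrative:

```yaml
# Ambassador Mapping: requests beginning with /orders/ are
# routed to the new orders service instead of the monolith.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: orders-mapping
spec:
  prefix: /orders/
  service: orders:80
```
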

Monolith-in-a-Box: Simplifying Continuous Delivery

An increasingly common pattern I see with teams migrating from monolith to microservices and deploying onto Kubernetes is what I refer to as a “monolith-in-a-box”. I talked about this alongside Nic Jackson when we shared the story of migrating notonthehighstreet.com’s monolithic Ruby on Rails application — affectionately referred to as the MonoNOTH — towards a microservice-based architecture back in 2015 at the ContainerSched conference.

This migration pattern consists of packaging your existing monolithic application within a container and running it like any other new service. If you implement a new deployment platform, such as Kubernetes, you will run the monolith here too. The primary benefit of this pattern is the homogenisation of your continuous delivery pipeline: each application and service may require customised build steps (or a build container) to correctly compile and package the code, but after the runtime container has been created, all of the other steps in the pipeline can use the container abstraction as the deployment artifact.
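
To make the idea concrete, here is a minimal sketch of a “monolith-in-a-box” running on Kubernetes, assuming the monolith has already been packaged as a container image (the image name, replica count, and port are hypothetical):

```yaml
# Run the containerised monolith on Kubernetes like any other service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 3
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
        - name: monolith
          image: registry.example.com/mononoth:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
---
# Expose the monolith inside the cluster so the edge gateway can route to it.
apiVersion: v1
kind: Service
metadata:
  name: monolith
spec:
  selector:
    app: monolith
  ports:
    - port: 80
      targetPort: 8080
```
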

The ultimate goal of the monolith-in-a-box pattern is to deploy your monolith to your new infrastructure and gradually move all of your traffic over to this new platform. This allows you to decommission your old infrastructure before completing the full decomposition of the monolith. If you follow this pattern then I would argue that running your edge gateway within Kubernetes makes even more sense, as this is ultimately where all traffic will be routed.
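
The gradual traffic shift can also be expressed declaratively. As a sketch, Ambassador’s Mapping resource supports a weight field, so you could send a small percentage of requests for a given prefix to a new service while the containerised monolith continues to serve the rest (the prefix, service names, and percentage are illustrative):

```yaml
# Send 10% of /checkout/ traffic to the new service...
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: checkout-canary
spec:
  prefix: /checkout/
  service: checkout:80
  weight: 10
---
# ...while the remaining 90% continues to hit the monolith.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: checkout-monolith
spec:
  prefix: /checkout/
  service: monolith:80
```

Ratcheting the weight up over time lets you validate the new service under real traffic before retiring the corresponding code in the monolith.
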

Moving from VMs to a Cloud Native Platform

When moving from Virtual Machine (VM)-based infrastructure to a cloud native platform like Kubernetes, it is well worth investing time in implementing an effective edge/ingress solution to help with the migration. You have multiple options for implementing this:

  • Using the existing monolithic application as a gateway.
  • Deploying or using an edge gateway in your existing infrastructure to route traffic between the current and new services.
  • Deploying an edge gateway within your new Kubernetes platform.

Deploying an edge gateway within Kubernetes can provide more flexibility when implementing migration patterns like the “monolith-in-a-box”, and it can make the transition from monolith to microservices much more rapid.
