Review of Gloo, The Function Gateway

The Internet runs on complexity. The sum total of internet communication is really a multitude of conversations between disparate APIs, each transforming data in its own way. No single API can do everything, and frankly, no one API should even try. As we’ve shifted away from the monolithic view of development, though, new problems have shown themselves. With so many APIs in play, a hybrid gateway layer for communication becomes an interesting proposition.

Gloo hopes to be the solution to that problem. In this piece, we’re going to take a look at Gloo and see what’s under the hood. We’ll take a high-level look at its design and structure, and see how it fits within the greater API ecosystem.

What Does Gloo Do?

The developer behind Gloo, Solo, has a simple ethos: APIs should be built out of functions, not services. In other words, each routed call should be expressed in terms of functions and how those functions work together, rather than the specific proprietary or special knowledge each API requires. The idea is that when APIs relate to one another by what they have in common rather than by how they differ, greater inter-functionality is achieved.

Gloo considers functions to be the “smallest unit of computing”, that is, the smallest base at which computations occur for the user. By moving the relationship between APIs to the function level and letting the user call these functions rather than specific APIs, Gloo gives users extreme control over the way their requests are routed and handled.

This is possible in large part because Gloo learns about the upstream APIs it routes to. By discovering which functions those APIs expose and what those functions do, Gloo builds a layer of understanding between the two sides. The client and the server no longer need to speak the same language, share the same architecture, and so on; everything can be bridged at the function level. Gloo refers to this as “function-level routing,” which is an entire paradigm shift.
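To make that shift concrete, here is a minimal sketch, in Python and purely for illustration: none of this is Gloo’s actual code or configuration, and the upstream names and invocation details are hypothetical. It simply contrasts routing to a service endpoint with routing to a function.

```python
# Conceptual sketch only -- not Gloo's code or configuration. It contrasts
# routing to a service endpoint with routing to a function.

# Traditional gateway: the caller must know which API owns each capability.
service_routes = {
    "/billing/v2/charge": "https://billing.internal.example.com",
    "/notify/send": "https://notifications.internal.example.com",
}

# Function-level routing: the caller names a function; the gateway tracks which
# upstream provides it and how that upstream expects to be invoked.
function_routes = {
    "charge_customer": {"upstream": "billing-api", "invoke": "POST /v2/charge"},
    "send_email": {"upstream": "notifications", "invoke": "lambda:sendEmail"},
}

def route(function_name: str) -> dict:
    """Resolve a function name to whichever upstream happens to provide it."""
    return function_routes[function_name]

# The caller never needs to know that send_email happens to be a Lambda.
print(route("send_email"))
```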

It should also be noted that Gloo describes itself as “entirely pluggable”, meaning that each module can be configured, changed, and slotted into new workflows. That configurability and extensibility translate into flexibility across a range of upstream types, discovery services, and system query flows.

Gloo itself is something of an API that is accessed through a layer defined by the user. This is a somewhat flipped relationship, of course, as Gloo is really routing functions, not data or calls. User configuration thus defines where this process occurs and by what mechanism, without disrupting the core functionality.

Core Concepts

Gloo notes two core concepts in its documentation. The first is the Virtual Service: the idea that routing rules grouped under a specific domain can route functions and calls in a transparent way. The mechanism Gloo uses for this is known as a matcher, which holds data on the kinds of calls a rule applies to and the named destination those calls are tied to.
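As a rough illustration, a Virtual Service might look something like the following. The field names here are assumptions based on the description above, not Gloo’s actual resource schema.

```python
# Illustrative only: field names are assumptions, not Gloo's actual schema.
virtual_service = {
    "name": "default",
    "domains": ["api.example.com"],  # the domain these routing rules live under
    "routes": [
        {
            # The matcher holds data on the kinds of calls the rule applies to...
            "matcher": {"path_prefix": "/payments", "methods": ["POST"]},
            # ...and the named destination those calls are tied to.
            "destination": {"upstream": "billing-api", "function": "charge_customer"},
        }
    ],
}
```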

Second to Virtual Services is the concept of the Upstream. Upstreams in Gloo determine the route destinations, and they are really the core enabling mechanism for the function-level routing underlying Gloo. Together with Virtual Services, they enable functional routing, and thereby the entirety of Gloo’s offering.
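In the same hand-wavy terms, an Upstream records where calls can be sent and which functions were discovered there. Again, the structure below is a sketch with assumed field names, not the real schema.

```python
# Sketch of an Upstream: the destination plus the functions discovered on it.
# Field names are assumed for illustration rather than taken from Gloo itself.
upstream = {
    "name": "billing-api",
    "type": "kubernetes",  # other upstream types (e.g. AWS Lambda) work the same way
    "spec": {"service_name": "billing", "service_namespace": "default", "port": 8080},
    "functions": [
        {"name": "charge_customer", "spec": {"path": "/v2/charge", "method": "POST"}},
        {"name": "refund_customer", "spec": {"path": "/v2/refund", "method": "POST"}},
    ],
}
```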

Gloo is described as “a high-performance, plugin-extendable, platform-agnostic Hybrid Application Gateway built on top of Envoy.” Here’s a high-level structure of how it operates.

The Use Case

Gloo’s use case is justified by the gap between the many service architecture approaches in play today. Most companies run what is essentially a hybrid environment of microservices, monoliths, serverless systems, and external APIs. Making these work in concert, let alone opening them up to external users, is extremely difficult.

Gateways are decent solutions, but even with a gateway, the onus of understanding the backend API is still on the user, which demands a significant investment of time and knowledge. Accordingly, Solo found that the middle ground would be better filled by a custom solution rather than a pre-built one.

Gloo is that custom solution. By removing the burden of understanding the backend from the user and allowing for much more granular interaction, it gives the user a custom-built experience, routing their commands at the function level rather than the API level. It does this by aggregating the APIs of the backend services and presenting them to the user as a single collection of functions. Essentially, Gloo shifts the relationship from the provider to what is provided, and takes over the routing of those provisions itself.

Isn’t This Just a Gateway?

In essence, yes, this is just a gateway, but the focus is entirely different. Most gateways do exactly what it says on the tin: they are gateways to additional APIs. In Gloo’s case, that routing is done at the function level. While it’s fair to call it a “gateway” in the sense that it fills the niche a gateway usually fills, it does so with a layer of abstraction at the function level that allows a ton of disparate sources to be linked together. It is, in effect, an API for APIs.

In many ways, this is almost akin to the backends-for-frontends approach. In that approach, multiple user experiences bridge via a single API to multiple backend APIs; here, multiple functions bridge via a single API to multiple external APIs.

Envoy

Since Gloo borrows a great deal from Envoy, the proxy layer it is built on top of, Envoy bears some discussion. Envoy is essentially a distributed proxy, which lets you combine services into what its authors term the “universal data plane”. This creates a sort of service mesh, tying a collection of APIs together rather than forcing each system to route to a single API at a time.

Envoy is pretty powerful in its own right, but covering it properly would be a piece of its own. Just keep in mind that Envoy is self-contained, high-performance, and built for low memory usage. It supports HTTP/2 and gRPC, and integrates advanced load-balancing features, such as global rate limiting and automatic retries, into its codebase.
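To give a feel for what “automatic retries” and built-in load balancing look like, here is a fragment of an Envoy-style route and cluster, rendered as Python dictionaries rather than the YAML Envoy actually consumes. The exact fields vary by Envoy version, so treat this as a sketch rather than a drop-in configuration.

```python
# Envoy-style route and cluster fragments, shown as Python dicts instead of the
# YAML Envoy actually consumes. Fields vary by Envoy version; this is a sketch.
envoy_route = {
    "match": {"prefix": "/"},
    "route": {
        "cluster": "billing-api",
        "retry_policy": {
            "retry_on": "5xx,connect-failure",  # which failures trigger a retry
            "num_retries": 3,
        },
    },
}

envoy_cluster = {
    "name": "billing-api",
    "type": "STRICT_DNS",
    "lb_policy": "ROUND_ROBIN",       # Envoy does the load balancing itself
    "http2_protocol_options": {},     # enable HTTP/2 (and therefore gRPC) upstream
}
```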

Design Principles

Gloo has a few basic stated design principles that help contextualize exactly what is going on here. First of all, Gloo is clearly user-focused: the ability to route functions, rather than route functionality to an API, is hugely user-centric and tilts the entire user-resource relationship in the user’s favor.

Gloo also focuses on the idea of proxy enhancement. In this approach, Envoy is used to manipulate each request into the form its destination needs, transforming both the request and the response along the way. These transformations can be applied through a diverse set of filters, such as AWS Lambda filters, Google Cloud Functions filters, and more. Filters at the function level allow for a veritable “babel fish” of APIs.
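As a conceptual example of what such a filter does (this is not Gloo’s or Envoy’s filter code, just the idea behind it), a request bound for an AWS Lambda-style function has to be reshaped from a plain HTTP request into the event payload the function expects:

```python
import json

def http_to_lambda_event(method: str, path: str, headers: dict, body: bytes) -> str:
    """Reshape a plain HTTP request into a Lambda-style invocation payload."""
    event = {
        "httpMethod": method,
        "path": path,
        "headers": headers,
        "body": body.decode("utf-8"),
    }
    return json.dumps(event)

# The gateway invokes the function with this payload and then performs the
# reverse transformation on the function's response for the original caller.
payload = http_to_lambda_event(
    "POST", "/charge", {"content-type": "application/json"}, b'{"amount": 1000}'
)
print(payload)
```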

Extensibility is also heavily discussed in the documentation. From the ground up, Gloo is meant to be pluggable and modular. By giving the user that degree of customizability, the system becomes that much more flexible and allows for a wider range of functionality and interoperability between different systems.

To that end, Gloo was also built with usability as a focus. Making a project easy to consume is one part of that; making it naturally easy to extend is the other side of the coin. Several tools were developed to help with this, including:

  • glooctl – a command-line tool that makes it easy to work with Gloo’s functions and routes.
  • TheTool – an automation system that lets users automate much of their work within Gloo proper and across the integrated services.
  • Support as a Kubernetes Ingress Controller by default.

All of this ultimately adds up to a service that was clearly designed with the “user first” mentality in mind.

The Basic Workflow

The basic Gloo workflow is relatively simple and takes place over four general steps. These steps can, in theory at least, be done in any order, but it’s best to follow the order laid out in the documentation for the sake of troubleshooting should something go wrong. A rough sketch of the whole flow follows the list.

  1. Deploy Gloo. First of all, you need to actually deploy Gloo. This can be done as a Kubernetes pod, as a Docker container, or on a bare server; Gloo can run pretty much anywhere with ease.
  2. Deploy Discovery Services. Next, you should deploy the Gloo discovery services, which help automatically populate the Gloo config. The documentation considers this step optional, but skipping it just sets you up for more manual work down the road.
  3. Proxy. Next, deploy an Envoy proxy with its configuration pointing at Gloo.
  4. Routing. Finally, you need to set up a route and an upstream so that Gloo can route your functions. From here, you can essentially use Gloo for all of your function-routing needs.
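On a Kubernetes cluster, the four steps might boil down to something like the sketch below. The manifest file names are placeholders, and the actual install flow depends on the Gloo version and its documentation; the point is simply the shape of the workflow.

```python
import subprocess

def apply(manifest: str) -> None:
    """Apply a Kubernetes manifest; the file names below are placeholders."""
    subprocess.run(["kubectl", "apply", "-f", manifest], check=True)

apply("gloo.yaml")                 # 1. Deploy Gloo itself (here, as Kubernetes pods)
apply("gloo-discovery.yaml")       # 2. Deploy the discovery services (optional, but saves work)
apply("envoy-proxy.yaml")          # 3. Deploy an Envoy proxy whose config points at Gloo
apply("upstream-and-routes.yaml")  # 4. Create an upstream and the routes to its functions
```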

Caveats

All of this being said, Gloo is not always the best solution. For APIs that have no need for external function tie-ins, Gloo is essentially a wasted system. The value Gloo adds is beyond question, but it only materializes when you need to bridge your functions to external APIs; if your API is not reaching out to external APIs, then there’s really no reason to integrate.

Additionally, Gloo may not be appropriate in business-to-business environments. In those cases, the APIs to be used are typically specified by contract, and as such, your interactions should be limited and specific. There is, of course, the caveat that contracts often state end goals rather than the way in which those goals must be achieved, and in those cases, Gloo is more than appropriate.

Conclusion

Gloo is a powerful tool in that it fills a very specific niche, and does so with a user-first mentality. Whether or not the implementation is adequate for an API’s given needs should be determined by each provider, but Gloo should definitely be considered as part of a larger toolkit.

Have you used Gloo? What were your thoughts? Would you recommend it to others? Let us know below.