Micromessaging: Connecting Heroku Microservices w/Redis and RabbitMQ

Erin Swenson-Healey

While attempting to deploy a system of interconnected microservices to Heroku recently, I discovered that processes running in dynos in the same application cannot talk to each other via HTTP. I had originally planned on each microservice implementing a “REST” API – but this wasn’t going to be an option if I wanted to stick with Heroku. Much head-scratching ensued.

The solution, it turns out, is to communicate between microservices through a centralized message broker – in my case, a Redis database (but I’ll show you how to do it with RabbitMQ as well, free of charge). The design of each microservice API has been decoupled from HTTP entirely: client/server communication is achieved by enqueueing JSON-RPC 2.0-encoded messages in a list, with BRPOP and return-queues used to emulate HTTP request/response semantics. The Redis database serves as a load balancer of sorts, enabling easy horizontal scaling of each microservice (each running in its own Heroku dyno) on an as-needed basis. Redis ensures that a message is dequeued by only a single consumer, so you can spin up a lot of dynos without worrying that they’ll clobber each other’s work. It’s pretty sa-weet.

So how’d I do it, you ask? Read on!

The Microservice Core

To keep things simple, the server side of our application will consist of a service that calculates the sum or difference of two numbers:
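A sketch of such a service, with the class and method names as assumptions about the original listing:

```ruby
# Pure domain logic: no transport concerns, just arithmetic.
# (Class and method names are assumptions.)
class Calculator
  def add(x, y)
    x + y
  end

  def subtract(x, y)
    x - y
  end
end
```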

Pretty straightforward. Nothing about transport here at all – just domain logic: adding, subtracting.

JSON-RPC 2.0

Communication between client and server is achieved by sending and receiving JSON-RPC 2.0-encoded messages. These messages include the information required to – you guessed it – effect a method call in a remote system. To give you an idea of what this looks like in the context of our calculator:

JSON-RPC 2.0 Request
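A request to add two numbers might look like this (the id value shown is just an example UUID):

```json
{
  "jsonrpc": "2.0",
  "id": "0d1f6f3e-7b9a-4f1d-9c2b-1a2b3c4d5e6f",
  "method": "add",
  "params": [1, 2]
}
```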

JSON-RPC 2.0 Response
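The corresponding successful response carries the matching id and the computed result:

```json
{
  "jsonrpc": "2.0",
  "id": "0d1f6f3e-7b9a-4f1d-9c2b-1a2b3c4d5e6f",
  "result": 3
}
```

Had the call failed, the “result” property would be replaced by an “error” object with “code” and “message” members, per the spec.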

Our request-messages have four properties: “jsonrpc”, which is always “2.0” as per the spec; “id”, which is a unique identifier created by the client; “method”, which is the name of the method we intend to call on the server; and “params”, which contains the values we intend to pass to the method call. The response-message includes the “jsonrpc” and “id” properties in addition to “result” (if the method call was successful) or “error” (if it was not).

Note that the message is transport-agnostic. Seeing where I’m going here? Using JSON-RPC allows us to communicate between components in our system – even when HTTP isn’t an option.

Redis as a Message Broker

As you’ve seen, our JSON-RPC messages say nothing about how they’re transported between client and server; we can send a message however we want (HTTP, SMTP, ABC, BBD, TLC, OPP, etc.). In this case, we’ll implement “sending” a message to the server as an LPUSH onto a Redis list. Let’s bang out a quick client and I’ll explain the interesting parts:
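A sketch of the client (the CalcClient class name is an assumption; the redis gem is loaded lazily so the JSON-RPC helpers work without it, and making real calls requires a running Redis server):

```ruby
require 'json'
require 'securerandom'

class CalcClient
  QUEUE = 'calc'.freeze

  # Inject anything that answers lpush/brpop (a real Redis client, or a
  # stub in tests); the redis gem is only loaded when nothing is injected.
  def initialize(redis = nil)
    require 'redis' unless redis
    @redis = redis || Redis.new
  end

  # Build a JSON-RPC 2.0 request-message with a fresh id.
  def self.request(method, params)
    { jsonrpc: '2.0', id: SecureRandom.uuid, method: method, params: params }
  end

  def call(method, *params)
    message = self.class.request(method, params)
    @redis.lpush(QUEUE, JSON.generate(message))   # "send" the request
    _list, raw = @redis.brpop(message[:id])       # block on the return queue
    response = JSON.parse(raw)
    raise response['error']['message'] if response.key?('error')
    response['result']
  end
end

# With a server consuming the "calc" list:
#   CalcClient.new.call('add', 1, 2)  # => 3
```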

First, we instantiate a Redis client. We use it to LPUSH our messages onto a list that both the API client and server know about (“calc”). After enqueueing a request-message, we use BRPOP to block until a response-message arrives in a separate return list. This return list’s name is equal to the id on the JSON-RPC request-message. Once we get a result (encoded as JSON), we simply parse it and write it to the console.

Next, let’s build out the server:
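A sketch of the server (the CalcServer name is an assumption; it repeats the Calculator core so the file stands alone, and needs the redis gem plus a running Redis server to actually run):

```ruby
require 'json'

# The domain core again, repeated so this file stands alone.
class Calculator
  def add(x, y); x + y; end
  def subtract(x, y); x - y; end
end

class CalcServer
  QUEUE = 'calc'.freeze

  def initialize(calculator, redis = nil)
    require 'redis' unless redis
    @calc  = calculator
    @redis = redis || Redis.new
  end

  # Turn one raw JSON-RPC request into an encoded response.
  def handle(raw)
    request = JSON.parse(raw)
    result  = @calc.public_send(request['method'], *request['params'])
    JSON.generate(jsonrpc: '2.0', id: request['id'], result: result)
  rescue NoMethodError
    JSON.generate(jsonrpc: '2.0', id: request['id'],
                  error: { code: -32601, message: 'Method not found' })
  end

  # Block on the calc list forever, answering on per-request return queues.
  def run
    loop do
      _list, raw = @redis.brpop(QUEUE)
      response = handle(raw)
      @redis.lpush(JSON.parse(response)['id'], response)
    end
  end
end

# To run against a real broker: CalcServer.new(Calculator.new).run
```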

The server implementation reduces to a loop in which we block until an inbound JSON-RPC message arrives in our calc list. When a message is received, we parse it and pass its arguments to the appropriate method on our calculator instance. Using the id from the request, we enqueue our response-message in a return list and resume polling.

The cool thing about this approach is that we can spin up as many Heroku dynos as we want and Redis will do the load balancing for us. Web scale!

Connecting Through RabbitMQ

In case Redis is not for you, we can achieve the same goals by using RabbitMQ as a message broker. The implementation is straightforward and has a similar feel to what we’ve done with Redis:
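A sketch of the server using the Bunny gem (which needs a running RabbitMQ broker; publishing replies to a queue named after the request id mirrors the Redis version and is an assumption about the original listing):

```ruby
require 'json'

class Calculator
  def add(x, y); x + y; end
  def subtract(x, y); x - y; end
end

# Turn one raw JSON-RPC payload into an encoded response (pure, broker-agnostic).
def handle_rpc(calc, raw)
  request = JSON.parse(raw)
  result  = calc.public_send(request['method'], *request['params'])
  JSON.generate(jsonrpc: '2.0', id: request['id'], result: result)
end

# Consume the "calc" queue via RabbitMQ; the bunny gem is loaded lazily so
# the helpers above are usable without it.
def run_server
  require 'bunny'
  conn = Bunny.new
  conn.start
  channel = conn.create_channel
  queue   = channel.queue('calc')
  calc    = Calculator.new

  # block: true keeps this thread alive in place of a while loop
  queue.subscribe(block: true) do |_delivery, _properties, payload|
    id = JSON.parse(payload)['id']
    channel.default_exchange.publish(handle_rpc(calc, payload), routing_key: id)
  end
end
```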

The only major difference here is that instead of a never-ending while loop, the call to subscribe is passed block: true. This causes the calling thread to block, preventing the program from exiting until we interrupt it.

Now, for our client:
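A sketch of the client side (again needing the bunny gem and a running broker; the convention of blocking on a reply queue named after the request id mirrors the Redis version and is an assumption):

```ruby
require 'json'
require 'securerandom'

# Build a JSON-RPC 2.0 request-message (pure).
def rpc_request(method, params)
  { jsonrpc: '2.0', id: SecureRandom.uuid, method: method, params: params }
end

# Publish to the "calc" queue and block on a reply queue named after the
# request id; the bunny gem is loaded lazily.
def rpc_call(method, *params)
  require 'bunny'
  conn = Bunny.new
  conn.start
  channel = conn.create_channel
  message = rpc_request(method, params)
  reply_q = channel.queue(message[:id], auto_delete: true)

  channel.default_exchange.publish(JSON.generate(message), routing_key: 'calc')

  result = nil
  reply_q.subscribe(block: true) do |delivery, _properties, payload|
    result = JSON.parse(payload)['result']
    delivery.consumer.cancel   # stop blocking after the first reply
  end
  conn.close
  result
end

# rpc_call('add', 1, 2)  # => 3, with the server running
```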

If You Come Crawling Back to HTTP…

If you decide to migrate to a new platform that allows you to use HTTP, you can do so with little impact on your codebase. We’ll use Sinatra to handle HTTP requests, parsing the request body and marshalling the necessary bits to our calculator:
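A sketch of that wiring, with the JSON-RPC handling pulled into a pure helper so the route body stays thin (the handler names are assumptions; the route itself is shown as a comment since running it requires the sinatra gem):

```ruby
require 'json'

class Calculator
  def add(x, y); x + y; end
  def subtract(x, y); x - y; end
end

# Parse the request body, dispatch to the calculator, encode the response.
def handle_rpc(calc, raw)
  request = JSON.parse(raw)
  result  = calc.public_send(request['method'], *request['params'])
  JSON.generate(jsonrpc: '2.0', id: request['id'], result: result)
end

# Wiring this into Sinatra:
#
#   require 'sinatra'
#
#   post '/calc' do
#     content_type :json
#     handle_rpc(Calculator.new, request.body.read)
#   end
```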

Clients can now communicate with our calculator by issuing HTTP POST requests to our “/calc” endpoint:
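For example, with Ruby’s standard-library Net::HTTP (the localhost URL is an assumption about where the Sinatra app is running):

```ruby
require 'json'
require 'net/http'
require 'securerandom'

# Encode a JSON-RPC 2.0 request body (pure).
def rpc_body(method, params)
  JSON.generate(jsonrpc: '2.0', id: SecureRandom.uuid, method: method, params: params)
end

# POST the request to the /calc endpoint and pull out the result.
def http_rpc(uri, method, *params)
  response = Net::HTTP.post(URI(uri), rpc_body(method, params),
                            'Content-Type' => 'application/json')
  JSON.parse(response.body)['result']
end

# http_rpc('http://localhost:4567/calc', 'add', 1, 2)
```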

Cleaning Up the Cruft

Our client code is easy to understand, but a bit verbose. Let’s reduce some of the boilerplate by introducing a new class that uses method_missing to forward calls to our remote API.
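A sketch of such a wrapper (the RpcClient name and the injectable-connection constructor are assumptions, the latter to keep the class testable without a live Redis server):

```ruby
require 'json'
require 'securerandom'

# Turns unknown method calls into JSON-RPC messages over Redis. Inject any
# object answering lpush/brpop, or let it lazily load the redis gem.
class RpcClient
  def initialize(queue, redis = nil)
    require 'redis' unless redis
    @queue = queue
    @redis = redis || Redis.new
  end

  def method_missing(name, *args)
    message = { jsonrpc: '2.0', id: SecureRandom.uuid,
                method: name.to_s, params: args }
    @redis.lpush(@queue, JSON.generate(message))   # enqueue the request
    _list, raw = @redis.brpop(message[:id])        # block on the return queue
    response = JSON.parse(raw)
    raise response['error']['message'] if response.key?('error')
    response['result']
  end

  def respond_to_missing?(_name, _include_private = false)
    true   # every call is forwarded to the remote service
  end
end

# calc = RpcClient.new('calc')
# calc.add(1, 2)   # => 3, with a server consuming the "calc" list
```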

Now we have a reusable Redis-enabled API client whose interface hides the details of serializing hashes to JSON and other boring stuff. Something similar could be done on the server side, deserializing JSON to a method name and args to pass to the calculator instance.

In Summary

Occasionally, platforms prevent us from using transport technologies that we’re familiar with – HTTP, in this case – and we’re stuck investigating new ways of linking things together. In this tutorial I’ve shown you a few ways to connect the pieces of your system through a centralized message broker. By decoupling our API design from any one particular transport, we’ve achieved a flexibility unattainable by traditional “REST” APIs, unlocking the ability to horizontally scale our microservices across Heroku dynos with ease.

Notes

  1. I’m referring to a “Level Two” implementation as per the Richardson Maturity Model.