Message Queue Overload from High Processing Latency

Message Queue Overload can occur when consumers cannot keep up with the work being created by producers. This can happen unexpectedly when processing latency increases dramatically. Here’s one spot to look out for: network calls, such as HTTP requests, made while processing messages. Without proper timeouts, they can drive up processing latency and overload a message queue.

Message Queue Overload

What exactly do I mean by overloading a message queue? Simply that consumers cannot process messages fast enough to keep the number of messages sitting in the queue low, or at zero. In other words, when a message is produced, the consumers are so busy that the message has to sit and wait before a consumer is finally able to consume it.

As an example, let’s say you have 10 consumers that are each able to process 1 message at a time. Each message takes 200ms to process. This means that you can process 50 messages per second.
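To make the math explicit, here’s that throughput calculation as a quick snippet (the numbers are just the ones from the example above):

```csharp
using System;

// Rough throughput math: 10 consumers, each taking 200ms per message.
int consumers = 10;
double secondsPerMessage = 0.200;

double messagesPerSecond = consumers / secondsPerMessage;
Console.WriteLine(messagesPerSecond); // 50
```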

If you start producing more than 50 messages per second, you’ve overloaded your queue. You’re basically a boat taking on water faster than you can bail it out.

This goes without saying, but most systems don’t see a constant load. They aren’t consuming the same number of messages at the same rate all the time; there are often peaks and valleys that allow the system to catch up. A lot of this also depends on what kind of overall latency is acceptable.

Solutions

There are two primary solutions for handling this. The first and most obvious is to increase the number of consumers. The more consumers you add, the more messages you can process.

In my example above, if you’re producing 55 messages per second, then you need to add one more consumer, for a total of 11.

Processing Latency

The second solution is to reduce processing latency. Instead of having each message take 200ms, we optimize the message processing to take 100ms per message. With our 10 consumers, we can now process 100 messages per second.

The opposite can also occur, where processing latency actually increases. Instead of messages taking 200ms, they suddenly take longer. If, for example, a message starts taking 500ms to process, your throughput with 10 consumers drops to a total of 20 messages per second. This will overload your queue.

Network I/O

The most common reason I’ve seen processing latency increase is network I/O.

When processing a message, let’s say you must make an HTTP call to an external service. If this normally takes on average 100ms, but suddenly takes longer, then the overall processing latency will increase.

This can happen with any network call, but I’ve found this most often to occur with services that are out of your control/ownership.

In .NET, the HttpClient default timeout is an absurd 100 seconds.

If a service you call while processing a message suddenly takes more than 100 seconds to respond, HttpClient will wait that entire time before throwing a timeout exception. This will overload your queue very quickly.

Adding more consumers is unlikely to solve your problem with a 100-second timeout.

Timeouts

The moral of the story is to be very aware of the network calls you’re making when processing messages. Understand what level of latency is acceptable and what cannot be exceeded.

Add timeouts to any HttpClient calls, or if you’re using a library that sits on top of a message broker, use its built-in timeouts around the processing of an entire message.
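Here’s a minimal sketch of the HttpClient side. The OrderPlaced message and the URL are made up for illustration, and the 5-second timeout is just an example; pick a value based on the latency your queue can actually tolerate.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class OrderPlacedConsumer
{
    // Cap how long an external call can hold up message processing.
    // The default HttpClient.Timeout is 100 seconds.
    private static readonly HttpClient Client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(5)
    };

    public async Task Handle(OrderPlaced message)
    {
        try
        {
            var response = await Client.GetAsync("https://external-service.example/api/orders");
            response.EnsureSuccessStatusCode();
        }
        catch (TaskCanceledException)
        {
            // HttpClient surfaces an elapsed timeout as a TaskCanceledException.
            // Decide here whether to retry, dead-letter, or fail the message.
        }
    }
}

// Hypothetical message type, only here to make the sketch compile.
public class OrderPlaced { }
```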

Using Hangfire and MediatR as a Message Dispatcher

Two popular libraries, Hangfire and MediatR, can be used together to create a pretty powerful out-of-process message dispatcher. I covered this a few years ago, but I figured I’d give it a refresh since it’s a bit easier in the world of ASP.NET Core.

Hangfire and MediatR Bridge/Wrapper

The first thing you need to do is create a bridge/wrapper around MediatR. At first this may seem completely pointless, but it has a purpose. Hangfire has a lot of extensibility in how jobs are executed, and much of it is defined using attributes. Because we don’t own the MediatR library, we need our own class that we can define these attributes on.

Here’s a simple start to our Bridge/Wrapper.
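The original snippet isn’t included here, so the following is a sketch of what such a bridge can look like. The class name MediatorBridge is my own invention; the relevant parts are the Send() overload taking a jobName and the DisplayName attribute.

```csharp
using System.ComponentModel;
using System.Threading.Tasks;
using MediatR;

// Hypothetical bridge/wrapper: it simply forwards to MediatR, but because
// we own it, we can decorate it with attributes that Hangfire understands.
public class MediatorBridge
{
    private readonly IMediator _mediator;

    public MediatorBridge(IMediator mediator) => _mediator = mediator;

    // "{0}" tells Hangfire to show the first argument (jobName) as the
    // job's name in the dashboard.
    [DisplayName("{0}")]
    public Task Send(string jobName, IRequest request) => _mediator.Send(request);

    public Task Send(IRequest request) => _mediator.Send(request);
}
```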

In the example above, I have an overload for Send() that accepts the jobName as the first parameter. The DisplayName attribute is used by Hangfire to show the name of the job in the UI dashboard. This is a simple example of why we need this wrapper.

MediatR Extensions

The next piece of the puzzle is creating extension methods so we can use Hangfire to create background jobs. In the example below, I’ve created Enqueue() extension methods that use the Hangfire BackgroundJobClient to enqueue work through our Bridge/Wrapper.
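Since the original snippet isn’t included, here’s a hedged sketch of those extension methods, reusing the hypothetical MediatorBridge from above:

```csharp
using Hangfire;
using MediatR;

public static class MediatorExtensions
{
    // The mediator parameter only exists so the extension hangs off IMediator;
    // Hangfire resolves and invokes the bridge itself when the job runs.
    public static void Enqueue(this IMediator mediator, string jobName, IRequest request)
    {
        var client = new BackgroundJobClient();
        client.Enqueue<MediatorBridge>(bridge => bridge.Send(jobName, request));
    }

    public static void Enqueue(this IMediator mediator, IRequest request)
    {
        var client = new BackgroundJobClient();
        client.Enqueue<MediatorBridge>(bridge => bridge.Send(request));
    }
}
```

The post describes using the Hangfire BackgroundJobClient, so that’s what’s sketched here; you could just as easily inject an IBackgroundJobClient instead of newing one up.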

Hangfire Serialization

In the BackgroundJobClient.Enqueue() above, Hangfire uses Json.NET to serialize the IRequest that we pass to Send(), which ultimately gets put into storage. When Hangfire later pulls that job out of storage to perform the work, it needs to deserialize it. Because it’s just an IRequest, it has no way of turning that into a concrete type.

To handle this, we need to configure Hangfire to add type handling when it serializes/deserializes.
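One way to do that, assuming Hangfire’s UseSerializerSettings configuration extension and Json.NET’s TypeNameHandling (the same settings appear inside AddHangfire in the worker example further below):

```csharp
using Hangfire;
using Newtonsoft.Json;

// Embed .NET type names in the serialized job arguments so Hangfire can
// deserialize the IRequest back into its concrete request type.
GlobalConfiguration.Configuration.UseSerializerSettings(new JsonSerializerSettings
{
    TypeNameHandling = TypeNameHandling.All
});
```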

ConfigureServices

The last thing we need to do is actually configure Hangfire in the ConfigureServices of either your ASP.NET Core Startup or your HostBuilder.

Here’s an example of my Worker process. This is just a console app that acts purely as a Hangfire server, processing jobs from Hangfire.
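The original code isn’t included here; a worker along these lines gives the general shape, assuming the Hangfire.AspNetCore, Hangfire.MemoryStorage, and MediatR.Extensions.Microsoft.DependencyInjection packages (swap the storage and the MediatR registration style for whatever versions you’re actually using):

```csharp
using Hangfire;
using Hangfire.MemoryStorage;
using MediatR;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Newtonsoft.Json;

Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // Register MediatR handlers from the assembly that contains them,
        // plus the bridge so Hangfire can resolve it from the container.
        services.AddMediatR(typeof(MediatorBridge).Assembly);
        services.AddTransient<MediatorBridge>();

        services.AddHangfire(config => config
            .UseSerializerSettings(new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All })
            .UseMemoryStorage());

        // Turns this console app into a Hangfire server that processes jobs.
        services.AddHangfireServer();
    })
    .Build()
    .Run();
```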

Enqueuing a Request

Enqueuing a MediatR request to Hangfire is simply a matter of calling the Enqueue() extension method we created on IMediator.
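The embedded sample isn’t shown here, so here’s roughly what that call site can look like. The OrdersController route, the PlaceOrder request, and its handler are illustrative, and the handler signature matches older MediatR versions (IRequestHandler&lt;T&gt; returning Task&lt;Unit&gt;):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;

// Illustrative request and handler.
public class PlaceOrder : IRequest { }

public class PlaceOrderHandler : IRequestHandler<PlaceOrder>
{
    public Task<Unit> Handle(PlaceOrder request, CancellationToken cancellationToken)
        => throw new InvalidOperationException("Thrown out-of-process; the HTTP caller never sees it.");
}

[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    private readonly IMediator _mediator;

    public OrdersController(IMediator mediator) => _mediator = mediator;

    [HttpPost]
    public IActionResult PlaceOrder()
    {
        // Enqueue a Hangfire background job instead of handling the request in-process.
        _mediator.Enqueue("PlaceOrder", new PlaceOrder());
        return NoContent(); // 204 returned right away; the handler runs later, elsewhere
    }
}
```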

In the example above, the HTTP request to our controller action will return a 204 NoContent even though we are throwing in our PlaceOrderHandler, because its execution actually happens outside the context of the HTTP request. That could be on a different thread, in an entirely different process, or on a different server altogether.

More

There’s a lot more you can do with this and take it much further, handling things like event publishing so that each event handler becomes its own job within Hangfire.

You can get all the source code of this running example on my GitHub.

If you have any thoughts or questions, please leave a comment on my blog, Twitter, or on the YouTube video.

Moving work Out-of-Process using Brighter and RabbitMQ

Once you start doing in-process request and event dispatching, you’ll soon want to move out-of-process so you can isolate work from the caller/invoker. This is often the next logical step if you’re using MediatR for commands, and especially for events/notifications. Here’s how you can use the same paradigm for in-process and out-of-process work using Brighter and RabbitMQ.

MediatR

For those unfamiliar with MediatR library or the mediator pattern:

In software engineering, the mediator pattern defines an object that encapsulates how a set of objects interact. This pattern is considered to be a behavioral pattern due to the way it can alter the program’s running behavior.

I covered MediatR, why you might want to consider it, and also one big drawback in my why use MediatR post.

One of the biggest downsides to in-process processing (which is what MediatR does) is that it doesn’t isolate the actual handling of a request from the caller. This isn’t a problem with MediatR at all, it’s just the nature of executing in-process.

For example, in the case of ASP.NET Core, you’re tied to the originating HTTP request. If the request you’re processing throws an exception, it will bubble up to ASP.NET Core, which will return an HTTP 500 to the client.

Brighter

Brighter is a command dispatcher and command processor. I’m using it for these examples because if you’re familiar with MediatR, Brighter will seem very similar in terms of the API. It can dispatch requests in-process as well as go out-of-process by using a Message Broker like RabbitMQ.

There’s a whole lot that Brighter can do, but for the purpose of this post, I just want to show you how you can move from in-process to out-of-process rather seamlessly.

In my opinion, using the same paradigm with the same library can be incredibly beneficial. For example, if you’re using an in-process approach for ASP.NET Core routes, you can at a later time decide to move them out-of-process with some rather minor code changes.

Here’s an example of using Brighter in exactly the same fashion as MediatR. This simply creates a PlaceOrder object and then uses Brighter’s CommandProcessor to send that request. Brighter will then invoke the PlaceOrderHandler.Handle method and pass it the PlaceOrder object you created.
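Since the original embed isn’t included, here’s a sketch of what that can look like. The controller, route, and exception message are illustrative; the Brighter pieces are the Command base class, RequestHandler&lt;T&gt;, and IAmACommandProcessor.Send().

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Paramore.Brighter;

public class PlaceOrder : Command
{
    public PlaceOrder() : base(Guid.NewGuid()) { }
}

public class PlaceOrderHandler : RequestHandler<PlaceOrder>
{
    public override PlaceOrder Handle(PlaceOrder command)
    {
        // Simulate a failure while handling the command.
        throw new InvalidOperationException("Order could not be placed.");
    }
}

[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    private readonly IAmACommandProcessor _commandProcessor;

    public OrdersController(IAmACommandProcessor commandProcessor) => _commandProcessor = commandProcessor;

    [HttpPost]
    public IActionResult PlaceOrder()
    {
        // In-process: PlaceOrderHandler.Handle runs right here, inside the HTTP request.
        _commandProcessor.Send(new PlaceOrder());
        return NoContent();
    }
}
```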

If you’re familiar with MediatR, this is very very similar. And as mentioned, this is still in-process.

In the above example, because we’re throwing an InvalidOperationException, this will bubble up and cause ASP.NET Core to return the client an HTTP 500.

Out-of-Process

Moving work out-of-process means executing the work that needs to be done for a request in an entirely different physical process from where the request originated. Because of this, the work will be done asynchronously.

Here’s a simple overview of how that looks.

There are two processes: the HTTP Server using ASP.NET Core, and a Message Processor. Both are separate .NET console apps. The Contracts and Implementation projects are class libraries that are referenced by both the HTTP Server and the Message Processor.

When an HTTP Request comes to ASP.NET Core, it invokes the OrdersController and the PlaceOrder route action. Instead of doing the work required then, we will use Brighter to send a message to the message broker (RabbitMQ).

Once the message has been sent to RabbitMQ, ASP.NET Core continues and the client immediately receives the HTTP 204 NoContent that we’re returning.

The client will no longer get the InvalidOperationException surfacing as an HTTP 500 error, because we’re not doing that work in-process.

Here’s how the message process works:

Brighter is configured in the Message Processor to receive messages from RabbitMQ. Brighter will receive the PlaceOrder request and then execute the PlaceOrderHandler.Handle passing the PlaceOrder object.

The code change to move from in-process to out-of-process is changing one method.

Instead of calling Send() on the CommandProcessor, we’re calling Post().
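In the controller action, that’s the entire change:

```csharp
// Before (in-process): the handler runs inside the HTTP request.
_commandProcessor.Send(new PlaceOrder());

// After (out-of-process): the command is serialized and posted to RabbitMQ,
// and the Message Processor picks it up and runs the handler.
_commandProcessor.Post(new PlaceOrder());
```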

That’s it.

Mind you, there’s some configuration required in the ASP.NET Core and the MessageProcessor projects. Here’s a sample of the ConfigureServices in the ASP.NET Core Startup class.
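The original sample isn’t reproduced here, and Brighter’s registration API has changed across versions, so treat the following as a rough sketch only. It assumes the Paramore.Brighter.Extensions.DependencyInjection and Paramore.Brighter.MessagingGateway.RMQ packages; the connection URI and exchange name are illustrative.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Paramore.Brighter;
using Paramore.Brighter.Extensions.DependencyInjection;
using Paramore.Brighter.MessagingGateway.RMQ;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // RabbitMQ connection details; adjust for your broker.
        var rmqConnection = new RmqMessagingGatewayConnection
        {
            AmpqUri = new AmqpUriSpecification(new Uri("amqp://guest:guest@localhost:5672")),
            Exchange = new Exchange("paramore.brighter.exchange")
        };

        // Registers the command processor, scans assemblies for handlers and
        // message mappers, and wires an external bus so Post() publishes to RabbitMQ.
        services.AddBrighter()
            .UseExternalBus(new RmqMessageProducer(rmqConnection))
            .AutoFromAssemblies();
    }
}
```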

Retries & More

One of the nice things about running out-of-process is having the ability to retry on failures. This is pretty straightforward with Brighter: use a UsePolicy attribute on the Handle method.
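A sketch of what that can look like; "OrderRetryPolicy" is a made-up name for a Polly policy that you would register with Brighter’s policy registry at configuration time.

```csharp
using Paramore.Brighter;
using Paramore.Brighter.Policies.Attributes;

public class PlaceOrderHandler : RequestHandler<PlaceOrder>
{
    // Wraps Handle in the named Polly policy (e.g. a retry policy)
    // registered with Brighter's policy registry.
    [UsePolicy("OrderRetryPolicy", 1)]
    public override PlaceOrder Handle(PlaceOrder command)
    {
        // ... do the actual work; transient failures are retried by the policy
        return base.Handle(command);
    }
}
```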

More

Brighter has a lot of features that can be found in their docs.

I realize in this post I have not covered the reasons why or when you should move work out-of-process. I’ll be covering those topics more in future blog posts and videos.

The intent of this post was to showcase Brighter and how a library that handles both in-process and out-of-process dispatching can be very beneficial when you need to make the transition.

If you have any questions or comments, let me know in the comments, Twitter, or YouTube.
