Synchronous vs Messaging: When to use which?

Within a system, not all communication will be synchronous request/response over HTTP/RPC, nor will it all be asynchronous messaging. So how do you choose between synchronous calls and messaging? It depends on whether you're performing a command or a query, as well as where the request originates. If you want reliability and resiliency, use messaging where it's appropriate.

YouTube

Check out my YouTube channel, where I post all kinds of content that accompanies my posts, including a video that covers everything in this post.

Synchronous

The most common places you'll encounter synchronous request/response calls are to 3rd party services, to infrastructure (like a database, cache, etc.), or from a UI/client.

A typical example would be a JavaScript frontend application making an HTTP call to a Web API, or your backend making a call to a cloud service such as Blob Storage. Integration with newer B2B services is usually done with synchronous request/response calls over HTTP.

Asynchronous

Communicating asynchronously most often happens between internal services that your team or another team owns within your organization. Asynchronous messaging is also a good approach within your own service or monolith. Check out my post on using Message Driven Architecture to decouple a monolith for more.

In B2B, a common use case for asynchronous communication is EDI where you’re exchanging files via mailboxes.

Commands & Queries

One way to decide between synchronous calls and messaging is whether you're performing a command or a query. A command is a request to change state; a query is a request to return state.

If you have a JavaScript application in the browser that's communicating with an HTTP API backend service, you're going to be using HTTP for synchronous request/response. If the backend service is communicating with a database, that will also be synchronous. This is to be expected and is how most applications work.

However, when you’re communicating between internal services, I recommend communicating asynchronously using a Message Driven Architecture via Events and Commands.

This means your services are communicating via a Message Broker to send commands to a queue or publish messages to a topic.
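As a rough sketch of what that can look like in code (the broker interface below is a hypothetical stand-in for your broker's SDK, not a real library), a service sends a command to a queue owned by a single consumer, or publishes an event to a topic that any number of subscribers can consume:

```typescript
// Hypothetical broker abstraction; in practice you'd use your broker's SDK
// (RabbitMQ, Azure Service Bus, Amazon SQS/SNS, etc.).
interface MessageBroker {
  send(queue: string, command: object): Promise<void>;    // command: one logical receiver
  publish(topic: string, event: object): Promise<void>;   // event: zero or more subscribers
}

async function placeOrder(broker: MessageBroker, orderId: string) {
  // Command: tell a specific service to do something.
  await broker.send("billing-commands", { type: "BillOrder", orderId });

  // Event: announce that something happened; subscribers decide how to react.
  await broker.publish("sales-events", { type: "OrderPlaced", orderId });
}
```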

The reason I do not recommend HTTP for communication between services is the complexity of dealing with latency issues, availability concerns, resilience, difficulty debugging, and, most importantly, coupling.

Check out my post on REST APIs for Microservices? Beware! for more info on why you should avoid it as the primary way to communicate between services.

Origination

An important distinction besides commands and queries in the Synchronous vs Messaging debate is where the request originated from. For example, a client UI/browser sends an HTTP Request to our Web API Backend service, which then makes a synchronous call to a 3rd party service.

But what happens when that 3rd party service is unavailable or timing out? What do we return to the client UI? Do we just send back an error message? If the 3rd party service is critical to your application/service, does that mean that as long as it's unavailable, your service is also unavailable?

Take this exact same situation but change the originator to be a message broker, and the implications are very different.

If the synchronous call from our app/service was triggered by a message from a message broker and the 3rd party service is unavailable, then we have many different courses of action.

We can do an immediate retry to resolve any transient errors. We can implement an exponential backoff, where we wait a period of time before retrying and increase the wait after each failure. We can also move the message to a dead letter queue, which allows us to investigate and manually retry messages once we know the 3rd party service is available again.
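As a sketch of those options (the message shape, callThirdParty, and the deadLetter callback are made-up stand-ins for whatever your broker provides), a consumer handling a failing 3rd party call might look like this:

```typescript
// Retry a 3rd party call with exponential backoff, then dead-letter the message.
// callThirdParty and deadLetter are hypothetical stand-ins.
declare function callThirdParty(payload: unknown): Promise<void>;

async function handleMessage(
  msg: { id: string; payload: unknown },
  deadLetter: (msg: object) => Promise<void>
) {
  const maxAttempts = 4;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await callThirdParty(msg.payload);   // e.g., an HTTP call that may time out
      return;                              // success: the message is done
    } catch {
      if (attempt === maxAttempts) {
        await deadLetter(msg);             // park it to investigate and retry later
        return;
      }
      // Exponential backoff: wait 1s, 2s, 4s... before the next attempt.
      await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
}
```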

You have many different options. Check out my post on Handling Failures in a Message Driven Architecture for more info.

The point is that you aren't losing work that needs to be completed, nor are users going to be aware that there is potentially an issue.

To accomplish this for the above example, we simply need to change from synchronous to asynchronous at some point along the call path.

This means that our client UI will still make a synchronous call to our app/service; however, instead of calling the 3rd party immediately, we'll enqueue a message to the message broker and return an immediate response to our client UI.

Then we consume that same message asynchronously from the message broker and complete the work that communicates with the 3rd party. If there are any failures, we now have the ability to handle them with different resiliency and fault tolerance strategies, and we do not lose any work that needs to be done.
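A minimal sketch of that change (the endpoint, queue name, and 3rd party call are made up for illustration): the HTTP endpoint enqueues a message and responds immediately, and a separate consumer does the work that talks to the 3rd party.

```typescript
// HTTP endpoint: enqueue the work and respond to the client right away.
async function createShipment(
  req: { orderId: string },
  broker: { send(queue: string, msg: object): Promise<void> }
) {
  await broker.send("shipping-commands", { type: "CreateShippingLabel", orderId: req.orderId });
  return { status: 202, body: { orderId: req.orderId, state: "Processing" } };
}

// Consumer: runs separately, talks to the 3rd party, and can retry or dead-letter on failure.
declare function callThirdPartyShippingApi(orderId: string): Promise<void>;

async function onCreateShippingLabel(cmd: { orderId: string }) {
  await callThirdPartyShippingApi(cmd.orderId);
}
```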

Use Case

A good example of how this applies in real applications is when you’re in the AWS EC2 Console. This would apply to many different services with any cloud provider. I’ll use AWS EC2, which is an AWS Virtual Machine service.

If you have a running instance and you choose to stop the instance, it doesn’t immediately stop.

When you click the “Stop Instance” menu option, the browser doesn’t sit and wait for the HTTP request to finish. The request is fairly quick and then your browser displays that the Instance state is “Stopping”.

This is because of the exact example I illustrated earlier where the request from the browser was synchronous, but the actual work being performed is asynchronous.
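A sketch of what such an endpoint might do behind the scenes (the names here are hypothetical, not the actual AWS implementation): record the intent, enqueue the real work, and return the transitional state immediately.

```typescript
// Hypothetical "stop instance" endpoint: enqueue the work and return "Stopping"
// right away; the actual shutdown happens asynchronously.
async function stopInstance(
  instanceId: string,
  queue: { send(name: string, msg: object): Promise<void> }
) {
  await queue.send("instance-commands", { type: "StopInstance", instanceId });
  return { status: 202, body: { instanceId, state: "Stopping" } };  // what the console displays
}
```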

Not all interactions can be done this way. If the end-user expects to immediately see their changes, then forcing an asynchronous workflow will prove challenging. If you can set the correct expectation with the user about long-running processes, then you can leverage events to drive real-time web updates.

Check out my post on building a Real-Time Web by leveraging Event Driven Architecture.

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source for any working demo application that I post on my blog or YouTube. Check out the membership for more info.

Competing Consumers Pattern for Scalability

The Competing Consumers Pattern enables messages from Message Queues (or Topics) to be processed concurrently by multiple Consumers. This improves scalability and availability, but it also has some issues that you need to consider, such as message ordering and moving the bottleneck.

YouTube

Check out my YouTube channel, where I post all kinds of content that accompanies my posts, including a video that covers everything in this post.

Competing Consumers Pattern

Message queues are a great way to offload work so that it can be handled separately by another process. Meaning, if you have a web application that takes an HTTP request, it can add a new message to the queue and then immediately return to the client. Since the work is being done asynchronously in another process, the web application isn't blocked.

In this example, the client/browser makes an HTTP call to the web application.

If the request itself can be handled asynchronously and the client doesn't need the result in the response, then instead of performing the actual work, the web application creates a message.

That message is then sent to the message queue from the API, which is our Producer.

Then the API/Producer can return a response to the client.

The actual work to be done will be handled by a Consumer. A consumer will get the message from the Message Queue and perform the work required.

As the consumer is processing the message, our API could still be creating new messages from HTTP Requests.

And producing more and more messages that are being put into the queue.

At this point, we have 3 messages in the queue. Once the consumer is done processing the message, it will go and grab the next message in the queue to process.
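A rough sketch of that producer/consumer flow with an in-memory queue (purely illustrative; a real system would use a message broker):

```typescript
// Purely illustrative in-memory queue; a real system would use a message broker.
type QueueMessage = { type: string; orderId: string };
const queue: QueueMessage[] = [];

// Producer: the API turns each HTTP request into a message and returns immediately.
function handleHttpRequest(orderId: string) {
  queue.push({ type: "ProcessOrder", orderId });
  return { status: 202 };
}

// Consumer: pulls the next message off the queue and performs the actual work.
declare function processMessage(msg: QueueMessage): Promise<void>;

async function consumerLoop() {
  while (true) {
    const msg = queue.shift();
    if (msg) await processMessage(msg);
    else await new Promise(resolve => setTimeout(resolve, 100)); // queue empty; poll again
  }
}
```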

In this scenario, the rate at which we are producing messages and adding them to the queue exceeds how fast we can consume them.

This may or may not be an issue, depending on how quickly these messages need to be processed. If they are time-sensitive, then we must increase our throughput.

To do that, we simply increase the number of consumers processing messages off the queue.

This is called the Competing Consumers Pattern because each consumer is competing for the next message in the queue.
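Continuing the sketch above, "adding consumers" just means running more instances of the same consumer loop against the same queue; whichever consumer is free takes the next message:

```typescript
// Competing consumers: several instances of the same consumer loop share one queue.
declare function consumerLoop(): Promise<void>;

const consumerCount = 3;
await Promise.all(Array.from({ length: consumerCount }, () => consumerLoop()));
```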

Say we have two consumers that are both busy processing a message, and two messages waiting in the queue.

The moment one of the consumers finishes processing its message, it will get the next message in the queue.

Since the second consumer is free, it now gets the next message in the queue.

Increasing Throughput

The Competing Consumers Pattern allows you to scale horizontally by adding more consumers to process more messages concurrently. By increasing the number of consumers, you increase throughput and improve scalability and availability, which helps you manage the length of the queue.

The number of messages fluctuates in most systems; for example, you may see more messages during business hours than after hours. This allows you to scale the number of consumers and the resources they use: at peak hours you increase the number of deployed consumers, and off-peak you can reduce them.
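One common way to drive that scaling decision is off the queue depth itself. A sketch (getQueueLength and setConsumerCount are hypothetical stand-ins for whatever metrics and scaling APIs your platform provides):

```typescript
// Scale the consumer deployment based on how many messages are waiting.
declare function getQueueLength(queue: string): Promise<number>;
declare function setConsumerCount(deployment: string, count: number): Promise<void>;

async function autoscale() {
  const depth = await getQueueLength("orders");
  // e.g., roughly one consumer per 100 queued messages, between 1 and 10.
  const desired = Math.min(10, Math.max(1, Math.ceil(depth / 100)));
  await setConsumerCount("order-consumer", desired);
}
```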

Moving the Bottleneck

There are a couple of issues with the Competing Consumers Pattern, and the first is moving the bottleneck.

If you have a lot of messages in the queue and you add more consumers to try to increase throughput, you're going to be processing more work concurrently. This can have a negative effect on any downstream resources used in processing those messages.

For example, if processing the messages involves interacting with a database, you've simply moved the bottleneck to the database.

Any resources (databases, caches, services) will feel the effects of adding more consumers and processing more messages concurrently. Beware of moving the bottleneck: make sure downstream resources can handle the increased load.

Message Ordering

The second common issue with the Competing Consumers Pattern is the expectation of processing messages in a particular order. For example, if the producer is creating messages based on a workflow the client is performing, then the messages are created and added to the queue in a particular order.

Most message brokers support FIFO (First In First Out), which means that consumers will take the oldest message available to be processed.

This does not mean however that you won’t have correlated messages being processed at the same time.

Here's an example: a consumer is busy processing a message that's part of a larger workflow.

If the producer produces another message that relates to the first message, and there are consumers available to process it, they will.

This can have negative consequences for your system if you're expecting to process messages sequentially in a particular order. Although you will receive messages in order, that does not mean you will finish processing them in that order, because you're processing messages concurrently.
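As a small illustration of the problem (timings and message types are made up): two related messages are delivered in order, but because two consumers process them concurrently, the second one can finish first.

```typescript
// FIFO delivery, concurrent processing: the related messages start in order,
// but completion order is not guaranteed.
declare function handle(messageType: string): Promise<void>;

async function consumerA() { await handle("OrderPlaced");    }  // happens to be slow
async function consumerB() { await handle("OrderCancelled"); }  // happens to be fast

// Both run at the same time; "OrderCancelled" may complete before "OrderPlaced".
await Promise.all([consumerA(), consumerB()]);
```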

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source for any working demo application that I post on my blog or YouTube. Check out the membership for more info.

Event Based Architecture: What do you mean by EVENT?

The term “Event” is really overloaded. There are many different patterns that leverage events: Event Sourcing, Event Carried State Transfer, and Event Notifications. None of these serve the same purpose. When talking about an Event Based Architecture, be clear about which one you're using and for what purpose.

YouTube

Check out my YouTube channel, where I post all kinds of content that accompanies my posts, including a video that covers everything in this post.

Overloaded

With the popularity of Microservices, Event Driven Architecture, Event Sourcing, and the tooling around them, the term “Events” has become pretty overloaded, and I find it has been causing some confusion.

There's a lot of content available online, through blogs or videos, that uses the term Event incorrectly or ambiguously. Almost every time I see the term used incorrectly, the topic being covered is Microservices or Event Driven Architecture.

Here are the three different ways the term “Event” is used, in what context or pattern, and for what purpose.

Event Sourcing

Event Sourcing is a different approach to storing data. Instead of storing the current state, you store events. Events represent the state transitions of things that have occurred in your system. Events are facts.

To illustrate, take the same product with SKU ABC123 that has a current state quantity of 59; here's how we would store that data using event sourcing.

This means that your events are the point of truth and you can derive current state from them.
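A sketch of what that storage and derivation can look like (the event names and quantities are just for illustration):

```typescript
// Stored events for SKU ABC123 (illustrative). The current quantity is derived,
// not stored directly.
type InventoryEvent =
  | { type: "ProductReceived"; sku: string; quantity: number }
  | { type: "ProductShipped";  sku: string; quantity: number };

const events: InventoryEvent[] = [
  { type: "ProductReceived", sku: "ABC123", quantity: 100 },
  { type: "ProductShipped",  sku: "ABC123", quantity: 41 },
];

// Replay the events to derive the current state: 100 - 41 = 59.
const quantityOnHand = events.reduce(
  (qty, e) => (e.type === "ProductReceived" ? qty + e.quantity : qty - e.quantity),
  0
);
```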

The confusion when talking about Event Based Architecture comes in because I most often see people refer to Event Sourcing in a Microservices architecture as a means to communicate between services. Microservices and event sourcing are orthogonal. You do not need to be event sourcing in order to have a Microservice. Event sourcing has nothing to do with communication with other services. Again, event sourcing is about how you store data.

The reason there's confusion, I believe, is that when you're event sourcing, you might be tempted to expose those events to other services. I say tempted because the events you're storing as facts are not something you want to expose directly to other services. These events are internal and not used for integration.

Also, depending on the database or event store you’re using, it may also act as a message broker. This is often used within a service/boundary in order to create projections or read models from your event stream.

For more on projections, check out my post Projections in Event Sourcing: Build ANY model you want!

Event Carried State Transfer

The most common way I see events being used and explained is for state propagation. Meaning, you’re publishing events about state changes within a service, so other services (consumers) can keep a local cache copy of the data.

This is often referred to as Event Carried State Transfer.

The reason services want a local cache copy of another service's data is so they do not need to make RPC calls to other services to get data. The issue with making the RPC call is that if there are availability or latency problems, the call might fail. In order to stay available when other services are unavailable, they want the data they need locally.

For example, say Warehouse and Billing both require data from the Sales service. If the Sales service is unavailable, they may also be unavailable. Having the relevant data locally alleviates this by preventing them from having to make RPC calls to Sales.

What this looks like in practice is to use fat messages that generally contain all the data related to an entity.

Sales will publish a ProductChanged event that both the Warehouse and Billing will consume to update their local cache copies of a Product.

The contents of ProductChanged will generally look something like this:
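For illustration (these field names and values are made up, not a prescribed schema):

```typescript
// An Event Carried State Transfer style event: it carries the entire current
// state of the Product, not just the fields that changed.
interface ProductChanged {
  type: "ProductChanged";
  sku: string;
  name: string;
  description: string;
  price: number;
  quantityOnHand: number;
  // ...and every other property of the Product entity
}

const productChanged: ProductChanged = {
  type: "ProductChanged",
  sku: "ABC123",
  name: "Widget",
  description: "A standard widget",
  price: 9.99,
  quantityOnHand: 59,
};
```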

The event will represent the entire current state of the entity. Meaning these events can get pretty large depending on the size of the entity.

While this approach is often used, I'd argue that in most places, if you're doing this, you probably have some boundaries that are wrong. For more on defining boundaries, check out my post on Defining Service Boundaries by Splitting Entities.

Events as Notifications

The reason Event Carried State Transfer comes up so often when discussing Event Based Architecture is that events are also used as notifications to other services, but they are frequently used incorrectly.

Events used for notifications are generally pretty slim; they don't contain much data. If a consumer handling an event needs more information to react and perform some action, it might have to make an RPC call back to the producing service. This is what leads people to Event Carried State Transfer, so they do not have to make these RPC calls.

To illustrate, the Sales service publishes an OrderPlaced event.

The Billing Service is consuming this event so it can then create an Invoice.

But because the event doesn’t contain much information, the Billing service then needs to make an RPC call back to Sales to get more data.
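A sketch of that interaction (the event shape and the Sales endpoint are hypothetical): the notification is slim, so the consumer has to call back for the details.

```typescript
// A slim notification event: enough to know something happened, not enough to act on.
interface OrderPlaced {
  type: "OrderPlaced";
  orderId: string;
}

// Billing doesn't own the order details, so it calls back to Sales over HTTP,
// re-introducing the coupling and availability concerns of RPC.
async function onOrderPlaced(evt: OrderPlaced) {
  const response = await fetch(`https://sales.internal/orders/${evt.orderId}`); // hypothetical endpoint
  const order = await response.json();
  await createInvoice(order);
}

declare function createInvoice(order: unknown): Promise<void>;
```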

And this is how people land on using Event Carried State Transfer to avoid this call-back pattern.

Again, I sound like a broken record. But the likely reason this is occurring is because of incorrect boundaries. Services should own the data that relates to the capabilities they perform.

If you do have boundaries correct and each service has the relevant data for its capabilities, this means that events are used as a part of a workflow or long-running process. Events are used as notifications to tell other services that something has occurred.

To illustrate this again, Sales is publishing an OrderPlaced event that Billing is consuming.

Since Billing has all the data it needs, it creates an Invoice and publishes an OrderBilled event.

Next, the Warehouse service will consume the OrderBilled event so that it can create a ShippingLabel for the order to be shipped.

Once the shipping label has been created by the Warehouse, it publishes a LabelCreated event.

Finally, Sales consumes the LabelCreated event so that it can update its Order status to show that the Order has been billed and is ready to be shipped.

This is called Event Choreography and is driven by events used as notifications. For more info, check out my post Event Choreography & Orchestration (Sagas).
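A sketch of one step in that choreography (the publisher interface and event shapes are hypothetical): Billing reacts to OrderPlaced using data it already owns and publishes the next event in the flow.

```typescript
// Event choreography: each service reacts to an event, does its own work with
// its own data, and publishes the next event. There is no central coordinator.
interface Publisher {
  publish(topic: string, event: object): Promise<void>;
}

declare function createInvoice(orderId: string): Promise<{ invoiceId: string }>;

async function onOrderPlaced(evt: { orderId: string }, publisher: Publisher) {
  const invoice = await createInvoice(evt.orderId); // Billing already owns the data it needs
  await publisher.publish("billing-events", {
    type: "OrderBilled",
    orderId: evt.orderId,
    invoiceId: invoice.invoiceId,
  });
}
```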

Hopefully, this clears up some confusion about how the term event is used in different situations around event based architecture. I also hope it illustrates why Event Carried State Transfer exists and the problem it's trying to solve. However, that problem is likely caused by incorrect boundaries.

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source for any working demo application that I post on my blog or YouTube. Check out the membership for more info.
