Organizing (Commands, Events & Handlers) in Microservices

In a message-driven architecture such as SOA or Microservices, organizing commands and events, along with their respective handlers, can have a big impact on how you navigate your codebase. You might be surprised by the progression from how I organized them initially to what I do now! I'll also share a tip on how to name commands and events rather than naming them after CRUD operations.

YouTube

Check out my YouTube channel where I post all kinds of content that accompanies my posts including this video showing everything that is in this post.

Building Blocks

When moving into a message-driven system, you’re likely going to fall down the path of creating very explicit actions for your system rather than CRUD. When you do this, you’ll find yourself following CQRS and likely also moving to an event-driven architecture (where it makes sense).

The building blocks are messages and message handlers. Messages can be in the form of Commands, Queries, and Events. Commands and Queries will have a single handler that consumes the message. Events can have zero or many handlers that consume them.
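
To make those building blocks concrete, here's a minimal sketch of what the abstractions could look like. The interface names below are purely illustrative; libraries such as MediatR, MassTransit, or NServiceBus each have their own equivalents.

```csharp
using System.Threading;
using System.Threading.Tasks;

// Marker interfaces for the message types (illustrative names, not from any specific library).
public interface ICommand { }           // exactly one handler consumes a command
public interface IQuery<TResult> { }    // exactly one handler consumes a query and returns a result
public interface IEvent { }             // zero or many handlers consume an event

// A handler consumes a single message type.
public interface IHandle<in TMessage>
{
    Task Handle(TMessage message, CancellationToken cancellationToken = default);
}
```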

Organizing commands and events (and their handlers) can be a bit of a mess when you first start. I often get asked on my channel how I organize these building blocks of messages and handlers. Here's how my approach has progressed over time.

Organize by Feature Folder

Put all relevant messages and handlers in the same folder. This is likely the most common and intuitive approach when you first move into a message-driven system: each relevant type gets its own file, and they all live together in a single folder.

I often put them nested inside a folder called Features. This is because I view a single message and handler, or a collection of them, as an individual feature. For example, setting an order as ReadyToShip is a command, but the same thing also happens when the ShippingCreatedEvent occurs.
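
As a sketch of that layout, a Features/ReadyToShip folder might hold a command and a handler that reacts to both the explicit command and the ShippingCreatedEvent. All type names here are hypothetical, reusing the marker interfaces sketched above; ShippingCreatedEvent is assumed to come from the Shipping boundary's contracts.

```csharp
// Features/ReadyToShip/ReadyToShipCommand.cs
public record ReadyToShipCommand(Guid OrderId) : ICommand;

// Features/ReadyToShip/ReadyToShipHandler.cs
public class ReadyToShipHandler :
    IHandle<ReadyToShipCommand>,     // explicit user action
    IHandle<ShippingCreatedEvent>    // same outcome, triggered by an event from Shipping
{
    public Task Handle(ReadyToShipCommand message, CancellationToken ct = default)
        => MarkReadyToShip(message.OrderId);

    public Task Handle(ShippingCreatedEvent message, CancellationToken ct = default)
        => MarkReadyToShip(message.OrderId);

    private Task MarkReadyToShip(Guid orderId)
    {
        // Load the order, set its status to ReadyToShip, persist it...
        return Task.CompletedTask;
    }
}
```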

Single File with Suffix

The next step in my journey of organization was putting all relevant types into a single file, especially for commands and queries, where there is only a single handler. To me, this means not having to jump between files, since the command/query and its handler are directly related to each other. Why bother with two files?

The following example contains a Controller, Command, Handler, and Saga. Everything related to the feature of PlaceOrder is located in this file.
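
The full example isn't reproduced here, but a trimmed-down sketch of that single-file layout (with the Saga omitted for brevity, and IMessageBus standing in as a hypothetical bus abstraction) might look like this:

```csharp
// Features/PlaceOrder.cs — everything for the PlaceOrder feature in one file.
[ApiController]
public class PlaceOrderController : ControllerBase
{
    private readonly IMessageBus _bus;   // hypothetical bus abstraction
    public PlaceOrderController(IMessageBus bus) => _bus = bus;

    [HttpPost("orders/{orderId:guid}/place")]
    public async Task<IActionResult> Place(Guid orderId)
    {
        await _bus.Send(new PlaceOrderCommand(orderId));
        return Accepted();
    }
}

public record PlaceOrderCommand(Guid OrderId) : ICommand;

public class PlaceOrderHandler : IHandle<PlaceOrderCommand>
{
    private readonly IMessageBus _bus;
    public PlaceOrderHandler(IMessageBus bus) => _bus = bus;

    public async Task Handle(PlaceOrderCommand message, CancellationToken ct = default)
    {
        // Persist the order, then publish OrderPlaced (defined in the shared Contracts project).
        await _bus.Publish(new OrderPlaced(message.OrderId, DateTime.UtcNow));
    }
}
```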

Events

In the above example, there is an OrderPlaced event that is published. However, it is not defined in this file. This is because this event is consumed and shared with other boundaries/services. For that reason, I put shared messages into a Contracts project which is shared with other services.
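
A sketch of what that shared contract could look like: a plain, serializable message type living in a Contracts project that both the publisher and any consuming services reference. The exact properties are illustrative.

```csharp
// Contracts/OrderPlaced.cs — referenced by Sales (the publisher) and by any consuming boundaries.
namespace Contracts;

public record OrderPlaced(Guid OrderId, DateTime PlacedAtUtc) : IEvent;
```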

Static Wrapper Class

My next progression in organizing came from not liking the redundant names for the Command/Handler/Event types. In the above examples for the PlaceOrder feature, there was a PlaceOrderCommand and a PlaceOrderHandler. I found the naming to be a bit redundant and tiresome to type.

To combat this, I simply place the types inside a static class named after the feature.

In the following example, the CancelOrder feature is a static class of that name, with the Command and Handler nested inside. Creating a new instance of the command then reads as new CancelOrder.Command().
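
A minimal sketch of that shape, again reusing the illustrative marker interfaces from earlier:

```csharp
public static class CancelOrder
{
    public record Command(Guid OrderId) : ICommand;

    public class Handler : IHandle<Command>
    {
        public Task Handle(Command message, CancellationToken ct = default)
        {
            // Load the order, mark it as cancelled, publish OrderCancelled to other boundaries...
            return Task.CompletedTask;
        }
    }
}

// Usage: the feature name carries the context, so the nested type names stay short.
// await bus.Send(new CancelOrder.Command(orderId));
```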

Suffix & Consistency

I tend not to suffix events with "Event". That's why you see OrderPlaced and OrderCancelled rather than OrderPlacedEvent or OrderCancelledEvent. This is a matter of preference; since events are always named in the past tense, the suffix feels redundant.

At the end of the day, it’s your preference. My recommendation is ultimately to be consistent in your naming. Regardless of how you organize or name, just be consistent as it will make navigating your codebase that much easier.

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source for the working demo application available in a git repo. Check out the membership for more info.


REST APIs for Microservices? Beware!

Are you using REST APIs for a Microservices architecture? If you're using REST, HTTP APIs, gRPC, or any other request/response model as the primary way to communicate between microservices, you're going to have to deal with potentially hard-to-debug latency issues and address availability concerns.

YouTube

Check out my YouTube channel where I post all kinds of content that accompanies my posts including this video showing everything that is in this post.

In-Process Single Transaction

First, let’s take a step back and talk about a monolith. When in a monolith, you’d typically have a single process communicating with your database. If you’re using an ACID-compliant database, then you’d be using transactions to make all database statements atomic.
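
As a rough sketch of that, assuming a relational database accessed via Microsoft.Data.SqlClient (the CreateOrder/CreateInvoice/AllocateProducts helpers and the order and connectionString variables are placeholders), the whole business process commits or rolls back as one unit:

```csharp
using Microsoft.Data.SqlClient;

// One process, one connection, one atomic transaction.
await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();
using var transaction = connection.BeginTransaction();
try
{
    await CreateOrder(connection, transaction, order);       // Sales
    await CreateInvoice(connection, transaction, order);     // Billing
    await AllocateProducts(connection, transaction, order);  // Warehouse

    transaction.Commit();    // everything succeeds together...
}
catch
{
    transaction.Rollback();  // ...or nothing is persisted at all
    throw;
}
```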

[Diagram: a monolith with Sales, Catalog, Billing, and Warehouse boundaries sharing a single database transaction]

In the example above, our monolith has some clear boundaries of Sales, Catalog, Billing, and Warehouse. If an order is placed, which starts in Sales, three of these boundaries might be involved in that business process. Billing will have to create an invoice and the Warehouse will have to allocate the products to ship. That entire business process would be encapsulated in one transaction. If anything goes wrong during that process, the transaction will roll back. We don't have to worry about data consistency because we have the safety net of the atomic transaction.

Distributed Transaction

When you move out of a single process and into a distributed system, you’re going to lose that single database connection and transaction.

If you’re developing a microservices architecture with REST APIs, then what happens to your single transaction?

In the same scenario as above, when placing an order, there are multiple services that are required as a part of this long-running process.

If all the interactions from Sales to Billing and Warehouse are done over a REST HTTP API (or any synchronous request/response), then all the dependent services must be available in order to place an order.

Say an order was placed: Sales called Billing to create an invoice, and it succeeded. Then it calls the Warehouse to allocate products for shipping. If that request to the Warehouse fails, for whatever reason, then we have to manually perform some "undo" or compensating action to try and revert our changes along the way. This means you must make a call back to Billing to cancel the invoice.

But what happens if the request to cancel the invoice in Billing fails? Now you've created an order that has an invoice but no allocated products.
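
To make that failure mode concrete, here's a sketch of the synchronous request/response version of the workflow. The service URLs and endpoints are invented for illustration, and httpClient and order are assumed to exist.

```csharp
using System.Net.Http.Json;   // for PostAsJsonAsync

// Sales calling Billing and then the Warehouse over HTTP (hypothetical endpoints).
var invoiceResponse = await httpClient.PostAsJsonAsync("https://billing/invoices", order);
invoiceResponse.EnsureSuccessStatusCode();

var allocationResponse = await httpClient.PostAsJsonAsync("https://warehouse/allocations", order);
if (!allocationResponse.IsSuccessStatusCode)
{
    // Compensating action: try to undo the invoice we already created.
    var cancelResponse = await httpClient.DeleteAsync($"https://billing/invoices/{order.Id}");
    if (!cancelResponse.IsSuccessStatusCode)
    {
        // Now what? An order exists with an invoice but no allocated products.
    }
}
```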


Since there is no single transaction, you'd need a distributed transaction (or a two-phase commit). One way to solve this is with a transaction coordinator. But that means all services must be using compatible database technologies, and the likelihood of that is low if you haven't planned for it from the start.

The solution to this is through Event orchestration or choreography. But hang on, before that, there are more issues to discuss.

Latency

When making a network hop, you're going to add latency. The issue is that if you're inside one service making an HTTP call to another service, that's where your visibility ends. You're unaware of the internals of the service you're calling. What happens when the service you're calling has to call another service? And so on!?

In the example above, there is a network call chain from service to service. If each service takes (on a happy path) 200ms to respond to the other service, in my example above, it would take a total of 800ms roundtrip to the client.


The above is a best-case scenario. But because there's so much coupling that can't directly be seen, latency can skyrocket if availability issues arise anywhere in the call chain.

What happens if the Catalog is having performance issues and isn't responding in 200ms, but in 700ms?

From our client, the request is now going to take 1.3s, all because of downstream issues that are going to be very difficult to identify. From the perspective of Sales, it's getting a slow response from the Warehouse. But the Warehouse is getting a slow response from Billing. Whose fault is it? Clearly, in my diagram, it's the Catalog service, but without clear metrics, this is going to be very difficult to address. Beyond that, every network hop added in any downstream service is going to add latency.

Distributed Ball of Mud

It gets worse. The reality is that if a service must make HTTP calls to other services, either to get data or to perform an action, then all the dependent services must be available for it to work correctly.

For example, let's say that Sales, Warehouse, and Billing all require the Catalog service. If the Catalog service is unavailable or is having performance issues, then it isn't just the Catalog's problem; all the services that depend on it are going to fail.

This is a distributed monolith. Services require each other to be available and cannot function without other services.

Boundaries & Asynchronous Messaging

One solution to this distributed monolith is using asynchronous messaging between well-defined boundaries.

Services should be autonomous. They shouldn’t depend directly on other services through synchronous interactions. If communication is done asynchronously through messaging, this means a service does not require other services to be available.

In order to make services autonomous, you need to have boundaries defined correctly. This means focusing on the behavior of the services and then the data required for those behaviors.

Once you have well-defined boundaries around capabilities, then you can use asynchronous messaging to define the workflow. Check out my post on Event Choreography and Orchestration to develop long-running business processes between services.
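
As a sketch of the asynchronous alternative, reusing the illustrative message, handler, and bus abstractions from the first post above: Sales publishes an event and carries on, and Billing consumes it whenever it's available, so Sales never waits on Billing.

```csharp
// Sales: record the order locally and publish the fact; no direct calls to other services.
public class PlaceOrderHandler : IHandle<PlaceOrderCommand>
{
    private readonly IMessageBus _bus;   // hypothetical bus abstraction
    public PlaceOrderHandler(IMessageBus bus) => _bus = bus;

    public async Task Handle(PlaceOrderCommand message, CancellationToken ct = default)
    {
        // ...persist the order within Sales' own boundary and transaction...
        await _bus.Publish(new OrderPlaced(message.OrderId, DateTime.UtcNow));
    }
}

// Billing: reacts to the event when it's up; if it's down, the message waits on the broker.
public class CreateInvoiceWhenOrderPlaced : IHandle<OrderPlaced>
{
    public Task Handle(OrderPlaced message, CancellationToken ct = default)
    {
        // Create the invoice within Billing's own boundary and transaction.
        return Task.CompletedTask;
    }
}
```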

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source for the working demo application available in a git repo. Check out the membership for more info.


Decomposing CRUD to a Task Based UI

Moving from a CRUD (Create, Read, Update, Delete) based UI to a Task Based UI means creating a user interface that makes users' tasks explicit. Tasks (or actions, commands) are a way to guide a user into the specific actions they can take for a given state or workflow. When you break things down by actions, you can start seeing where the boundaries might lie, which can help split entities into multiple boundaries.

YouTube

Check out my YouTube channel where I post all kinds of content that accompanies my posts including this video showing everything that is in this post.

CRUD

For this example, I'm using a Product entity that contains various properties used in our application.
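
Something along these lines (a sketch; the demo's actual entity may differ slightly):

```csharp
public class Product
{
    public Guid Id { get; set; }
    public string Sku { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
    public bool ForSale { get; set; }
    public bool FreeShipping { get; set; }
    public decimal Cost { get; set; }           // cost from the vendor/manufacturer
    public int QuantityOnHand { get; set; }     // quantity in the warehouse
}
```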

All the properties should be self-explanatory. If we were creating a CRUD-based UI, we would simply have a page/form that allows the user to set all the various properties.

[Screenshot: a CRUD form for editing all of the Product entity's properties]

When you click the save button, all the data is sent to the server and the entire entity is updated.
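
As a sketch of that kind of endpoint (the repository abstraction and route are hypothetical), the server takes the whole entity and overwrites it, with no intent captured:

```csharp
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly IProductRepository _repository;   // hypothetical repository abstraction
    public ProductsController(IProductRepository repository) => _repository = repository;

    // A typical CRUD-style endpoint: the entire entity comes in and everything is overwritten.
    [HttpPut("products/{id:guid}")]
    public async Task<IActionResult> Update(Guid id, Product product)
    {
        product.Id = id;
        await _repository.Save(product);   // was this a price decrease? a stock correction? we can't tell
        return NoContent();
    }
}
```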

So what’s the problem with CRUD?

In some situations, there may not be anything wrong with creating CRUD-based UIs. However, when you find yourself inferring the intent of users from what data they are changing, you're better off moving to a Task Based UI.

In the CRUD-based UI from above, if a user were to change the price from $85 to $80, then click save, did they just change the price? Was that their intent? Or was their intent to do a price decrease? What other parts of your system need to be aware of that occurring?

Could you notify customers that had that product in their “Wishlist” that the price is now lower?

CRUD Based UIs do not capture the intent of the user. It’s not explicit. You need to derive what the user intent was. In a Task-Based UI, you’re making the users’ actions explicit. Make the implicit explicit.

Boundaries

The first step to building a task-based UI is understanding which properties on your entity change together or are related.

[Screenshot: the Product form with related fields grouped into colored boxes]

The red (top) box has the Name and Description, which could be changed together or independently. These two likely don't have much behavior/tasks behind them, so they could just be CRUD.

The purple (lower left) box with Price, For Sale, and Free Shipping are likely for sales purposes.

The green (lower middle) box has the Cost, which is the cost from the vendor or manufacturer. It does not relate at all to the sale price.

The yellow (lower right) box has the Quantity which is the Quantity on Hand in the warehouse. It does not relate to the cost or the sale price.


If you take the CRUD-based UI and separate each box into its own boundary, you can then start asking: what is the intent of users when they change these fields? Why are they changing them? It's likely that different users/roles have an interest in different pieces of data.


The SKU, Name, and Description might be owned by the Catalog service/boundary. This may be totally CRUD in nature. However, the Price might be owned by Sales, Purchasing would own the Cost, and the Quantity on Hand would be owned by the Warehouse.

From a user/role perspective, what tasks are they trying to perform when they want to change data within that boundary?

We can now see that in Sales you can IncreasePrice or DecreasePrice. If we wanted to increase the price, we would provide the ability for that specific task/command.
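
A sketch of those explicit tasks as commands (names and shapes are illustrative, reusing the marker interfaces from earlier):

```csharp
// Explicit tasks owned by Sales, instead of a generic "update product".
public record IncreasePrice(string Sku, decimal Amount) : ICommand;
public record DecreasePrice(string Sku, decimal Amount) : ICommand;

public class DecreasePriceHandler : IHandle<DecreasePrice>
{
    public Task Handle(DecreasePrice message, CancellationToken ct = default)
    {
        // Lower the price, then publish something like PriceDecreased so other boundaries
        // (e.g. wishlist notifications) can react to the intent, not just the data change.
        return Task.CompletedTask;
    }
}
```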


If the warehouse did a stock count of a product and determined there weren’t as many on the shelves as what the system says, they would be doing an Inventory Adjustment. They aren’t just updating the QuantityOnHand value, they are explicitly changing the quantity to add or remove from the QuantityOnHand for a specific reason.
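
Sketched as a command, the task carries the adjustment and the reason rather than a new absolute value (again, the naming is illustrative):

```csharp
// The Warehouse's task: adjust the quantity for a reason, not overwrite QuantityOnHand.
public record AdjustInventory(string Sku, int Adjustment, string Reason) : ICommand;

// Usage: a stock count found 3 fewer units on the shelf than the system says.
// await bus.Send(new AdjustInventory(sku, -3, "Stock count discrepancy"));
```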

Boundaries

Not only have we defined explicitly what users' tasks/actions are, but we've also defined boundaries. The concept of a product can live within multiple boundaries/services. You do not need a single entity that lives in one place and contains all the data for the entire system.

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source in this post available in a git repo. Check out the membership for more info.
