CQRS & Event Sourcing Code Walk-Through

Want to see an example of how CQRS & Event Sourcing work together? Here’s a code walk-through that illustrates sending commands to your domain, which stores Events in an Event Store. The Events are then published to Consumers that update Projections (read models), which are in turn consumed by Queries. This is the stereotypical set of patterns when using CQRS and Event Sourcing together.

YouTube

Check out my YouTube channel where I post all kinds of content that accompanies my posts including this video showing everything that is in this post.

CQRS & Event Sourcing

Because CQRS and Event Sourcing are so often talked about or illustrated together, you’ll end up seeing a diagram like the one below.

This is the stereotypical diagram to illustrate both concepts together. Unfortunately, there often isn’t a distinction between which portion is CQRS and which portion is Event Sourcing. And as you’ll see later in this post, there’s another concept involved in this diagram as well.

CQRS

Command Query Responsibility Segregation (CQRS) is simply the separation of Writes (Commands) and Reads (Queries). In the diagram above, that’s illustrated by the horizontal paths on the top and bottom. I’ve talked about the simplicity of CQRS and the 3 Most Common Misconceptions.

Event Sourcing

Event Sourcing is about how you record state. It’s a different approach to persistence since most applications are built to only record the current state. Event sourcing is about using a series of events (facts) that represent all the state transitions that get you to the current state. If you want more of the basics, check out my post Event Sourcing Example & Explained in plain English.

Simplest Possible Thing

I’m going to use Greg Young’s Simplest Possible Thing sample, which illustrates CQRS, Event Sourcing, and Projections. This sample is rather old, so I’ve upgraded it to .NET 6 and Razor Pages.

The sample app simply shows an Inventory Item and has various commands that mutate state, queries that return current state, and Event Sourcing as the way state is persisted.

Commands & Events

Commands are handled by Command Handlers. In this example, they simply use a repository to get the InventoryItem, which is a domain object, and then invoke the proper method on the InventoryItem. The Repository.Save() will persist all the events (generated through ApplyChange() on the InventoryItem, which you will see below) to an in-memory event stream (collection).
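
As a rough sketch of the shape of a handler (the command, repository, and method names here are approximations of the sample, not exact):

public class InventoryCommandHandlers
{
    private readonly IRepository<InventoryItem> _repository;

    public InventoryCommandHandlers(IRepository<InventoryItem> repository)
        => _repository = repository;

    public void Handle(CheckInItemsToInventory command)
    {
        // Load the aggregate from its event stream, invoke the behavior,
        // then persist the newly generated events.
        var item = _repository.GetById(command.InventoryItemId);
        item.CheckIn(command.Count);
        _repository.Save(item);
    }
}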

The InventoryItem domain object contains all the behavior for performing any state transitions.

The AggregateRoot base class contains the ApplyChange method, which stores the event being applied. It also calls the appropriate Apply() method on the InventoryItem. You will notice there are Apply() methods for only some of the events. This is because our InventoryItem only cares about maintaining its internal state (projection) that is required to perform logic.
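
Here’s a condensed sketch of how the base class and InventoryItem fit together (the event types and Apply() overloads are simplified approximations of the sample):

public abstract record Event;
public record InventoryItemCreated(Guid Id, string Name) : Event;
public record InventoryItemDeactivated(Guid Id) : Event;

public abstract class AggregateRoot
{
    private readonly List<Event> _changes = new();

    public Guid Id { get; protected set; }

    public IEnumerable<Event> GetUncommittedChanges() => _changes;

    // Fallback for events the aggregate doesn't need to track internally.
    public virtual void Apply(Event @event) { }

    protected void ApplyChange(Event @event)
    {
        // Dispatch to the most specific Apply() overload, then record the event.
        ((dynamic)this).Apply((dynamic)@event);
        _changes.Add(@event);
    }
}

public class InventoryItem : AggregateRoot
{
    private bool _activated;

    public InventoryItem(Guid id, string name)
        => ApplyChange(new InventoryItemCreated(id, name));

    public void Deactivate()
    {
        if (!_activated) throw new InvalidOperationException("Already deactivated.");
        ApplyChange(new InventoryItemDeactivated(Id));
    }

    // Only the state needed to make decisions is kept; other events have no Apply().
    public void Apply(InventoryItemCreated e) { Id = e.Id; _activated = true; }
    public void Apply(InventoryItemDeactivated e) => _activated = false;
}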

What all this code is illustrating is the Command side of CQRS as well as Event Sourcing. Commands are explicit actions that we handle to perform some type of state change. For state changes, we’re using explicit events to capture what actually occurred from the result of a command.

Projections

Before I illustrate the Query side of CQRS, we’re first going to build a Projection that acts as a read model representing the current state derived from the Events.

When events are saved by the Repository, it then dispatches them to consumers. In this example, this is done in-memory; however, if you were using an actual Event Store, you’d likely use a subscription model so that your projections run asynchronously in isolation.

Consumers handle the events to update the read model (projection). In this example, there are two different consumers updating two different projections: one projection is for showing a list of Inventory Items, and the other is for showing an individual Inventory Item.

This uses a FakeDatabase, which is really just an in-memory collection and dictionary.
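
As a sketch, the consumer for the details projection might look like this (the event, DTO, and FakeDatabase member names are illustrative rather than the sample’s exact names):

public record ItemsCheckedInToInventory(Guid Id, int Count) : Event;
public record InventoryItemDetailsDto(Guid Id, string Name, int CurrentCount);

// In-memory stand-in for a real read store: a dictionary keyed by item id
// (the list projection would use a similar collection).
public class FakeDatabase
{
    public Dictionary<Guid, InventoryItemDetailsDto> Details { get; } = new();
}

public class InventoryItemDetailView
{
    private readonly FakeDatabase _database;

    public InventoryItemDetailView(FakeDatabase database) => _database = database;

    public void Handle(InventoryItemCreated e)
    {
        // A new read model row for the newly created item.
        _database.Details[e.Id] = new InventoryItemDetailsDto(e.Id, e.Name, 0);
    }

    public void Handle(ItemsCheckedInToInventory e)
    {
        // Update the current count on the existing read model row.
        var dto = _database.Details[e.Id];
        _database.Details[e.Id] = dto with { CurrentCount = dto.CurrentCount + e.Count };
    }
}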

Query

Now that we have projections (read models) we can look at how the Query side of CQRS uses the projections.

The Razor page uses the ReadModelFacade (and underlying FakeDatabase) that holds the projection for our InventoryItem details.
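
Roughly, the page model just asks the read side for the projection by id (the facade and method names here are approximations):

using Microsoft.AspNetCore.Mvc.RazorPages;

public class DetailsModel : PageModel
{
    private readonly IReadModelFacade _readModel;

    public DetailsModel(IReadModelFacade readModel) => _readModel = readModel;

    public InventoryItemDetailsDto? Item { get; private set; }

    public void OnGet(Guid id)
    {
        // Pure query: no domain logic, just fetch the projection the consumers built.
        Item = _readModel.GetInventoryItemDetails(id);
    }
}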

CQRS & Event Sourcing

Hopefully, this illustrated the difference between CQRS and Event Sourcing, and how they are used together along with Projections. While this diagram is often used to describe CQRS, realize that there are multiple concepts at play, not just CQRS.

Source Code

Developer-level members of my YouTube channel or Patreon get access to the full source for any working demo application that I post on my blog or YouTube. Check out the YouTube Membership or Patreon for more info.

Optimistic Concurrency in an HTTP API with ETags & Hypermedia

How do you implement optimistic concurrency in an HTTP API? There are a couple of different ways, regardless of what datastore you’re using in the backend. You can leverage the ETag header in the HTTP Response to return a “version” of the resource that was accessed. When a client then needs to perform some operation on the resource, it sends an If-Match header as part of the request, with the value being the ETag from the initial GET request. Another option is to leverage hypermedia by returning URIs for actions relevant to a resource that include the version as part of the URI. This makes concurrency completely transparent and requires no special knowledge from the client.

YouTube

Check out my YouTube channel where I post all kinds of content that accompanies my posts including this video showing everything that is in this post.

Optimistic Concurrency

In a concurrent environment like a web application or HTTP API, you have multiple concurrent requests that could be trying to make state changes to the same resource. The normal flow for optimistic concurrency is that clients specify the latest version they are aware of when attempting to make a state change. As an example, two clients each make an HTTP request of GET /products/abc123

The HTTP API returns the data along with a “version” property. The version indicates the current version of that resource. Now when a client makes a subsequent call to perform any type of action that’s going to result in a state change, it also includes that version. As an example, the client performs an inventory adjustment by making an HTTP call to POST /products/abc123/quantity, including the version it received from the prior GET request.
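
For example, the adjustment request might look something like this (the shape of the body is illustrative):

POST /products/abc123/quantity HTTP/1.1
Content-Type: application/json

{ "quantity": 10, "version": 15 }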

Now the second client, which also did a GET request and received version 15 of the resource, makes a similar HTTP call to do an inventory adjustment. It also includes the version it has, which is 15. However, since the first client has already made a successful state change, the current version is now 16.

This request by the second client will fail because we’ve implemented optimistic concurrency. There are various ways to implement this beyond just using a version number, such as a DateTime or timestamp that represents the last change or most recent version. With a relational database, this can be implemented by including the version in the WHERE clause and then checking the number of affected rows. If no rows were affected, then the version passed isn’t the current value.
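
Here’s a sketch of that check with ADO.NET against SQL Server (the table and column names are made up for illustration):

using Microsoft.Data.SqlClient;

public static class ProductStore
{
    public static bool TryAdjustQuantity(
        SqlConnection connection, string productId, int newQuantity, int expectedVersion)
    {
        // The expected version is part of the WHERE clause; if another request already
        // incremented it, no rows match and the update is rejected.
        using var command = new SqlCommand(
            @"UPDATE Products
              SET Quantity = @quantity, Version = Version + 1
              WHERE ProductId = @productId AND Version = @expectedVersion",
            connection);

        command.Parameters.AddWithValue("@quantity", newQuantity);
        command.Parameters.AddWithValue("@productId", productId);
        command.Parameters.AddWithValue("@expectedVersion", expectedVersion);

        // Zero affected rows means the caller's version is stale: a concurrency conflict.
        return command.ExecuteNonQuery() == 1;
    }
}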

ETags & If-Match

Another way of passing the version around from server to client is by leveraging the ETag and If-Match headers in the HTTP Response and Request.

A good example of this implementation is with Azure CosmosDB. When you request a document, it will return an _etag property but also include the ETag header in the response. This represents the version of the resource.

Here is what the response body looks like:
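
Something along these lines, with placeholder values and trimmed to the interesting parts; note the quoted _etag system property:

{
  "id": "abc123",
  "name": "Widget",
  "quantityOnHand": 10,
  "_etag": "\"00000000-0000-0000-0000-000000000000\"",
  "_ts": 1650000000
}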

Here are the headers from the response.
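
Again with placeholder values, the same ETag shows up as a response header:

HTTP/1.1 200 OK
Content-Type: application/json
ETag: "00000000-0000-0000-0000-000000000000"
x-ms-request-charge: 1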

Since we now have the ETag value, we can use it when making a subsequent request to perform a state change. To do so, the request must pass the If-Match header with the ETag value.

The Cosmos SDK supports this directly within its API.
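
For example, with the .NET SDK (Microsoft.Azure.Cosmos), the ETag can be passed as IfMatchEtag on the request options, and a stale value surfaces as a 412 Precondition Failed (the Product type here is illustrative):

using System.Net;
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;

public class Product
{
    [JsonProperty("id")]
    public string Id { get; set; } = default!;
    public string Name { get; set; } = default!;
    public int QuantityOnHand { get; set; }
}

public class ProductsContainer
{
    public async Task<bool> TryUpdateAsync(Container container, Product product, string etag)
    {
        try
        {
            // The SDK sends the ETag as the If-Match header on the replace request.
            await container.ReplaceItemAsync(
                product,
                product.Id,
                new PartitionKey(product.Id),
                new ItemRequestOptions { IfMatchEtag = etag });
            return true;
        }
        catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            // Someone else changed the document since we read it.
            return false;
        }
    }
}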

Hypermedia

Another way to pass around the version is simply by using Hypermedia. Hypermedia is about providing the client with information about what other actions or resources are available based on the resource it’s accessing. This means when a client requests the product resource GET /products/abc123, the server will provide it the URI where it can do an Inventory Adjustment. Since we’re providing the URI, we can include the current version in the URI.

In this example, I’m using EventStoreDB, which supports optimistic concurrency by passing the expected version when appending an event to the event stream. If the version passed is not the current version of the stream, a WrongExpectedVersionException is thrown.
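
Here’s a sketch of that kind of append, assuming the EventStore.Client package (the stream name and parameters are illustrative):

using EventStore.Client;

public class ProductStream
{
    private readonly EventStoreClient _client;

    public ProductStream(EventStoreClient client) => _client = client;

    public async Task<bool> TryAppendAsync(string streamName, ulong expectedVersion, EventData @event)
    {
        try
        {
            // The append only succeeds if the stream is still at the version the client read.
            await _client.AppendToStreamAsync(
                streamName,
                new StreamRevision(expectedVersion),
                new[] { @event });
            return true;
        }
        catch (WrongExpectedVersionException)
        {
            // Another request appended first; the caller's version is stale.
            return false;
        }
    }
}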

Here’s an example of what the response body looks like when calling GET /products/abc123
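
It might look something like this (the exact shape and URIs are illustrative), with a commands array exposing the inventory-adjustment URI that already embeds the current version:

{
  "id": "abc123",
  "name": "Widget",
  "quantityOnHand": 10,
  "commands": [
    {
      "rel": "adjust-quantity",
      "href": "/products/abc123/quantity?version=15"
    }
  ]
}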

When we want to do an Inventory Adjustment, we don’t construct a URI ourselves; we simply take the response from the GET and find the URI in the commands array.

Optimistic Concurrency

There are many different ways to handle optimistic concurrency, and hopefully this illustrated a couple of options: using the ETag/If-Match headers as well as leveraging hypermedia.

Source Code

Developer-level members of my YouTube channel or Patreon get access to the full source for any working demo application that I post on my blog or YouTube. Check out the YouTube Membership or Patreon for more info.

Long live the Monolith! Monolithic Architecture != Big Ball of Mud

Developing a Monolith or using a Monolithic Architecture doesn’t mean it needs to be a big ball of mud. Most people equate a Monolith with a Big Ball of Mud because it’s highly coupled and difficult to change. However, you can combat this by defining strict boundaries, logically decoupling those boundaries, and having each boundary own its data. To go even further, you can loosen coupling by leveraging asynchronous messaging between boundaries. Does this sound familiar? Like microservices, where each service has its own defined capabilities and database?

YouTube

Check out my YouTube channel where I post all kinds of content that accompanies my posts including this video showing everything that is in this post.

Coupling

When people think of a Monolith or Monolithic Architecture, they picture a tangled mess of spaghetti code that’s really difficult to change. The real issue is that the majority of the system is tightly coupled.

Below is a dependency graph that illustrates how each component/class/module is connected to others.

Systems with many modules that have a high degree of afferent and efferent coupling are bound to be brittle. Changes to one module can have cascading effects on all dependent modules. Coupling is a necessary evil, but it can be managed.

Coupling exists equally within a microservices architecture. If you’ve developed a microservices architecture that relies heavily on RPC to communicate between services, the coupling is no different from a monolith that communicates in-process. Adding synchronous calls over the network between services doesn’t magically make coupling go away. You’ve simply developed a distributed monolith, not microservices. If anything, communicating over the network via RPC makes things worse. Why? Check out my post REST APIs for Microservices? Beware!

Boundaries

Defining service boundaries is one of the most important aspects of designing a system, yet getting them “right” is incredibly difficult. There are a lot of tradeoffs that determine where those boundaries should lie.

Services should own a set of business capabilities and data. You may have data that other service boundaries need for query purposes, but a single service should own that data because of the capabilities it provides.

If you think about an existing monolithic architecture or a distributed monolith (bad microservices), let’s represent it by this large turd pile (poop emoji).

What you want to develop are smaller turd piles: decompose the large coupled system into smaller units, each of which is a service boundary.

Another way to visualize this is to think of a piece of cake as a large, highly coupled system. The cake may be using a layered architecture but still has a high degree of coupling. To decompose it, you want to cut out a piece of the cake.

For more info on defining service boundaries, check out my post Context is King: Finding Service Boundaries.

Loosely Coupled Monolith

In a monolith with well-defined service boundaries, each boundary must first own its own data. If you’re developing a monolith, this means you may still have a single database instance, but the schema (tables, collections, etc.) is only accessed by the boundary that owns it. No other boundaries are able to access that data directly from storage.

Obviously, boundaries will need to communicate so they can interact with each other. If you want to minimize coupling between boundaries, a solution is to use asynchronous messaging. This means introducing a message broker that your monolith both sends messages to and receives messages from.

Messages are contracts. They are simple DTOs that represent Commands and Events. This is why in the above diagram you see that the implementation for any service boundary is only coupled to the contracts of other service boundaries.
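
As a sketch, a contract is just a plain record shared between boundaries, while the handler lives inside the boundary that owns the data (the boundary and type names here are illustrative):

// Contracts project for a Shipping boundary: the only types other boundaries reference.
public record PlaceShipment(Guid OrderId, string Address);      // Command
public record ShipmentPlaced(Guid OrderId, DateTime PlacedAt);  // Event

// Implementation inside the Shipping boundary; never referenced by other boundaries.
public class PlaceShipmentHandler
{
    public Task Handle(PlaceShipment command)
    {
        // Make the state change against data this boundary owns,
        // then publish a ShipmentPlaced event to the message broker.
        return Task.CompletedTask;
    }
}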

As an example, if you’re creating an HTTP API, it would host all the different service boundaries; each one defines its own routes and dependencies. It can send commands and publish events to the message broker. Simply for scaling purposes, I’ve illustrated another top-level process as the Message Processor. It uses the exact same code as the HTTP API. It connects to the message broker to consume messages and then dispatches them in-process to the appropriate boundary to be handled.

As with any monolith, if you need to communicate synchronously between boundaries, you can do so behind interfaces and functions which live within the contracts projects.

Deployment

The difference between a loosely coupled monolith and microservices at this point becomes a deployment concern. Each has well-defined boundaries and communicates primarily via asynchronous messaging. If you needed to separate a boundary within your monolith because you want to scale it differently or want it to be independently deployable, then you can go down that road, but with added complexity. Check out my post on scaling a monolith horizontally before you go down that road, however.

Monolithic Architecture

A monolith does not need to be a big ball of mud. It does not need to be a highly coupled pile of spaghetti that is difficult to change. Define service boundaries and limit direct synchronous communication as much as possible while leveraging asynchronous messaging.

Source Code

Developer-level members of my YouTube channel or Patreon get access to the full source for any working demo application that I post on my blog or YouTube. Check out the YouTube Membership or Patreon for more info.
