ALWAYS Valid Domain Model

Always having your Domain Model in a valid state means it will be predictable. You’ll write less defensive or conditional code because your domain objects will always be in a valid state. Using Aggregates is a great way to encapsulate state with behavior to keep that state valid, and using Factories to create your Aggregates is key to having a valid state from the very beginning. Here’s how you can create an always valid domain model.

Aggregate

First, let’s start by defining our Aggregate, as it’s ultimately what will keep the domain model valid.

The example I’m going to use in this post is a Shipment. You can think of a Shipment in the sense of a food delivery service, where you’re ordering food from a restaurant and it gets delivered to your home.

The aggregate in this scenario consists of a Shipment and two Stops. There is a Pickup Stop, which is the restaurant where the Shipment starts, and a Delivery Stop, which is your home where the Shipment ends. The Aggregate Root is the Shipment.

One important aspect of this is that each Stop needs to follow a progression. The initial state of a Stop is “In Transit”; it then goes to “Arrived” once the delivery driver arrives at the location of the stop (either the restaurant or your home), and finally to “Departed” when the delivery driver leaves the stop.

Another rule is that this progression must be completed for the Pickup Stop, in its entirety, before the Delivery Stop can start its progression. This makes sense: you need to arrive at the restaurant to pick up the food, leave the restaurant, arrive at the house for the delivery, then leave the house.

Invariants

Based on the simplistic example above, our invariants are:

  • A Shipment must have at least two stops
  • The first stop must be a Pickup
  • The last stop must be a Delivery
  • Stops must progress in order

The first three invariants must be established when creating the Aggregate. The final invariant is controlled within the Aggregate itself. In order to always be in a valid state, we must only allow the creation of an Aggregate that satisfies the invariants defined above. And once we have our Aggregate, we must only allow valid state transitions.

Enforcing these invariants is what keeps the domain model valid.

Factory

In order to create our Aggregate in a valid state right from the get-go, we can use a Factory. In the example below, I have a private constructor but expose two different static factory methods that force us into a valid state. These factories enforce the first three invariants.
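Here’s a minimal sketch of what that can look like (the type and member names are my own for illustration; the Stop types are shown in the next sketch):

```csharp
using System;
using System.Collections.Generic;

public class Shipment
{
    private readonly List<Stop> _stops;

    // Private constructor: a Shipment can only be created through a factory.
    private Shipment(List<Stop> stops) => _stops = stops;

    // Factory for the simple case: the parameter types guarantee the invariants.
    public static Shipment Create(PickupStop pickup, DeliveryStop delivery) =>
        new Shipment(new List<Stop> { pickup, delivery });

    // Factory for multi-stop shipments: validates the invariants explicitly.
    public static Shipment Create(List<Stop> stops)
    {
        if (stops is null || stops.Count < 2)
            throw new InvalidOperationException("A Shipment must have at least two stops.");
        if (stops[0] is not PickupStop)
            throw new InvalidOperationException("The first stop must be a Pickup.");
        if (stops[^1] is not DeliveryStop)
            throw new InvalidOperationException("The last stop must be a Delivery.");

        return new Shipment(stops);
    }
}
```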

The Arrive() method enforces the last invariant: stops must progress in the correct order. Finally, the Stops themselves (Pickup, Delivery) enforce that they transition through their states in the correct order.
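A sketch of the Stops and the Arrive() behavior, again with illustrative names:

```csharp
using System;
using System.Linq;

public enum StopStatus { InTransit, Arrived, Departed }

public abstract class Stop
{
    public Guid StopId { get; } = Guid.NewGuid();
    public StopStatus Status { get; private set; } = StopStatus.InTransit;

    // Valid transition: In Transit -> Arrived.
    internal void Arrive()
    {
        if (Status != StopStatus.InTransit)
            throw new InvalidOperationException("Stop has already arrived.");
        Status = StopStatus.Arrived;
    }

    // Valid transition: Arrived -> Departed.
    internal void Depart()
    {
        if (Status != StopStatus.Arrived)
            throw new InvalidOperationException("Stop must arrive before it can depart.");
        Status = StopStatus.Departed;
    }
}

public class PickupStop : Stop { }
public class DeliveryStop : Stop { }

// Inside the Shipment aggregate root: stops must progress in order.
public void Arrive(Guid stopId)
{
    var stop = _stops.Single(s => s.StopId == stopId);

    // Every stop before this one must have already departed.
    var previous = _stops.TakeWhile(s => s.StopId != stopId);
    if (previous.Any(s => s.Status != StopStatus.Departed))
        throw new InvalidOperationException("Stops must progress in order.");

    stop.Arrive();
}
```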

Draft Mode

A common scenario is what I call “Draft Mode”, where the invariants aren’t applicable yet. In other words, you want to create a model that has much looser constraints.

To illustrate this with my Shipment example, you may have multiple Orders to a single Restaurant that will ultimately all be placed on the same Shipment.

In this case, the Shipment still has all of its invariants, but what we likely want is to have our Shipment created from a Plan. The concept of a Plan is to associate multiple Orders and then generate a Shipment from them. This means we’re creating an Aggregate from another Aggregate.
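Here’s a rough sketch of that idea; the Plan and Order types here are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;

public record Order(Guid OrderId, Guid RestaurantId);

// Draft mode: a Plan has much looser constraints than a Shipment.
public class Plan
{
    private readonly List<Order> _orders = new();

    public void AddOrder(Order order) => _orders.Add(order);

    // Generate the strictly valid Aggregate from the draft.
    public Shipment CreateShipment(PickupStop restaurant, DeliveryStop home)
    {
        if (_orders.Count == 0)
            throw new InvalidOperationException("A Plan needs at least one Order.");

        // The Shipment factory still enforces all of its invariants.
        return Shipment.Create(restaurant, home);
    }
}
```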

ALWAYS Valid Domain Model

Having your domain model always in a valid state, right from the beginning, means you’ll have to write less defensive code because you absolutely know the data is valid. In my example, there will always be at least two stops; the first will be a Pickup and the last a Delivery. All stops will go through their progression, in order.

The factory is what sets everything up in a valid state and the Aggregate keeps us in a valid state.

Separating Concerns with Pipes & Filters

How do you separate concerns when processing a request? Typical concerns include logging, validation, exception handling, retries, and many more. One way is to build a request pipeline that separates these concerns by using the Pipes and Filters pattern. You can also build the pipeline using the Russian Doll model, which allows you to short-circuit at any point throughout the pipeline.

Pipes & Filters

Often when processing a request you need to handle various concerns. The Pipes & Filters pattern allows you to break up these concerns into a series of steps that form a pipeline. A request can be an HTTP request, but it can also be a message that’s being processed. If you’re using ASP.NET Core, you’re already using the Pipes & Filters pattern!

[Diagram: Pipes & Filters]

In the diagram above, the sender is sending a request to the receiver; however, there are filters in between. These filters are what handle various cross-cutting concerns.

The request will pass through the various filters until it finally reaches the Receiver. Both Sender and Receiver are unaware that the request passed through the filters.

As mentioned, the filters can be cross-cutting concerns such as logging, validation, caching, etc.

One important note is that filters are independent. They should be composable with various types of requests; as you’d expect, a logging filter could be used with many different types of requests.

You should be able to decide per request type, which filters you want to use. They should be plug-and-play.

This allows the receiver of the request to really focus on the behavior that it needs to perform and not other concerns.

A good example of where you may have used this is with ASP.NET Core Action Filters.
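For instance, a minimal action filter implements IActionFilter and runs around whatever action it’s applied to:

```csharp
using Microsoft.AspNetCore.Mvc.Filters;

// A filter that runs before and after any action it's applied to.
public class LoggingActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Runs before the action: logging, validation, etc.
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // Runs after the action.
    }
}
```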

Russian Doll

Another way of implementing Pipes & Filters is with the Russian Doll model. The concept is the same: you have a Sender and a Receiver with Filters in between. The difference, however, is that each filter calls the next filter in the pipeline.

Each filter is still independent but because the pipeline is using a uniform interface, each filter can be provided the next filter to be invoked.
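Here’s a minimal sketch of that uniform interface (the IFilter, FilterDelegate, and Pipeline names are my own for illustration):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Uniform interface: every filter gets the request plus the next step.
public delegate Task FilterDelegate<TRequest>(TRequest request);

public interface IFilter<TRequest>
{
    Task Invoke(TRequest request, FilterDelegate<TRequest> next);
}

// A generic, reusable filter: logs around whatever comes next.
public class LoggingFilter<TRequest> : IFilter<TRequest>
{
    public async Task Invoke(TRequest request, FilterDelegate<TRequest> next)
    {
        Console.WriteLine($"Handling {typeof(TRequest).Name}");
        await next(request); // hand off to the next filter (or the receiver)
        Console.WriteLine($"Handled {typeof(TRequest).Name}");
    }
}

public static class Pipeline
{
    // Compose filters around the receiver; the first filter runs outermost.
    public static FilterDelegate<TRequest> Build<TRequest>(
        FilterDelegate<TRequest> receiver,
        params IFilter<TRequest>[] filters) =>
        filters.Reverse()
               .Aggregate(receiver, (next, filter) => request => filter.Invoke(request, next));
}
```

Composing a pipeline per request type then just means picking which filters to pass in, which is what makes them plug-and-play.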

Since each filter calls the next filter, you can short-circuit the pipeline at any point.

[Diagram: Russian Doll]

In the diagram above, the Validation filter may choose to short-circuit the request if validation fails. Because it doesn’t call the next filter in the pipeline, the Receiver will never be executed to handle the request.
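Continuing the sketch above, a validation filter short-circuits simply by not calling next (PlaceOrderCommand and its Quantity property are assumptions for illustration):

```csharp
using System.Threading.Tasks;

public class ValidationFilter : IFilter<PlaceOrderCommand>
{
    public async Task Invoke(PlaceOrderCommand request, FilterDelegate<PlaceOrderCommand> next)
    {
        if (request.Quantity <= 0)
        {
            // Short-circuit: next is never called, so the Receiver never runs.
            return;
        }

        await next(request);
    }
}
```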

A good example of where you may have used this is with ASP.NET Core Middleware.

Messaging

Processing a message is often very similar to processing any other type of request, except there’s usually no in-process response. There are still various concerns when processing a message that creating a pipeline helps with.

As an example, the Brighter messaging library supports Pipes & Filters with the Russian Doll model via attributes. This feels very similar to action filters in ASP.NET Core.

In the example below, the PlaceOrderHandler has three different filters that run before the handler itself: Logging, Retry, and Validation. I’ve implemented Logging as a generic filter that can be used with any type of request, while Retry and Validation are specific to the PlaceOrderCommand.
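Here’s a sketch of roughly what that looks like. Brighter ships a RequestLogging attribute (signature quoted from memory), while the Retry and Validation attributes stand in for custom filters you’d write yourself:

```csharp
using System;
using Paramore.Brighter;

public class PlaceOrderCommand : Command
{
    public PlaceOrderCommand() : base(Guid.NewGuid()) { }
}

public class PlaceOrderHandler : RequestHandler<PlaceOrderCommand>
{
    [RequestLogging(step: 1, timing: HandlerTiming.Before)] // generic: works for any request
    [Retry(step: 2)]      // hypothetical custom attribute, specific to this command
    [Validation(step: 3)] // hypothetical custom attribute, specific to this command
    public override PlaceOrderCommand Handle(PlaceOrderCommand command)
    {
        // Core behavior only; cross-cutting concerns live in the filters above.
        return base.Handle(command);
    }
}
```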

Pipes & Filters

Creating a pipeline can be a powerful way to separate various concerns when processing a request. It allows you to focus on each individual concern as its own filter within the pipeline.

You can create generic filters that can be used with any type of request, or very specific filters that are only used for a specific request. You can also leverage the Russian Doll model to short-circuit your request pipeline.

Processing Large Payloads with the Claim Check Pattern

How do you handle processing large payloads? Maybe a user has uploaded a large image that needs to be resized to various sizes. Or perhaps you need to perform some ETL on a text file and interact with your database. One way is with a message broker, to prevent blocking the calling code. Combined with the Claim Check pattern, it keeps message sizes small so you don’t exceed any message limits or cause performance issues with your message broker.

The pattern is to send the payload data to an external service or blob storage, then put a reference ID/pointer to the blob storage location within the message sent to the message broker. The consumer can then use the reference ID/pointer to retrieve the payload from blob storage. Just like a claim check! This keeps message sizes small so they don’t overwhelm your message broker.

In-Process

As an example, if a user is uploading a large file to our HTTP API, and we then need to process that file in some way, this could take a significant amount of time. Let’s say it’s simply a large text file where we need to iterate through the contents of the file, extract the data we need, then save the data to our database. This is a typical ETL (Extract, Transform, Load) process.

There are a couple of issues with doing this ETL when the user uploads the file. The first is that we’ll be blocking the user while the ETL occurs; if this is a long process, it could take a significant amount of time. The second is that if there are any failures throughout processing, we may only partially process the file.

What I’d rather do is accept the file in our HTTP API, then return to the user/browser that the upload is complete and that the file will be processed.

Out of Process

To move the processing of the file into another separate process, we can leverage a queue.

First, the Client/Browser will upload the file to our HTTP API.

Once the file has been uploaded, we create a message and send it to the queue of our message broker.

Once the message has been sent to the queue, we can then complete the request from the client/browser.

Now asynchronously a consumer can receive the message from the message broker and do the ETL work needed.
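As a sketch, the message at this point might embed the file contents directly; _queue is a stand-in for whatever broker client you’re using:

```csharp
// Naive message: the entire uploaded file travels in the message body.
public record ProcessFileMessage(string FileName, byte[] Contents);

// Producer: enqueue the work once the upload completes.
await _queue.SendAsync(new ProcessFileMessage(fileName, fileBytes));
```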

Large Messages

There is one problem with this solution. If the file being uploaded is large and we’re putting the contents into the message on our queue, that means we’re going to have very large messages in our queue.

This isn’t a good idea for a few reasons. The first is that your message broker might not even support the size of the messages you’re trying to send. The second is that large messages can have performance implications for the message broker, since you’re pushing a large amount of data to it and then pulling that large message back out. Finally, your message broker may have a total volume limit: the limit may not be on the number of messages but rather on their total size, which means that because the messages themselves are so large, you may only be able to have a limited number of them.

This is why it’s recommended to keep messages small. But how do you keep a message small when you need to process a large file? That’s where the claim check pattern comes in.

First, when the file is uploaded to our HTTP API, it will upload the file to shared blob/file storage. Somewhere that both the producer and consumer can access.

Once uploaded to blob/file storage, the producer then creates a message that contains a unique reference to the file in blob/file storage. This could be a key, a file path, or anything the consumer understands and can use to retrieve the file.
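As a sketch, using Azure Blob Storage as the assumed shared storage (any blob/file store works; _container is a BlobContainerClient and _queue is again a hypothetical broker client):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

// The message now carries only a reference, not the payload.
public record ProcessFileMessage(string BlobName);

// Inside the HTTP API (producer).
public async Task HandleUpload(Stream file, string fileName)
{
    // 1. "Check" the payload into shared blob storage.
    var blobName = $"uploads/{Guid.NewGuid()}-{fileName}";
    await _container.GetBlobClient(blobName).UploadAsync(file);

    // 2. Send a small message containing only the claim check.
    await _queue.SendAsync(new ProcessFileMessage(blobName));
}
```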

Now the consumer can receive the message asynchronously from the message broker.

The consumer then uses the unique reference or identifier in the message to read the file out of blob/file storage and perform the relevant ETL work.
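And the consumer side of the same sketch, redeeming the claim check:

```csharp
using System.IO;
using System.Threading.Tasks;

// Consumer: redeem the claim check by reading the blob the message points to.
public async Task Handle(ProcessFileMessage message)
{
    var blob = _container.GetBlobClient(message.BlobName);

    using var stream = await blob.OpenReadAsync();
    using var reader = new StreamReader(stream);

    string? line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        // Extract, transform, and load each record into the database.
    }
}
```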

Claim Check Pattern

If you have a large payload from a user that you need to process, offload that work asynchronously to a separate process using a queue and a message broker. But use the Claim Check pattern to keep your messages small: have the producer and consumer share blob or file storage, where the producer can upload the file and then create a message that contains a reference to it. When the consumer receives the message, it can use the reference to read the file from blob storage and process it.
