Leaking Value Objects from your Domain

Value Objects are a great way to explicitly capture concepts within your domain. They are immutable, always in a valid state, provide behavior, and are defined by their value. This sounds a lot like Messages (Commands, Events) that are also immutable and should be in a valid state. However, exposing your Value Objects by using them within Commands or Events can have a negative impact on your ability to evolve your domain model.


Value Objects

First, let’s cover what Value Objects are, since many characteristics define them. They are explicit domain concepts that should always be in a valid state. They are immutable once created, which means they must be created in a valid state. Since they cannot be mutated, they also have the benefit of being defined by their value. Lastly, they should have behavior, a characteristic that is often overlooked.

A typical example of a Value Object is Money. Money isn’t just a decimal, especially in a multi-currency environment. The combination of the amount and the currency is required to make it valid.

Another typical example is Distance. Again, a distance isn’t simply a number; it’s a number along with a unit of measure.
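As a minimal sketch in C# (a record gives us immutability and value-based equality; the Money type and its members here are illustrative, not taken from any actual sample):

using System;

public record Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        // Always created in a valid state: a currency is required.
        if (string.IsNullOrWhiteSpace(currency))
            throw new ArgumentException("A currency is required.", nameof(currency));

        Amount = amount;
        Currency = currency;
    }

    // Behavior lives on the Value Object: adding enforces the invariant
    // that amounts in different currencies cannot be combined.
    public Money Add(Money other)
    {
        if (other.Currency != Currency)
            throw new InvalidOperationException("Cannot add different currencies.");
        return new Money(Amount + other.Amount, Currency);
    }
}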

Messaging

Commands and Events in a Message or Event Driven Architecture look very similar to Value Objects. Messages are explicit domain concepts that are immutable and in a valid state. So can you have Value Objects inside a Command or Event?
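For illustration, here’s a rough sketch of a command that exposes Value Objects directly (the type shapes are mine, not the original sample):

// Value Objects: explicit concepts within our boundary.
public record Product(string Sku, string Name);
public record Currency(string Code, string Symbol);

// The command leaks both Value Objects into the message contract.
public record PlaceOrderCommand(Product Product, decimal Amount, Currency Currency);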

In the example above, the PlaceOrderCommand has two Value Objects: Product and Currency. These are explicit concepts we’ve defined within a boundary.

Since the purpose of messages is decoupling between boundaries, putting these types in messages means that other boundaries must be aware of them as concepts.

Putting Value Objects in your messages means you’re going to leak details outside of your service boundary. The consequence: since messages are contracts, you’ll need to think about versioning any time you want to change a Value Object that’s part of a message.

Rather, you want to keep domain concepts from leaking outside of your service boundary. You want to be able to refactor, change, rename, or remove concepts within your service boundary without having to concern yourself with other services. The purpose of messaging is decoupling, with messages as a stable contract. The moment you leak something like a Value Object into a message, you’ve coupled other services to concepts within your service boundary.

Conversion

Instead of leaking Value Objects, you can create Messages/DTOs that may look similar. Have some type of conversion that accepts your Value Objects as parameters but builds a message made up of simple primitives or nested types.
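A sketch of what that conversion might look like, reusing the Product and Money Value Objects from earlier (the factory method is illustrative):

// The message contract is made up of primitives only; no domain types leak out.
public record PlaceOrder(string Sku, string ProductName, decimal Amount, string CurrencyCode);

public static class PlaceOrderFactory
{
    // Accepts Value Objects, but builds a message of simple primitives.
    public static PlaceOrder Create(Product product, Money price) =>
        new PlaceOrder(product.Sku, product.Name, price.Amount, price.Currency);
}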

Another option is to define a nested type as part of the message contract itself.
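Something like this, where the nested type is owned and versioned by the contract rather than by the domain model (again, an illustrative sketch):

public record PlaceOrder(string Sku, string ProductName, MoneyDto Price)
{
    // Nested type defined by the contract, independent of the domain's Money Value Object.
    public record MoneyDto(decimal Amount, string CurrencyCode);
}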

Don’t Leak Value Objects

They are as much a domain concept as Entities are. If you’re not going to expose Entities, then do not expose Value Objects. Although they may seem trivial, you’ll be handcuffed into versioning your messages if you need to change them in any way.

Messages are contracts for other services. You want message contracts to have some stability. Although Value Objects have similar characteristics to messages (Commands and Events), Value Objects are meant to stay internal, while Messages are meant for other service boundaries.

Source Code

Developer-level members of my CodeOpinion YouTube channel get access to the full source for any working demo application that I post on my blog or YouTube. Check out the membership for more info.


Data Consistency Between Microservices

In a system where many users are collaborating concurrently, it can be difficult to manage data consistency between microservices. Do you need consistency? Maybe, maybe not. It’s a discussion to have with the business about the impact of inconsistent data. If consistency is important, then one solution is to avoid the problem in the first place: needing data from another service to perform business logic is possibly a sign that you have misaligned boundaries.


Commands

To illustrate using inconsistent data, I’ll use the example of placing an Order. The Client/Caller makes a request to our Order/Sales boundary.

The Order/Sales boundary needs to call the Warehouse to get the quantity on hand of the product being ordered. If there is no Quantity on Hand in the warehouse, then we cannot place that Order.

After we get the Quantity on Hand from the warehouse, we then call the Catalog boundary to get the Price of the Product we are ordering.

Here’s a code sample of what that might look like.
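A rough reconstruction, with the warehouse/catalog clients and the repository as illustrative stand-ins:

public async Task<IActionResult> PlaceOrder(PlaceOrderRequest request)
{
    // Call the Warehouse boundary for the quantity on hand.
    var quantityOnHand = await _warehouseClient.GetQuantityOnHand(request.Sku);
    if (quantityOnHand <= 0)
        return BadRequest("Product is out of stock.");

    // Call the Catalog boundary for the current price.
    var price = await _catalogClient.GetPrice(request.Sku);

    var order = new Order(request.Sku, request.Quantity, price);
    await _orderRepository.Save(order);

    return Ok();
}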

Concurrency

The issue with the above is that the moment we retrieve the Quantity on Hand from the Warehouse and the Price from the Catalog, we immediately have stale data. There is no consistency between those pieces of data and the saving of our new Order.

In a collaborative environment, you will have many users/processes interacting concurrently with various parts of the system.


When we placed the order, we might have had another client at the same time do an inventory adjustment which then set the Quantity on Hand to 0. We could also have another client change the price for the product we’re ordering.

There is no data consistency.

If you’re in a monolith or share the same database, you can have all statements (selects and inserts/updates/deletes) use the same database transaction with the correct isolation level (serializable). That prevents the data you read from changing underneath you before you commit, which gives you consistency.

The code sample above, modified to use a serializable transaction, now looks like this:
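A sketch using System.Transactions with serializable isolation; the repositories remain illustrative stand-ins:

using (var scope = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Serializable },
    TransactionScopeAsyncFlowOption.Enabled))
{
    // The reads and the insert now participate in one serializable transaction,
    // so the quantity and price cannot change underneath us before we commit.
    var quantityOnHand = await _warehouseRepository.GetQuantityOnHand(request.Sku);
    if (quantityOnHand <= 0)
        return BadRequest("Product is out of stock.");

    var price = await _catalogRepository.GetPrice(request.Sku);

    var order = new Order(request.Sku, request.Quantity, price);
    await _orderRepository.Save(order);

    scope.Complete();
    return Ok();
}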

Data Consistency

If you’re not using a single database and instead have a database per service, which I recommend, then how is data consistency even possible? It isn’t, without a distributed transaction, which you likely won’t use.

The root of the problem is querying data from other boundaries that will be immediately inconsistent the moment it’s returned, just as in my first example without a serializable transaction. If you’re making HTTP or gRPC calls to other services to retrieve data that you require to perform business logic, you’re dealing with inconsistent data. If you store a local cache copy that’s eventually consistent, you’re dealing with inconsistent data.

Is having inconsistent data an issue? Go ask the business. If it is, then all the relevant data needs to live within the same boundary that requires it.

There were two pieces of data we ultimately needed: the Quantity on Hand and the Price.

We required the Quantity on Hand from the warehouse. In reality, in the distribution/warehouse domain, you don’t rely on the “Quantity on Hand”. When dealing with physical goods, the point of truth is what is actually in the warehouse, not what the software/database states. Products can be damaged, stolen, or lost without the system knowing about it in real time. The system is eventually consistent with the real world.

Because of this, Sales has the concept of Available to Promise (ATP), which is a business function for customer order promising based on what’s been ordered but not yet shipped, what’s been purchased but not yet received, etc.

The catalog boundary also contained the price of the Product. But why? Why would the Catalog service own the selling price? Wouldn’t the Sales boundary own the selling price?

If we re-align where data ownership belongs within various boundaries, we can get back to having consistency with the right level of transaction isolation.

In the code below, we can go back to using a serializable transaction because our Sales boundary has the Price and Available to Promise (ATP) in this boundary’s own database. No longer are we using inconsistent data or relying on querying other boundaries.
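A sketch, with the Sales boundary’s data access as an illustrative stand-in:

using (var scope = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Serializable },
    TransactionScopeAsyncFlowOption.Enabled))
{
    // Both values now live in the Sales boundary's own database.
    var availableToPromise = await _salesDb.GetAvailableToPromise(request.Sku);
    if (availableToPromise < request.Quantity)
        return BadRequest("Cannot promise the requested quantity.");

    var price = await _salesDb.GetPrice(request.Sku);

    var order = new Order(request.Sku, request.Quantity, price);
    await _salesDb.Save(order);

    scope.Complete();
    return Ok();
}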


My TOP Patterns for Event Driven Architecture

Here are my top 5 patterns and concepts (Outbox, Idempotent Consumers, Event Choreography, Orchestration, Retry/Dead Letter) for Event Driven Architecture that you’ll likely implement. Why? If you’re new to Event Driven Architecture, you’ll encounter many different problems, and most of them have well-established patterns or concepts you can leverage to deal with them.


Outbox Pattern

Generally, you’re publishing events because you want to notify another boundary that something has occurred within your boundary. There was likely some type of state change that you want to notify other boundaries about. This could be part of a long-running business process or for state propagation.

The issue is that your state changes are likely stored in a database, while you’re publishing messages to a message broker. You cannot reliably do both without a distributed transaction.


If you successfully write to your database, but the message broker or queue you’re publishing to is unavailable or fails, then you’re never going to publish the event.


This could have serious implications, causing inconsistencies or breaking workflows between services in a long-running business process.

The Outbox Pattern solves this problem by writing the messages to be published to the database, within the same transaction as your state changes. Separately, a “Publisher” process pulls the messages from the database and then sends them to your Message Broker or Queue.
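A minimal sketch of both halves; the helper methods and the OutboxMessage type are illustrative, and the exact shape depends on your data access and broker libraries:

// 1) Within the same database transaction as the state change,
//    save the event as an outbox record instead of publishing it directly.
using (var transaction = connection.BeginTransaction())
{
    await SaveOrder(order, transaction);
    await SaveOutboxMessage(
        new OutboxMessage(Guid.NewGuid(), "OrderPlaced", Serialize(order)),
        transaction);
    transaction.Commit();
}

// 2) A separate publisher process polls the outbox table and publishes
//    each record to the broker, deleting the record only on success.
foreach (var message in await GetUnpublishedOutboxMessages())
{
    await _messageBroker.Publish(message);
    await DeleteOutboxMessage(message.Id);
}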


The outbox pattern is one of my top patterns for event driven architecture if you need to reliably publish events.

For more info on the Outbox Pattern, check out my post and video with code samples of how it works.

Idempotent Consumers

Most Message Brokers support “at least once” messaging. This means a message is delivered to a consumer at least once; in other words, it could be delivered more than once.

Processing a message more than once could have negative effects beyond what is intended.

There are many different reasons why a message could be delivered more than once, but as an example, one reason is the Outbox pattern described above. When the Outbox Publisher pulls a message from the database and publishes it to the Message Broker or Queue, it must then delete the record from the database. If for some reason that delete fails, the record will still exist, and the Outbox Publisher will ultimately send the message to the broker again.

Idempotent Consumers are able to handle processing the same message more than once without any adverse side effects.

Some consumers may be naturally idempotent, meaning they can consume the same message multiple times without any additional side effects.

For consumers that would have side effects, the key to handling duplicate messages is to record a unique identifier (Message-ID) for each message that has been processed. Just like the outbox pattern, this means persisting the Message-ID along with any state changes in the same database transaction.
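A sketch of an idempotent consumer, with illustrative data-access helpers:

public async Task Handle(OrderPlaced message)
{
    using (var transaction = connection.BeginTransaction())
    {
        // Already processed this Message-ID? Then this is a duplicate delivery.
        if (await HasProcessed(message.MessageId, transaction))
            return;

        // Apply the state change and record the Message-ID atomically.
        await ApplySideEffects(message, transaction);
        await RecordProcessed(message.MessageId, transaction);

        transaction.Commit();
    }
}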

For more info on creating Idempotent Consumers, check out my post and video with code samples of how it works.

Event Choreography & Orchestration

Many different boundaries are often involved in a long-running business process. The challenge is how to handle a failure in one part of the process.

If each service has its own database, there’s no easy way to roll back changes that have happened in the process prior to the failure. How do you handle the lack of a distributed transaction or two-phase commit?

Event Choreography is driven entirely by events being consumed and published by various boundaries within a system. There is no centralized coordination or logic. A long-running process workflow is created by one boundary publishing an event, another consuming it and performing some action, then publishing its own event. Depending on the workflow there could be many services involved but they are entirely decoupled and have no knowledge about how the entire workflow works.
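As a sketch, each boundary only knows which events it consumes and which it publishes; the events, handlers, and _messageBroker here are assumed stand-ins:

// Warehouse boundary: reacts to OrderPlaced, then publishes its own event.
public class WarehouseHandler
{
    public async Task Handle(OrderPlaced message)
    {
        await AllocateStock(message.OrderId);
        await _messageBroker.Publish(new StockAllocated(message.OrderId));
    }
}

// Billing boundary: reacts to StockAllocated, continuing the workflow.
// Neither boundary knows about the overall workflow.
public class BillingHandler
{
    public async Task Handle(StockAllocated message)
    {
        await ChargeCustomer(message.OrderId);
        await _messageBroker.Publish(new PaymentCollected(message.OrderId));
    }
}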

Orchestration provides a centralized place to define the workflow for a long-running business process. It consumes events but may send Commands to a specific boundary, generally still asynchronous via a message queue. Orchestration is telling other services to perform a specific action. Those services in turn publish events that the orchestrator consumes to start the next part of the workflow.
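A sketch of the orchestration side, where the workflow lives in one place (names again illustrative):

// The orchestrator owns the workflow: it consumes each event
// and sends a command to the specific boundary that acts next.
public class OrderProcessOrchestrator
{
    public async Task Handle(OrderPlaced message) =>
        await _messageBroker.Send(new AllocateStock(message.OrderId));

    public async Task Handle(StockAllocated message) =>
        await _messageBroker.Send(new ChargeCustomer(message.OrderId));
}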

For more info on Event Choreography & Orchestration, check out my post and video with code samples of how it works.

Failures

Transient Faults are unpredictable and could be caused by network issues, availability, or latency with the service you’re communicating with.

In an event driven architecture, you get the benefit of having various ways of handling failures. The most common approach for transient failures is an immediate retry.

For example, say you’re consuming a message and have to interact with some other dependency: a database, a cache, or a 3rd party service. If that dependency fails while you’re consuming a message, simply retry consuming the message again.

If failures continue, an Exponential Backoff adds more time/delay between retries. You might retry immediately, and if the failure continues, wait 5 seconds before the next retry. If it still fails, wait even longer, say 10 seconds, before retrying again. You can configure the exponential backoff intervals and the total number of retries.

If all retries fail, you may choose to move the message that cannot be properly consumed to a dead letter queue. This allows you to continue processing other messages without losing the one that cannot be processed. Moving a message to a Dead Letter Queue lets you attempt to process it later or investigate why the consumer is failing. You’re not losing the actual message.
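A sketch of a retry loop with exponential backoff that falls back to a dead letter queue; ProcessMessage and MoveToDeadLetterQueue are illustrative, and most brokers and messaging libraries provide this behavior as configuration rather than hand-rolled code:

// Immediate first attempt, then increasingly longer delays.
var delays = new[] { TimeSpan.Zero, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(10) };

for (var attempt = 0; attempt < delays.Length; attempt++)
{
    try
    {
        await Task.Delay(delays[attempt]);
        await ProcessMessage(message);
        return; // Success: stop retrying.
    }
    catch (Exception)
    {
        if (attempt == delays.Length - 1)
        {
            // All retries failed: park the message rather than lose it.
            await MoveToDeadLetterQueue(message);
            return;
        }
        // Transient failure: fall through to the next, longer delay.
    }
}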

Handling failures with these kinds of patterns is required for an event driven architecture to be resilient and reliable.

For more info on Handling Failures in a Message Driven Architecture, check out my post and video.
