If you’re using ASP.NET Core in AWS under any type of load balanced scenario, whether through Elastic Beanstalk, an ALB with ECS, etc., you will need to share the data protection keys. This is because each instance of your application needs to be using the exact same keys.
This isn’t an issue if you are using a single instance as the keys will be stored in memory.
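One way to share the keys is to persist the key ring to a store every instance can reach. Here’s a minimal sketch using the Microsoft.AspNetCore.DataProtection.StackExchangeRedis package; the Redis host name and key name below are placeholder assumptions, not values from this post:

```csharp
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // A Redis instance reachable by every instance behind the load balancer.
        var redis = ConnectionMultiplexer.Connect("your-redis-host:6379");

        // Persist the data protection key ring to Redis so every instance
        // encrypts/decrypts cookies, anti-forgery tokens, etc. with the same keys.
        services.AddDataProtection()
            .PersistKeysToStackExchangeRedis(redis, "DataProtection-Keys");
    }
}
```

Other providers exist as well (file share, Azure Blob Storage, AWS Systems Manager); the point is simply that the store is shared, not local to one instance.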
One of the most common misconceptions about CQRS is that it implies eventual consistency: that you must have different data sources for your commands and queries, meaning you use one data source for commands/writes and an entirely different data source for queries/reads. This is simply untrue.
This assumption implies that your query/read data source will be eventually consistent with the command/write side. The thinking goes that your commands will write to their data source, then emit events that are processed independently to update your query/read database.
If you’re unfamiliar with CQRS, I highly recommend checking out some other posts I’ve written about CQRS before reading further.
One of the benefits of applying CQRS is that you can have different representations of your data. Your write model may look very different than your read model.
However, this doesn’t mean you need to have different data sources and use event handlers to build your query model.
If you’re just getting into applying CQRS, you can use the exact same underlying data model for both commands/writes and queries/reads. There’s nothing saying you can’t.
However, if you’re using a relational database, you can get all the benefits of tailored query models by mapping your query/read models to database views. Or, if your database supports it, materialized views.
If you’re using Entity Framework Core, this is pretty straightforward: define your query types in the OnModelCreating method of your DbContext.
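As a sketch, here’s how a view-backed read model could look. The OrderSummary type and view name are hypothetical examples, not from an actual schema; the mapping uses EF Core’s keyless entity support (EF Core 3.0+):

```csharp
using Microsoft.EntityFrameworkCore;

// A hypothetical read model tailored to what a screen needs,
// which can look very different from the write model.
public class OrderSummary
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }                 // command/write model
    public DbSet<OrderSummary> OrderSummaries { get; set; }  // query/read model

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map the read model to a database view. HasNoKey marks it as
        // query-only: EF will never try to track or save changes to it.
        modelBuilder.Entity<OrderSummary>()
            .HasNoKey()
            .ToView("OrderSummaries");
    }
}
```

Since the view reads from the same tables your commands write to, the read model is always up to date the moment a command commits.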
This means your command/write model and query/read models are always 100% consistent. You’re not dealing with eventual consistency.
Another bonus is you’re not writing event handlers to update your read/query database which also eliminates a pile of code and complexity.
From my experience, when applied incorrectly, eventual consistency can be a giant pain and not at all what your users are expecting.
Most often users are expecting to click a button and see the results immediately. Obviously, there are many ways to handle this, but if you’re new to CQRS, my initial recommendation is to keep things as simple as possible and that means keeping data consistent.
Create a class that changes state (command) and create a separate class that reads state (query).
Use SQL Views (or materialized views) to map tailored queries.
Use something like AutoMapper for composing the query result.
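The first recap point can be sketched roughly as follows. I’m assuming a hypothetical AppDbContext with an Orders set for writes and a view-backed OrderSummaries set for reads; the class and property names are illustrative only:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Command: changes state, returns nothing.
public class ShipOrderCommand
{
    private readonly AppDbContext _context;
    public ShipOrderCommand(AppDbContext context) => _context = context;

    public async Task ExecuteAsync(int orderId)
    {
        var order = await _context.Orders.FindAsync(orderId);
        order.Status = OrderStatus.Shipped;
        await _context.SaveChangesAsync();
    }
}

// Query: reads state from the tailored read model, changes nothing.
public class GetOrderSummaryQuery
{
    private readonly AppDbContext _context;
    public GetOrderSummaryQuery(AppDbContext context) => _context = context;

    public Task<OrderSummary> ExecuteAsync(int orderId) =>
        _context.OrderSummaries.SingleAsync(s => s.OrderId == orderId);
}
```

Notice there’s no bus, no events, no separate database: just two classes with clearly separated responsibilities.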
If using views isn’t an option, and you’re using the same relational database for both reads and writes, another option is to wrap the entire operation in a transaction. This means the database changes made for the command, as well as the changes to the records backing your queries, happen within the same transaction.
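Here’s a sketch of that with EF Core, assuming a hypothetical context with a write-side Orders set and a separate denormalized OrderSummaryRows table for reads. Note that a single SaveChangesAsync call is already transactional on its own; the explicit transaction matters when the command-side and query-side records are saved in separate steps:

```csharp
using System.Threading.Tasks;

public async Task ShipOrderAsync(AppDbContext context, int orderId)
{
    // If this transaction is never committed, disposing it rolls back
    // both saves, so the read side can never drift from the write side.
    using var transaction = await context.Database.BeginTransactionAsync();

    // Command side: modify the write model.
    var order = await context.Orders.FindAsync(orderId);
    order.Status = OrderStatus.Shipped;
    await context.SaveChangesAsync();

    // Query side: update the denormalized read records in the same transaction.
    var summary = await context.OrderSummaryRows.FindAsync(orderId);
    summary.Status = "Shipped";
    await context.SaveChangesAsync();

    // Both sides commit together, or not at all.
    await transaction.CommitAsync();
}
```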
I’ll elaborate more on this, eventual consistency, event sourcing and more in coming posts.
Fat Controller CQRS Diet
I’ve blogged a bit about how to implement CQRS without any of the other fluff. You can check out my Fat Controller CQRS Diet blog series as well as a related talk:
If you have any questions or comments, please let me know on twitter as I will focus my posts on those questions and comments.
In this section, I’m going to cover how to deal with scaling SignalR when in a server farm behind a load balancer.
Typically to scale we would introduce a load balancer and additional instances of our application.
Introducing multiple instances of our application with SignalR behind a load balancer is a problem because each instance of SignalR keeps track of only its own connected clients.
This blog post is a part of a course that is a complete step-by-step guide on how to build real-time web applications using ASP.NET Core SignalR. By the end of this course, you’ll be able to build real-world, scalable, production applications using the tools and techniques provided in this course.
If you haven’t already, check out the prior sections of this course.
The diagram below illustrates that we have 3 instances of our ASP.NET Core application behind a load balancer. When Client A makes a SignalR connection, it is passed off to Instance #1. When Client B connects, it may get passed to Instance #2. Each instance is unaware of the other’s connected clients, and there is no communication between them.
Meaning, if you call Clients.All.SendAsync() from a Server Hub or HubContext, you will only be sending to the clients connected to that instance.
You can use Redis as a backplane, which keeps information about all connected clients. SignalR uses Redis’ pub/sub feature to send messages to the other servers. This solves the issue: when you call Clients.All.SendAsync(), the message will be sent to all clients from all servers.
Depending on which package you are using, you must chain the Redis registration onto AddSignalR() in your Startup’s ConfigureServices().
You can also specify configuration options. In this example, I’m configuring the ClientName of the Redis connection.
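As a sketch, here’s what that registration could look like with the Microsoft.AspNetCore.SignalR.StackExchangeRedis package (ASP.NET Core 3.0+); the connection string and client name are placeholder assumptions:

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR()
            .AddStackExchangeRedis("your-redis-host:6379", options =>
            {
                // ClientName makes this app's connections easy to identify
                // when inspecting the Redis instance (e.g. CLIENT LIST).
                options.Configuration.ClientName = "MyAppName";
            });
    }
}
```

On older versions of ASP.NET Core (2.x), the package and extension method are Microsoft.AspNetCore.SignalR.Redis and AddRedis() instead.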
To confirm my SignalR Hub is now using Redis, I’ll take a look at the connections in my Redis instance I’m hosting in a Docker container.
There is one caveat when using a Redis backplane. If you are using anything other than WebSockets, meaning you are using Server-Sent Events or Long Polling, you must configure your load balancer to support sticky sessions. Sticky sessions are also known as client affinity and can be enabled in both Azure and AWS.
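If sticky sessions aren’t an option for you, one alternative sketch is to restrict the hub to WebSockets only, so there is never a fallback to Server-Sent Events or Long Polling. The hub class and route below are hypothetical, and the trade-off is that clients unable to negotiate a WebSocket connection will fail to connect at all:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http.Connections;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // Allow only the WebSockets transport for this hub, so no
            // connection ever depends on hitting the same instance twice.
            endpoints.MapHub<ChatHub>("/chatHub", options =>
            {
                options.Transports = HttpTransportType.WebSockets;
            });
        });
    }
}
```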