Roundup #41: Apache Spark, Strongly Typed EntityIDs, Azure Workers, Automapper, NetCore3 Progress

Here are the things that caught my eye this week in .NET.  I’d love to hear what you found most interesting this week.  Let me know in the comments or on Twitter.

Introducing .NET for Apache® Spark™ Preview

Today at Spark + AI Summit we are excited to announce .NET for Apache Spark. Spark is a popular open source distributed processing engine for analytics over large data sets. Spark can be used for processing batches of data, real-time streams, machine learning, and ad-hoc queries.

Link: https://devblogs.microsoft.com/dotnet/introducing-net-for-apache-spark/

Using strongly-typed entity IDs to avoid primitive obsession

Have you ever requested an entity from a service (web API / database / generic service) and got a 404 / not found response when you’re sure it exists? I’ve seen it quite a few times, and it sometimes comes down to requesting the entity using the wrong ID. In this post I show one way to avoid these sorts of errors by acknowledging the problem as primitive obsession, and using the C# type system to catch the errors for us.

Link: https://andrewlock.net/using-strongly-typed-entity-ids-to-avoid-primitive-obsession-part-1/

.NET Core Workers in Azure Container Instances

In .NET Core 3.0 we are introducing a new type of application template called Worker Service. This template is intended to give you a starting point for writing long-running services in .NET Core. In this walkthrough you’ll learn how to use a Worker with Azure Container Registry and Azure Container Instances to get your Worker running as a microservice in the cloud.

Link: https://devblogs.microsoft.com/aspnet/dotnet-core-workers-in-azure-container-instances/

AutoMapper Usage Guidelines

A list of dos and don’ts for AutoMapper best practices. I just saw that a new version was released, and it made me think of this post.

Link: https://jimmybogard.com/automapper-usage-guidelines/

.NET Core 3.0 – progress on bugs, weekly update from 4/24

Link: https://twitter.com/ziki_cz/status/1121126233792585728

Using AWS Parameter Store for ASP.NET Core Data Protection Keys

If you’re using ASP.NET Core in AWS under any type of load-balanced scenario, whether through Elastic Beanstalk, an ALB with ECS, or similar, you will need to share the data protection keys. This is because each instance of your application needs to be using the exact same keys.

This isn’t an issue if you are using a single instance, as the keys will be stored in memory.

If you haven’t yet used the AWS SDK, I highly recommend first checking out my quick start on configuring the AWS SDK in ASP.NET Core.

AWS Systems Manager Parameter Store

One option is to use Parameter Store to store the data protection keys. Thankfully, AWS has released a nice little package to make this really simple.

First, add the Amazon.AspNetCore.DataProtection.SSM package to your csproj.

Now you can use the PersistKeysToAWSSystemsManager method, passing the prefix as the parameter.
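
As a rough sketch, the registration looks something like this (the "/MyApplication/DataProtection" prefix is just a placeholder; use whatever prefix makes sense for your application):

    using Microsoft.AspNetCore.DataProtection;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddDataProtection()
                // Keys are persisted to Parameter Store under this prefix,
                // e.g. /MyApplication/DataProtection/key-{GUID}.
                .PersistKeysToAWSSystemsManager("/MyApplication/DataProtection");
        }
    }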

That’s it! It’s that simple. Now when you run your application, you will see that a new parameter has been created with the prefix you specified, followed by key-{GUID}.

ASP.NET Core Data Protection

If you want to persist data protection keys to somewhere like S3, check out this other post.

If you have any questions or comments, please let me know on Twitter, as I will focus my posts on those questions and comments.

CQRS without Multiple Data Sources

One of the most common misconceptions about CQRS is that it implies eventual consistency: that you must have different data sources for your commands and queries, with one data source for commands/writes and an entirely different data source for queries/reads. This is simply untrue.

This assumption implies that your query/read data source will be eventually consistent with the command/write side, because your commands would write to their data source and then emit events that are processed independently to update your query/read database.

If you’re unfamiliar with CQRS, I highly recommend checking out some other posts I’ve written about CQRS before reading further.

Different Models

One of the benefits of applying CQRS is that you can have different representations of your data. Your write model may look very different from your read model.

However, this doesn’t mean you need to have different data sources and use event handlers to build your query model.

Views

If you’re just getting into applying CQRS, you can use the exact same underlying data model for both commands/writes and queries/reads. There’s nothing saying you can’t.

However, if you’re using a relational database, you can get all the benefits of tailored query models by mapping your query/read models to database views, or, if your database supports it, materialized views.

If you’re using Entity Framework Core, this is pretty straightforward: define your query types in the OnModelCreating method of your DbContext.
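
As a minimal sketch using the EF Core 2.x query type API (the OrderSummary type and vw_OrderSummaries view are hypothetical examples):

    using Microsoft.EntityFrameworkCore;

    // Hypothetical read model, shaped for a specific screen.
    public class OrderSummary
    {
        public int OrderId { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }

    public class AppDbContext : DbContext
    {
        // Query types are read-only: no key, no change tracking.
        public DbQuery<OrderSummary> OrderSummaries { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Map the read model to a database view instead of a table.
            modelBuilder.Query<OrderSummary>().ToView("vw_OrderSummaries");
        }
    }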

Consistency

This means your command/write model and query/read models are always 100% consistent. You’re not dealing with eventual consistency.

Another bonus is that you’re not writing event handlers to update your read/query database, which eliminates a pile of code and complexity.

From my experience, when applied wrong, eventual consistency can be a giant pain and not at all what your users are expecting.

Most often users are expecting to click a button and see the results immediately. Obviously, there are many ways to handle this, but if you’re new to CQRS, my initial recommendation is to keep things as simple as possible and that means keeping data consistent.

Start simple:

  • Create a class that changes state (command) and a separate class that reads state (query); a sketch follows this list.
  • Use SQL views (or materialized views) to map tailored query models.
  • Use something like AutoMapper to compose the query result.
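
To make the first point concrete, here is a minimal sketch. The handler shapes are illustrative and not tied to any framework; it reuses the AppDbContext and OrderSummary from the sketch above, and assumes a hypothetical Orders DbSet whose Order entity has a Cancel method:

    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    // Command: changes state, exposes no query surface.
    public class CancelOrderHandler
    {
        private readonly AppDbContext _db;
        public CancelOrderHandler(AppDbContext db) => _db = db;

        public async Task Handle(int orderId)
        {
            var order = await _db.Orders.SingleAsync(o => o.Id == orderId);
            order.Cancel(); // behavior lives on the write model
            await _db.SaveChangesAsync();
        }
    }

    // Query: reads state through the tailored view model, never mutates.
    public class GetOrderSummaryHandler
    {
        private readonly AppDbContext _db;
        public GetOrderSummaryHandler(AppDbContext db) => _db = db;

        public Task<OrderSummary> Handle(int orderId) =>
            _db.OrderSummaries.SingleAsync(s => s.OrderId == orderId);
    }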

Atomic

If using views isn’t an option and you’re using the same relational database for both reads and writes, another option is to wrap the entire operation in a transaction. This means the command’s changes to your write-side records and the corresponding changes to your query-side records happen within the same transaction.
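
A rough sketch of that idea with EF Core 2.x (the OrderSummaryRows table is a hypothetical denormalized read table; in a real handler the raw SQL would likely be hidden behind something more structured):

    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    public class CancelOrderAtomically
    {
        private readonly AppDbContext _db;
        public CancelOrderAtomically(AppDbContext db) => _db = db;

        public async Task Handle(int orderId)
        {
            // Write-model change and read-model update commit or roll back together.
            using (var tx = await _db.Database.BeginTransactionAsync())
            {
                var order = await _db.Orders.SingleAsync(o => o.Id == orderId);
                order.Cancel();
                await _db.SaveChangesAsync();

                // Keep the hypothetical denormalized read table in step,
                // inside the same transaction (interpolated values are parameterized).
                await _db.Database.ExecuteSqlCommandAsync(
                    $"UPDATE OrderSummaryRows SET Status = 'Cancelled' WHERE OrderId = {orderId}");

                tx.Commit();
            }
        }
    }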

I’ll elaborate on this, eventual consistency, event sourcing, and more in coming posts.

Fat Controller CQRS Diet

I’ve blogged a bit about how to implement CQRS without any of the other fluff. You can check out my Fat Controller CQRS Diet blog series as well as a related talk.

If you have any questions or comments, please let me know on Twitter, as I will focus my posts on those questions and comments.

Enjoy this post? Subscribe!

Subscribe to our weekly Newsletter and stay tuned.