DocumentDB Transactions from .NET

I received a comment on my Optimistic Concurrency in DocumentDB post a couple of weeks ago from Jerry Goyal:

Can we somehow handle the concurrency among multiple documents (transactions)?

Since ETags are defined on each document, you must build your concurrency around them.  However, this made me wonder how to update multiple documents at the same time using their respective ETags.  Meaning both documents should update together only if both of their ETags were current.  If one was valid and the other was out of date, neither document would update.

Transactions

It’s pretty obvious that I’m looking for transactions within DocumentDB.

DocumentDB supports language-integrated transactions via JavaScript stored procedures and triggers. All database operations inside scripts are executed under snapshot isolation, scoped to the collection if it is a single-partition collection, or to documents with the same partition key value within a partitioned collection.

There is no client-side transaction scope in the .NET SDK.  Meaning from the client (.NET), you cannot create a transaction scope.  Transactions have to be executed on the DocumentDB server via a stored procedure or trigger.

DocumentDB Emulator

There are several ways you can create a stored procedure.  One of which is via the DocumentDB Emulator web UI.

You can create a stored procedure under any given collection.

You can also create stored procedures via the .NET SDK using CreateStoredProcedureAsync.  However, for this demo, I’m going to create it via the emulator.

Stored Procedure

For demo purposes I wanted a way to do a bulk insert.  If any of the customers fail to be added to the collection because the Name is falsy, I don’t want any of them to be created.

Here is the stored procedure I created, called sp_bulkinsert.  Notice the getContext() method, which is available to all stored procedures.  Check out the JavaScript docs for more on the Context/Request/Response objects.
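Since the stored procedure body isn’t shown inline here, the following is a minimal sketch of what sp_bulkinsert might look like.  The validation and parameter shape are illustrative assumptions; getContext() and collection.createDocument() are supplied by the DocumentDB server-side JavaScript runtime.

```javascript
// Hedged sketch of a bulk-insert stored procedure. Throwing at any point
// aborts the procedure, and DocumentDB rolls back every document created
// during this invocation -- that is the all-or-nothing behavior we want.
function sp_bulkinsert(customers) {
  var context = getContext();
  var collection = context.getCollection();
  var response = context.getResponse();

  if (!customers || customers.length === 0) {
    throw new Error("customers array is required");
  }

  // Validate every customer up front; a falsy Name aborts the whole batch.
  customers.forEach(function (customer) {
    if (!customer.Name) {
      throw new Error("Customer Name is required");
    }
  });

  var created = 0;
  customers.forEach(function (customer) {
    var accepted = collection.createDocument(
      collection.getSelfLink(),
      customer,
      function (err) {
        if (err) throw err;
        created++;
        if (created === customers.length) {
          // Return the number of documents created.
          response.setBody(created);
        }
      });
    // If the request was not accepted (e.g. the script is out of time),
    // throw so the transaction rolls back rather than partially commit.
    if (!accepted) throw new Error("createDocument was not accepted");
  });
}
```

Because the whole script runs inside one implicit transaction, there is no explicit begin/commit: returning normally commits, and any unhandled exception rolls everything back.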

Client

Here are a couple tests using the .NET SDK which call the stored procedure passing in an array of new customers.

The first test passes, returning the number of created customer documents.  The second test fails because a customer name is empty.
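Since the test code isn’t shown inline here, this is a hedged sketch of what those xUnit tests might look like.  The database and collection names are illustrative assumptions; ExecuteStoredProcedureAsync and UriFactory come from the Microsoft.Azure.DocumentDB .NET SDK.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Xunit;

public class BulkInsertTests
{
    // Points at the local DocumentDB Emulator; the well-known emulator
    // master key is elided here.
    private readonly DocumentClient _client = new DocumentClient(
        new Uri("https://localhost:8081"), "<emulator-master-key>");

    [Fact]
    public async Task BulkInsert_Creates_All_Customers()
    {
        var customers = new[] { new { Name = "Alice" }, new { Name = "Bob" } };

        var response = await _client.ExecuteStoredProcedureAsync<int>(
            UriFactory.CreateStoredProcedureUri("demo", "customers", "sp_bulkinsert"),
            new object[] { customers });

        // The stored procedure returns the number of documents it created.
        Assert.Equal(2, response.Response);
    }

    [Fact]
    public async Task BulkInsert_Creates_Nothing_When_A_Name_Is_Empty()
    {
        var customers = new[] { new { Name = "Alice" }, new { Name = "" } };

        // The stored procedure throws, the transaction rolls back, and the
        // SDK surfaces it as a DocumentClientException.
        await Assert.ThrowsAsync<DocumentClientException>(() =>
            _client.ExecuteStoredProcedureAsync<int>(
                UriFactory.CreateStoredProcedureUri("demo", "customers", "sp_bulkinsert"),
                new object[] { customers }));
    }
}
```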


Demo Source Code

I’ve put together a small .NET Core sample with the xUnit tests from above. All the source code for this series is available on GitHub.

Are you using DocumentDB? I’d love to hear your experiences so far. Let me know on Twitter or in the comments.



Queries with Mediator and Command Patterns

I recently got a really great comment on my post about using Query Objects instead of Repositories.  Although that blog post is now two years old, I still use the same concepts today in my applications.

What I thought may be relevant is to elaborate more on why and when I use the mediator and command pattern for the query side.

It may seem obvious on the command side, but not really needed on the query side.

Here is a portion of the comment from Chris:

I’m struggling a bit to see the killer reason for using Query objects over repository. I can see the benefits of CQS but it’s this command pattern type implementation of the Q part I struggle with. I can see for Commands (mutators) the benefits of having separate handlers as you can use decorator pattern to wrap them with additional functionality e.g. Transaction, Audit etc.

However for queries I’m struggling to see why you wouldn’t just use a normal repository. Essentially your IQuery object defined what would be a method signature in the repository and the handler defines the implementation of the method. However at some point you have to compose the IQuery class to it’s handler either using a dependency injection framework or Mediator pattern as in your following blog.

Query Objects

The primary place I use query objects (mediator + command pattern) is when I want to be decoupled from an integration boundary.  Often this is between my application and the web framework I’m using.

I view my query objects as my public API.

This means I usually want to create a pipeline for those queries, handling things such as authorization, validation, and logging.

Since this is the same implementation I use on the command side, it’s easy to use with a library like MediatR, which handles both commands and queries as a “Request”.
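As a concrete sketch (the types and names here are illustrative, not from the original post), a query object and its handler with MediatR look something like this:

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// The query object is the public API: it names the question and its inputs.
public class GetCustomerByName : IRequest<CustomerDto>
{
    public string Name { get; set; }
}

public class CustomerDto
{
    public string Name { get; set; }
}

// The handler is the implementation, free to use whatever data access it
// wants internally (including a repository).
public class GetCustomerByNameHandler : IRequestHandler<GetCustomerByName, CustomerDto>
{
    public Task<CustomerDto> Handle(GetCustomerByName query, CancellationToken cancellationToken)
    {
        // Data access elided for the sketch.
        return Task.FromResult(new CustomerDto { Name = query.Name });
    }
}
```

Because queries go through IMediator.Send exactly like commands, the same pipeline behaviors (authorization, validation, logging) wrap both sides.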

Repositories

I still use repositories, just very differently than I did before.  Since my query objects are my public API, my repositories are generally only used internally within the core of a given application (or bounded context).

My repositories are usually pretty narrow and also handle things like caching.

CQRS

There are many ways to implement the separation between reads and writes.  I’ve failed at making this point clear in many of my prior posts.

I’ve had a few encounters recently that make me feel like there are still a lot of misconceptions about what people think CQRS is.  I’ll keep posting this quote from Greg Young:

CQRS is simply the creation of two objects where there was previously only one.

The separation occurs based upon whether the methods are a command or a query (the same definition that is used by Meyer in Command and Query Separation, a command is any method that mutates state and a query is any method that returns a value).

That’s all it is folks.  Not really that interesting.  What’s interesting are all the possibilities because of this simple idea.

How you implement that is up to you.
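In code, the idea really is as small as the quote suggests.  A hypothetical sketch, splitting one object into two along the command/query line:

```csharp
// Before: one object with both commands and queries.
public interface ICustomerService
{
    void CreateCustomer(string name);   // command: mutates state
    Customer GetCustomer(string name);  // query: returns a value
}

// After: two objects, split by whether each method is a command or a query.
public interface ICustomerWriteService
{
    void CreateCustomer(string name);
}

public interface ICustomerReadService
{
    Customer GetCustomer(string name);
}

public class Customer
{
    public string Name { get; set; }
}
```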

One of the ways I’ve been implementing this is with the Mediator + Command Patterns.  It’s not the only way!

Comments

How are you implementing the query side?  I always enjoy hearing your feedback, commands, and questions.  Please let me know on Twitter or in the comments.



Environment Variables in ASP.NET Core

In my last post, I covered how to handle sensitive configuration data by using User Secrets while working in development or on your local machine.  The next step is using environment variables in ASP.NET Core once you deploy to production (e.g., Azure App Service).

IHostingEnvironment

In your ASP.NET Core Startup class, the ctor has an IHostingEnvironment parameter.  One of the properties on it is the EnvironmentName.  Along with this are a few extension methods such as IsDevelopment(), IsStaging(), and IsProduction().

In the sample below, you can see that if the environment is Development, I add the User Secrets to the ConfigurationBuilder.
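Since the original snippet isn’t shown inline here, this is a hedged sketch of what that Startup ctor might look like (file names are illustrative); note that user secrets are added after environment variables, so they take precedence in development:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddEnvironmentVariables();

        // Only load User Secrets in development. Because they are added
        // after environment variables, secrets override any environment
        // variables with the same key.
        if (env.IsDevelopment())
        {
            builder.AddUserSecrets<Startup>();
        }

        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }
}
```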

What IHostingEnvironment.IsDevelopment() ultimately does is check whether IHostingEnvironment.EnvironmentName equals “Development”.

ASPNETCORE_ENVIRONMENT

How did IHostingEnvironment.EnvironmentName get set to Development?

If you are using Visual Studio, you can access this in the project properties.

If you are outside of Visual Studio, you can manage it by editing the launchSettings.json.  This is what a sample looks like.
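Since the sample isn’t shown inline here, a minimal launchSettings.json looks something like this (the profile name is illustrative):

```json
{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
```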

Environment Variables

You can also add additional environment variables that will be loaded when ConfigurationBuilder.AddEnvironmentVariables() is called in Startup.

In my above example, user secrets are loaded only under development.  This is done after environment variables are loaded.  This means that the user secrets override any environment variables I’ve set.

Azure AppService

Once you are ready to deploy to Azure, you may want to set your environment as well as any other environment variables you need, which will be used instead of user secrets.  You can do this in your App Service under Application Settings.


I always enjoy hearing your feedback.  Please let me know on Twitter or in the comments.