Paging DocumentDB Query Results from .NET

I received a comment on my DocumentDB Transactions from .NET post from Onur:

The thing I don’t like about documentdb is that it doesn’t support aggregations nor paging.

This comment was actually pretty well timed, as I had just run into a situation on a side project that required paging through a result set.

Skip & Take

It does not appear that DocumentDB supports SKIP and TAKE (yet), which is what you would normally reach for to do paging.  Because of this, you can’t implement paging exactly the way you might expect.

But there is a way to limit the number of records returned by a query and then have the next query pick up where the previous one left off.  This means you can continue forward through your result set.

Continuations

DocumentDB has the concept of a continuation token, which is returned from a query that limits the number of records returned.  When using CreateDocumentQuery&lt;T&gt; from the .NET SDK, you can specify an additional FeedOptions parameter.

FeedOptions

Here’s an example of using FeedOptions to limit the number of results:
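Something along these lines, assuming a DocumentClient named client, a simple Customer class, and a CustomerDb/Customers collection (those names are just placeholders for the sketch):

    // Requires Microsoft.Azure.Documents.Client and Microsoft.Azure.Documents.Linq
    // Limit each query to 10 documents per page
    var options = new FeedOptions { MaxItemCount = 10 };

    var query = client.CreateDocumentQuery<Customer>(
            UriFactory.CreateDocumentCollectionUri("CustomerDb", "Customers"),
            options)
        .AsDocumentQuery();

    // ExecuteNextAsync returns one "page" of at most MaxItemCount results
    var page = await query.ExecuteNextAsync<Customer>();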

RequestContinuation

FeedOptions also has a property for specifying the RequestContinuation.  If your original query specified a MaxItemCount and the query results exceed that count, then the response will contain a ResponseContinuation token you can pass to the next query.

Combine these two and you can do forward-only paging.  Here is an xUnit test to outline the flow:
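Roughly something like this, assuming the local DocumentDB Emulator (EmulatorAuthKey stands in for the emulator’s well-known key) and a Customers collection that already contains documents:

    // Requires Microsoft.Azure.Documents.Client, Microsoft.Azure.Documents.Linq and Xunit
    [Fact]
    public async Task Can_Page_Forward_Using_Continuations()
    {
        var client = new DocumentClient(new Uri("https://localhost:8081"), EmulatorAuthKey);
        var collectionUri = UriFactory.CreateDocumentCollectionUri("CustomerDb", "Customers");

        string continuation = null;
        var customers = new List<Customer>();

        do
        {
            // Pass the continuation from the previous page (null on the first query)
            var options = new FeedOptions
            {
                MaxItemCount = 10,
                RequestContinuation = continuation
            };

            var query = client.CreateDocumentQuery<Customer>(collectionUri, options)
                .AsDocumentQuery();

            var page = await query.ExecuteNextAsync<Customer>();
            customers.AddRange(page);

            // ResponseContinuation is null once the last page has been read
            continuation = page.ResponseContinuation;
        } while (continuation != null);

        Assert.NotEmpty(customers);
    }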

 

 

Demo Source Code

I’ve put together a small .NET Core sample with the xUnit test from above. All the source code for this series is available on GitHub.

Are you using DocumentDB? I’d love to hear your experiences so far. Let me know on Twitter or in the comments.


 

DocumentDB Transactions from .NET

I received a comment on my Optimistic Concurrency in DocumentDB post a couple weeks ago from Jerry Goyal:

Can we somehow handle the concurrency among multiple documents (transactions)?

Since ETags are defined per document, you must build your concurrency around them.  However, this made me start to wonder how to update multiple documents at the same time using their respective ETags.  Meaning you would want both documents to update together only if both of their ETags were current.  If one was valid and the other was out of date, neither document should be updated.

Transactions

It’s pretty obvious that I’m looking for transactions within DocumentDB.

DocumentDB supports language-integrated transactions via JavaScript stored procedures and triggers. All database operations inside scripts are executed under snapshot isolation scoped to the collection if it is a single-partition collection, or documents with the same partition key value within a collection, if the collection is partitioned.

There is no client transaction scope you can use with the .NET SDK.  Meaning from the client (.NET) you cannot create a transaction scope; it has to be done on the DocumentDB server via a stored procedure or trigger.

DocumentDB Emulator

There are several ways you can create a stored procedure, one of which is via the DocumentDB Emulator web UI.

You can create a stored procedure under any given collection.

You can also create stored procedures via the .NET SDK using CreateStoredProcedureAsync.  However, for this demo I’m going to be creating it via the emulator.
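If you do go the SDK route, the call looks roughly like this (the database/collection names and the script file path are placeholders for the sketch):

    // Register the stored procedure against the collection from the .NET SDK
    var collectionUri = UriFactory.CreateDocumentCollectionUri("CustomerDb", "Customers");

    await client.CreateStoredProcedureAsync(collectionUri, new StoredProcedure
    {
        Id = "sp_bulkinsert",
        Body = File.ReadAllText("sp_bulkinsert.js")
    });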

Stored Procedure

For demo purposes I wanted a way to do a bulk insert.  If any of the customers fail to be added to the collection, due to the Name being falsy, I don’t want any of them to be created.

Here is the stored procedure I created, called sp_bulkinsert.  Notice the getContext() method, which is available to server-side scripts.  Check out the JavaScript docs for more on the Context/Request/Response objects.
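In rough form it looks something like this (exact details may differ from the version in the repo); the key point is that throwing aborts the script, and with it the transaction, so either every customer is created or none are:

    function sp_bulkinsert(customers) {
        var context = getContext();
        var collection = context.getCollection();
        var response = context.getResponse();

        if (!customers || customers.length === 0) throw new Error("No customers supplied.");

        var created = 0;

        for (var i = 0; i < customers.length; i++) {
            // A falsy Name aborts the whole script, and with it the transaction
            if (!customers[i].Name) throw new Error("Customer Name is required.");

            var accepted = collection.createDocument(collection.getSelfLink(), customers[i],
                function (err) {
                    if (err) throw err;
                    created++;
                    if (created === customers.length) response.setBody(created);
                });

            if (!accepted) throw new Error("The create request was not accepted.");
        }
    }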

Client

Here are a couple of tests using the .NET SDK which call the stored procedure, passing in an array of new customers.
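They look roughly like this, where Customer and the database/collection names are placeholders and client is a DocumentClient pointed at the emulator:

    // Requires Microsoft.Azure.Documents, Microsoft.Azure.Documents.Client and Xunit
    [Fact]
    public async Task BulkInsert_Creates_All_Customers()
    {
        var sprocUri = UriFactory.CreateStoredProcedureUri("CustomerDb", "Customers", "sp_bulkinsert");
        var customers = new[]
        {
            new Customer { Name = "Derek" },
            new Customer { Name = "Onur" }
        };

        var result = await client.ExecuteStoredProcedureAsync<int>(sprocUri, new object[] { customers });

        Assert.Equal(2, result.Response);
    }

    [Fact]
    public async Task BulkInsert_Creates_Nothing_When_A_Name_Is_Blank()
    {
        var sprocUri = UriFactory.CreateStoredProcedureUri("CustomerDb", "Customers", "sp_bulkinsert");
        var customers = new[]
        {
            new Customer { Name = "Derek" },
            new Customer { Name = "" } // blank name causes the stored procedure to throw
        };

        // The server-side error surfaces as a DocumentClientException on the client
        await Assert.ThrowsAnyAsync<DocumentClientException>(() =>
            client.ExecuteStoredProcedureAsync<int>(sprocUri, new object[] { customers }));
    }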

The first test passes, returning the number of created customer documents.  The second test fails because the customer name is blank.

 

Demo Source Code

I’ve put together a small .NET Core sample with the xUnit tests from above. All the source code for this series is available on GitHub.

Are you using DocumentDB? I’d love to hear your experiences so far. Let me know on Twitter or in the comments.


 

Sensitive Configuration Data in ASP.NET Core

While working on my new side-project in ASP.NET Core, I was at the point where I needed to start storing sensitive configuration data.

Things like my DocumentDB Auth Key, Google OAuth ClientId & Secret, Twilio Auth Token, etc.

Depending on your context you may not want to be storing these types of application settings in configuration files that are committed to source control.

As you would expect, my local development environment and my deployed environment in Azure have completely different configurations.

So how do you change the configuration from local to production?

If you are used to using appSettings in app.config and web.config, you may have gone down the road of transforming and replacing values with tools like SlowCheetah or through Octopus Deploy.

With ASP.NET Core configuration, there’s a new way to do this.

Configuration

ASP.NET Core has a ConfigurationBuilder that enables you to load application settings from various places.  If you’ve loaded up a brand new ASP.NET Core app, your Startup looks something like this.
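Here’s the general shape, from the project.json-era template:

    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();

            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        // ConfigureServices and Configure omitted
    }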

You have a new appsettings.json file you can use, as well as an optional appsettings.{Environment}.json file for each environment.

I could use appsettings.Development.json for local development; however, as mentioned, I want to keep sensitive data out of this file.

For this, I’ve now been using User Secrets.

User Secrets

ASP.NET Core adds a new way to store your sensitive data with User Secrets.

User Secrets are simply a plain-text JSON file that is stored in your system’s user profile directory, meaning the file is stored outside of your project directory.

There are a couple of ways to manage your user secrets.  First, you need to add Microsoft.Extensions.SecretManager.Tools to the tools section of your project.json.

I’m unaware of how this translates to the new .csproj format.

If you are using the dotnet CLI, you should now be able to run dotnet user-secrets --version.

Next, in your project.json you want to add a “userSecretsId” property at the root.  The value can be anything you want, but you should keep it unique so it doesn’t collide with the user secrets of any other project.

In order to load the secrets file, you first need to add the Microsoft.Extensions.Configuration.UserSecrets package to your project.json.
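Putting those pieces together, the relevant parts of project.json end up looking something like this (the package versions and the userSecretsId value are just illustrative):

    {
      "userSecretsId": "my-side-project-user-secrets-id",

      "dependencies": {
        "Microsoft.Extensions.Configuration.UserSecrets": "1.1.0"
      },

      "tools": {
        "Microsoft.Extensions.SecretManager.Tools": "1.1.0-preview4-final"
      }
    }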

As mentioned, since I only use user secrets for local development, we can now load the secrets using the ConfigurationBuilder in our Startup.
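For example, guarding the call so secrets are only pulled in when running in the Development environment:

    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

        if (env.IsDevelopment())
        {
            // Pull in the user secrets file only for local development
            builder.AddUserSecrets();
        }

        builder.AddEnvironmentVariables();
        Configuration = builder.Build();
    }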

Visual Studio

If you are using Visual Studio, you can edit your user secrets file from the context menu of your project in Solution Explorer.

CLI

If you are using the dotnet CLI then you can call dotnet user-secrets [options] [command] to clear, list, remove and set secrets.

Example

A simple example would be storing the connection string to a database.

 dotnet user-secrets set ConnStr "User ID=Derek;Password=CodeOpinion;"

Now within our code we can access the connection string after building our configuration with the ConfigurationBuilder.
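For example, assuming the Configuration property built in Startup above:

    // "ConnStr" matches the key set via dotnet user-secrets above
    var connectionString = Configuration["ConnStr"];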

Production

In the next post I’ll take a look at how you can use environment variables, and specifically how to set them when deploying as an App Service within Azure.

I always enjoy hearing your feedback.  Please let me know on Twitter or in the comments.