Microsoft Orleans Tutorial: Grains and Silos

This is the first post in the series where I’m actually getting into some code.  You will be able to follow along on the journey of creating a practical web application using Microsoft Orleans.  This first post is really just setting the ground level to get familiar with the basics of Grains and Silos.  By the end of this post I’ll have a simple demo app: a functioning ASP.NET Core application as our client/frontend, with an Orleans Silo hosting our Grains.

Blog Post Series:

Grains and Silos

For this blog post and throughout this series I’m going to be creating a web application with ASP.NET Core.  Our ASP.NET Core application will be our frontend, and Orleans will sit as a stateful middle tier in front of our data storage.

The two pieces we need to get started with Orleans are Grains and Silos.  First let’s take a look at what Grains are.

Grains

Grains are the key primitives of the Orleans programming model. Grains are the building blocks of an Orleans application: they are atomic units of isolation, distribution, and persistence. Grains are objects that represent application entities. Just like in classic object-oriented programming, a grain encapsulates the state of an entity and encodes its behavior in code. Grains can hold references to each other and interact by invoking each other’s methods exposed via interfaces.

One other interesting note about grains is that they are automatically managed by Orleans.  You don’t have to worry about instantiating or managing them.  Orleans terms them virtual actors, as opposed to traditional actors.

Silos

Where do Grains live? Silos of course.

Silos are what host and execute Grains.  Again, Grains are your objects that expose behavior and encapsulate state.  Orleans creates your grains in the Silo and executes them there.  Your client code will reference only the interfaces that your grains implement.  Don’t worry, sample code is coming up.

Oh and of course, you can also have a cluster of silos.

Code

Now that we know the basics, let’s create a really simple demo just to get our feet wet and get a fully functioning app running.

Note: Remember to BUILD once you create any project, as there is some codegen that occurs.  Do this before you start freaking out about missing types.

Grains Project

First we are going to create a project called Grains for our grains.  Here’s a netstandard2.0 project with references to a couple of Orleans packages.  Note: as of this blog post, I’m using 2.0.0-beta2, which supports .NET Standard 2.0.
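The project file was originally embedded as a gist; here is a minimal sketch of what it might look like. The package names and versions reflect the 2.0.0-beta2 timeframe and may differ from what the original used:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Core Orleans abstractions plus the build-time code generator
         that produces the serializers and grain references. -->
    <PackageReference Include="Microsoft.Orleans.Core" Version="2.0.0-beta2" />
    <PackageReference Include="Microsoft.Orleans.OrleansCodeGenerator.Build" Version="2.0.0-beta2" />
  </ItemGroup>
</Project>
```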

Now we can create our first Grain.  I’m going to create a grain that simply increments a number within the grain.  Yes, that simple.
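The grain itself was shown as a gist; a sketch of what such a counter grain might look like (the `ICounterGrain` and `CounterGrain` names are my own, not from the original):

```csharp
using System.Threading.Tasks;
using Orleans;

// Grain interfaces expose only Task-returning methods, since every
// call into a grain is asynchronous.
public interface ICounterGrain : IGrainWithIntegerKey
{
    Task<int> GetCount();
    Task<int> Increment();
}

// The implementation holds its count as plain in-memory state.
// Orleans guarantees single-threaded execution per grain activation,
// so no locking is needed.
public class CounterGrain : Grain, ICounterGrain
{
    private int _count;

    public Task<int> GetCount() => Task.FromResult(_count);

    public Task<int> Increment() => Task.FromResult(++_count);
}
```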

Silo Project

Now that we have a grain, we need a Silo.  Create a .NET Core 2.0 console application called Silo.  Here’s the csproj with the relevant NuGet packages, as well as a reference to our Grains project.

As you can see, we also have an OrleansConfiguration.xml that needs to be included in the output.  This is the configuration file we will use for hosting our Silo.

Lastly, we need to start our Silo in our Program.cs.
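The startup code was embedded as a gist; here is a sketch of roughly what it looked like with the 2.0-beta hosting API. The hosting surface changed between the betas and again for the 2.0 RTM, so treat this as illustrative rather than exact:

```csharp
using System;
using Orleans.Hosting;
using Orleans.Runtime.Configuration;

public class Program
{
    public static void Main(string[] args)
    {
        // Load the XML configuration file that gets copied to the output
        // directory, then build and start the silo.
        var config = new ClusterConfiguration();
        config.LoadFromFile("OrleansConfiguration.xml");

        var silo = new SiloHostBuilder()
            .UseConfiguration(config)
            .Build();

        silo.StartAsync().Wait();

        Console.WriteLine("Silo started. Press Enter to terminate...");
        Console.ReadLine();

        silo.StopAsync().Wait();
    }
}
```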

ASP.NET Core

Now let’s create a client that will interact with our Grains.  Create a new ASP.NET Core application targeting .NET Core 2.0, called Web.

I’ve got a couple of other dependencies referenced: Botwin and Polly.

Polly is a fault-handling library, and Botwin is middleware for ASP.NET Core that allows Nancy-style routing.

The reason we want to use Polly is for our initial connection to the Silo.  Since we might be running multiple projects from Visual Studio/Rider/Code/whatever, this helps because the Silo might not be ready by the time our ASP.NET Core app fires up.

Here is our Startup.cs that creates a singleton of the Orleans IClusterClient.  It is thread-safe and intended to be used for the life of the application.
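The Startup code was a gist as well; here is a sketch of the idea, using Polly to retry the initial cluster connection. The `ClientConfiguration`/`ClientBuilder` calls reflect the 2.0-beta client API and may differ from the original:

```csharp
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Orleans;
using Orleans.Runtime.Configuration;
using Polly;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddBotwin();

        // Build and connect the cluster client once, retrying with Polly
        // in case the Silo isn't up yet, then register it as a singleton.
        // IClusterClient is thread-safe and meant to live for the lifetime
        // of the application.
        var client = Policy
            .Handle<Exception>()
            .WaitAndRetry(5, attempt => TimeSpan.FromSeconds(attempt))
            .Execute(() =>
            {
                var clusterClient = new ClientBuilder()
                    .UseConfiguration(ClientConfiguration.LocalhostSilo())
                    .Build();
                clusterClient.Connect().Wait();
                return clusterClient;
            });

        services.AddSingleton(client);
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseBotwin();
    }
}
```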

Now we can create a Botwin module to handle two different endpoints.  A GET request will return what our current count is, and a POST will increment our count by 1.

We simply inject our IClusterClient, which is used to call GetGrain&lt;T&gt;.  Once we have our grain, we can call the relevant methods defined on the grain interface.
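The module itself was a gist; a sketch of what it might look like, assuming a grain interface like `ICounterGrain` with `GetCount`/`Increment` methods (the route paths and interface name are my own):

```csharp
using Botwin;
using Microsoft.AspNetCore.Http;
using Orleans;

public class CounterModule : BotwinModule
{
    // Botwin modules resolve their dependencies from the ASP.NET Core
    // container, so the singleton IClusterClient is injected here.
    public CounterModule(IClusterClient client)
    {
        // GET returns the current count.
        Get("/count", async (req, res, routeData) =>
        {
            var grain = client.GetGrain<ICounterGrain>(0);
            var count = await grain.GetCount();
            await res.WriteAsync(count.ToString());
        });

        // POST increments the count by 1 and returns the new value.
        Post("/count", async (req, res, routeData) =>
        {
            var grain = client.GetGrain<ICounterGrain>(0);
            var count = await grain.Increment();
            await res.WriteAsync(count.ToString());
        });
    }
}
```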

Done!

That is a fully functional, yet incredibly simple demo using Orleans on .NET Core and ASP.NET Core.  Stay tuned for more as we legitimately start building an app.

If you want to try the demo, all the source is available on my PracticalOrleans GitHub Repo.

Do you have any questions or comments? Are you using Orleans?  I’d love to hear about it in the comments or on Twitter.

Eventual Consistency and Business Alignment

I recently discovered through eventual consistency that my bounded contexts were not properly aligned with the business.  I won’t lie, it took me quite a while to make this realization.

This misalignment was most likely present in many situations I’ve had in the past.  Because of this realization, I wanted to let out some of my thoughts about eventual consistency and business alignment.

Dependent Bounded Context

I’ve often encountered situations where a bounded context requires information that another bounded context is responsible for.  I’d like to use a simple example I’ve heard from Udi Dahan, in the context of an e-commerce site.

  • A customer can be defined as a “preferred” customer.
  • Preferred customers receive a 10% discount on all orders.

Based on the above, the “preferred” flag and any business rules associated with it most likely exist in some sort of CRM bounded context.  However, this detail is required in the Sales bounded context in order to apply a discount if eligible.

As you can see, there is information that needs to be shared between bounded contexts.

Publish / Subscribe Domain Events

One approach for decoupling your bounded contexts is to publish domain events from your domain model.  This allows other bounded contexts to subscribe to those events and handle them accordingly.

Let’s use our example above to see how this would be implemented.  In our CRM bounded context, when a customer is defined as preferred in our domain model, we would publish a CustomerIsPreferred event.

class CustomerIsPreferred
{
	public Guid CustomerId { get; private set; }
	public DateTime Date { get; private set; }
	
	public CustomerIsPreferred(Guid customerId, DateTime date)
	{
		CustomerId = customerId;
		Date = date;
	}
}

In our Sales bounded context, we would subscribe to this event and update our customer model with a preferred flag. This piece of information is used as a local cache in our Sales bounded context.
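As a sketch, the Sales-side subscriber might look something like this. The `IHandleMessages` interface and `ICustomerRepository` are stand-ins for whatever messaging and persistence infrastructure you use, not part of the original post:

```csharp
using System.Threading.Tasks;

// Sales bounded context: handle the event published by CRM and update
// our local copy of the customer.  The preferred flag here is a cache;
// CRM remains the owner of that piece of data.
public class CustomerIsPreferredHandler : IHandleMessages<CustomerIsPreferred>
{
    private readonly ICustomerRepository _customers;

    public CustomerIsPreferredHandler(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public async Task Handle(CustomerIsPreferred message)
    {
        var customer = await _customers.Get(message.CustomerId);
        customer.MarkAsPreferred(message.Date);
        await _customers.Save(customer);
    }
}
```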

During our checkout process in Sales, we would then use the preferred flag on the concept of a customer in Sales to determine if they should receive a 10% discount.

However, remember that this preferred flag is not owned by Sales.

Because of the publish/subscribe model (assuming asynchronicity), at any given time our preferred flag in Sales could be out of sync with the current state in the CRM bounded context. Eventual consistency doesn’t mean our data is wrong, it just means it is stale.

Business Alignment

There are many situations where data being eventually consistent is totally acceptable.  In the real world, we make decisions with stale data all the time.

However, there are times where full consistency is required.  When describing the example above to the business, does the eventual consistency of the preferred flag have true business impact?  If it truly does matter and the data must be fully consistent, then you may have bad business alignment with your bounded contexts.

Re-evaluate your bounded contexts and their boundaries, as you may have a wrong interpretation of responsibilities.

I’ve found that drawing a context map, along with the events that are published and subscribed to, together with a domain expert should flush out any of these incorrect interpretations and help you re-align boundaries and responsibilities.

Query Objects with a Mediator

Mediator

In my previous blog Query Objects instead of Repositories, I demonstrated creating query objects and handlers to encapsulate and execute query logic instead of polluting a repository with both read and write methods.  Since we have moved away from repositories and are now using query objects, we will introduce the Mediator pattern. It allows us to have a common interface that can be injected into our controllers or various parts of our application. The mediator will delegate our query objects to the appropriate handler, which will perform the query and return the results.

First we will create an interface that will be used on all of our query objects.

public interface IQuery<out TResponse> { }

Now we need to create an interface that all of our query handlers will implement.

public interface IHandleQueries<in TQuery, out TResponse>
	where TQuery : IQuery<TResponse>
{
	TResponse Handle(TQuery query);
}

Next we will create our Mediator interface. Most examples you will see that implement command handlers show an IFakeBus or something similar. The difference is that in the Bus implementation there is generally no return type. On the query side, our intent is to return data.

public interface IMediate
{
	TResponse Request<TResponse>(IQuery<TResponse> query);
}

There are many ways you can implement your mediator. As an example:

public class Mediator : IMediate
{
	public delegate object Creator(Mediator container);

	private readonly Dictionary<Type, Creator> _typeToCreator = new Dictionary<Type, Creator>();

	public void Register<T>(Creator creator)
	{
		_typeToCreator.Add(typeof(T), creator);
	}

	public TResponse Request<TResponse>(IQuery<TResponse> query)
	{
		// Look up the handler registered for the concrete query type and
		// dispatch dynamically.  A direct cast to
		// IHandleQueries<IQuery<TResponse>, TResponse> would fail at runtime:
		// TQuery is contravariant, so a handler of a concrete query type is
		// not assignable to a handler of IQuery<TResponse>.
		dynamic handler = _typeToCreator[query.GetType()](this);
		return handler.Handle((dynamic)query);
	}
 }

Now that we have our interfaces and mediator implementation, we need to modify our existing queries and handlers.

public class ProductDetailsQuery : IQuery<ProductDetailModel>
{
	public Guid ProductId { get; private set; }

	public ProductDetailsQuery(Guid productId)
	{
		ProductId = productId;
	}
}

public class ProductDetailQueryHandler : IHandleQueries<ProductDetailsQuery, ProductDetailModel>
{
	private DbContext _db;
 
	public ProductDetailQueryHandler(DbContext db)
	{
		_db = db;
	}
 
	public ProductDetailModel Handle(ProductDetailsQuery query)
	{
		var product = (from p in _db.Products where p.ProductId == query.ProductId select p).SingleOrDefault();
		if (product == null) {
			throw new InvalidOperationException("Product does not exist.");
		}
 
 		var relatedProducts = (from p in _db.RecommendedProducts where p.PrimaryProductId == query.ProductId select p);
 
 		return new ProductDetailModel
		{
			Id = product.Id,
			Name = product.Name,
			Price = product.Price,
			PriceFormatted = product.Price.ToString("C"),
			RecommendedProducts = (from x in relatedProducts select new ProductDetailModel.RecommendedProduct {
				ProductId = x.RecommendedProductId,
				Name = x.Name,
				Price = x.Price,
				PriceFormatted = x.Price.ToString("C")
			}).ToList()
		};
	}
 }
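Registration of handlers isn’t shown above; one possible wiring, assuming each handler is registered against its concrete query type (`db` and `productId` are placeholders for your DbContext instance and a real product id):

```csharp
// Composition root: map each query type to a factory for its handler.
var mediator = new Mediator();
mediator.Register<ProductDetailsQuery>(m => new ProductDetailQueryHandler(db));

// Callers only see IMediate; the mediator finds the right handler.
IMediate mediate = mediator;
var model = mediate.Request(new ProductDetailsQuery(productId));
```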

Now in our controller, instead of creating a new instance of the query handler in the controller or having all of them injected into the constructor, we simply inject the mediator.

public class ProductController : Controller
{
	private IMediate _mediator;
	
	public ProductController(IMediate mediator)
	{
		_mediator = mediator;
	}
	
	public ViewResult ProductDetails(ProductDetailsQuery query)
	{
		var model = _mediator.Request(query);
		return View(model);
	}
}

As before, we have encapsulated the generation of our view model into its own object, but now we have a common interface in a mediator to handle the incoming query object requests.