My F# Journey – What I’ve learned so far

This is the first blog post in a series documenting my experiences while learning functional programming. This is my F# Journey.

During my 15-year career, which started in the late 90s, I have not made many conscious decisions about learning a specific language or technology.  The path I've taken and the experience I've gained seem to have just happened naturally.

As someone who always wants to learn, I usually find an interesting topic and start going down the rabbit hole (Domain Driven Design, CQRS, Event Sourcing… thanks Greg Young); however, I rarely set out on an "I'm going to learn X" journey.

That’s about to change.

As anyone who keeps up with the latest trends knows, functional programming is all the rage, and I believe for good reason (more on that later).  That's why I've decided to take on this learning journey with F#.  I've chosen F# because of my .NET/C# background, and I feel that staying in the .NET ecosystem can help my career.

What I’ve learned so far…

Stop comparing it to C#


It's natural, when looking at an unfamiliar imperative language, to compare it to the language you know best.  The concepts are all transferable.  How do I perform a loop?  How do I define a variable?  How do I assign a variable?

With a quick search and a few examples, you are off to the races writing basic code in a new imperative language.  With F#, stop trying to compare concepts.

Let it go!

Open up your mind to new ideas and try to forget everything you know.

Think like a beginner.

Just because you can, doesn’t mean you should

Just because F# is a hybrid language that supports some imperative concepts doesn't mean you should use them.  In pure functional languages, there are no loops or objects.

Let it go!

Just because you can use F# in a non-functional way, doesn’t mean you should (especially while learning).

When reading intro articles, you will see the same statements over and over: "start thinking functionally" or "start thinking differently".  It's hard at first to grasp what this really means.  Once you finally let go of the imperative way of thinking, you will get an "AH HA!" moment.

Read & Play

Any time I've learned a new language, it has been through practical use in a small app.  However, I've found that learning the basics of F#, understanding F# types, and thinking like a beginner before jumping into real code has been helpful.  I'm using Visual Studio and writing code, but more as a playground than an attempt to actually write an app.  Once I feel comfortable enough and feel like I fully "get it", I'm going to start writing a simple app.


Query Objects with a Mediator


In my previous blog post, Query Objects instead of Repositories, I demonstrated creating query objects and handlers to encapsulate and execute query logic instead of polluting a repository with both read and write methods.  Since we have moved away from repositories and are now using query objects, we will introduce the Mediator pattern. It gives us a common interface that can be injected into our controllers or various other parts of our application. The mediator delegates each query object to the appropriate handler, which performs the query and returns the results.

First we will create an interface that will be used on all of our query objects.

public interface IQuery<out TResponse> { }

Now we need to create an interface that all of our query handlers will implement.

public interface IHandleQueries<in TQuery, out TResponse>
	where TQuery : IQuery<TResponse>
{
	TResponse Handle(TQuery query);
}

Next we will create our Mediator interface. Most examples you will see implementing command handlers show an IFakeBus or something similar. The difference is that a bus implementation generally has no return type, whereas on the query side our intent is to return data.

public interface IMediate
{
	TResponse Request<TResponse>(IQuery<TResponse> query);
}

There are many ways you can implement your mediator. As an example:

public class Mediator : IMediate
{
	public delegate object Creator(Mediator container);

	private readonly Dictionary<Type, Creator> _typeToCreator = new Dictionary<Type, Creator>();

	// Register a handler factory, keyed by the concrete query type it handles.
	public void Register<TQuery>(Creator creator)
	{
		_typeToCreator.Add(typeof(TQuery), creator);
	}

	public TResponse Request<TResponse>(IQuery<TResponse> query)
	{
		// Look up the handler by the runtime type of the query and dispatch
		// dynamically, since each handler implements IHandleQueries<TQuery, TResponse>
		// for a concrete query type rather than for IQuery<TResponse> itself.
		dynamic handler = _typeToCreator[query.GetType()](this);
		return handler.Handle((dynamic)query);
	}
}

Now that we have our interfaces and mediator implementation, we need to modify our existing queries and handlers.

public class ProductDetailsQuery : IQuery<ProductDetailModel>
{
	public Guid ProductId { get; private set; }

	public ProductDetailsQuery(Guid productId)
	{
		ProductId = productId;
	}
}

public class ProductDetailQueryHandler : IHandleQueries<ProductDetailsQuery, ProductDetailModel>
{
	private DbContext _db;
 
	public ProductDetailQueryHandler(DbContext db)
	{
		_db = db;
	}
 
	public ProductDetailModel Handle(ProductDetailsQuery query)
	{
		var product = (from p in _db.Products where p.ProductId == query.ProductId select p).SingleOrDefault();
		if (product == null) {
			throw new InvalidOperationException("Product does not exist.");
		}

		var relatedProducts = from p in _db.RecommendedProducts where p.PrimaryProductId == query.ProductId select p;

		return new ProductDetailModel
		{
			Id = product.Id,
			Name = product.Name,
			Price = product.Price,
			PriceFormatted = product.Price.ToString("C"),
			RecommendedProducts = (from x in relatedProducts select new ProductDetailModel.RecommendedProduct {
				ProductId = x.RecommendedProductId,
				Name = x.Name,
				Price = x.Price,
				PriceFormatted = x.Price.ToString("C")
			})
		};
	}
 }
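
For reference, the ProductDetailModel view model projected above is never defined in these posts. A minimal sketch of its assumed shape (the nested type is named RecommendedProduct, singular, so it doesn't collide with the collection property):

public class ProductDetailModel
{
	public Guid Id { get; set; }
	public string Name { get; set; }
	public decimal Price { get; set; }
	public string PriceFormatted { get; set; }
	public IEnumerable<RecommendedProduct> RecommendedProducts { get; set; }

	// Projection of a related/recommended product for display.
	public class RecommendedProduct
	{
		public Guid ProductId { get; set; }
		public string Name { get; set; }
		public decimal Price { get; set; }
		public string PriceFormatted { get; set; }
	}
}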

Now in our controller, instead of creating a new instance of each query handler or having them all injected into the constructor, we simply inject the mediator.

public class ProductController : Controller
{
	private IMediate _mediator;
	
	public ProductController(IMediate mediator)
	{
		_mediator = mediator;
	}
	
	public ViewResult ProductDetails(ProductDetailsQuery query)
	{
		var model = _mediator.Request(query);
		return View(model);
	}
}

As before, we have encapsulated the generation of our view model in its own object, but now we have a common interface, the mediator, to handle incoming query object requests.
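
To tie everything together, here is a minimal sketch of how the mediator might be wired up at the composition root. The original registration code isn't shown in this post, so the exact shape of this wiring is an assumption:

// Composition root: register a handler factory per concrete query type.
// (db is assumed to be your DbContext instance.)
var mediator = new Mediator();
mediator.Register<ProductDetailsQuery>(m => new ProductDetailQueryHandler(db));

// Anywhere the mediator is injected, dispatching is uniform:
var model = mediator.Request(new ProductDetailsQuery(productId));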


Query Objects instead of Repositories

The repository pattern is often used to encapsulate simple and sometimes rather complex query logic.  However, it has also been morphed into handling persistence and is often used as another layer of abstraction over your data mapping layer.  This blog post shows you how to slim down and simplify your repositories, or possibly eliminate them altogether, by using query objects.

A typical repository will look something like this:

public interface IProductRepository
{
	void Insert(Product product);
	void Delete(Product product);
	Product GetById(Guid id);
	IEnumerable<Product> GetAllActive();
	IEnumerable<Product> FindByName(string name);
	IEnumerable<Product> FindBySku(string sku);
	IEnumerable<Product> Find(string keyword, int limit, int page);
	IEnumerable<Product> GetRelated(Guid id);
}

Each of the Get/Find methods above encapsulates a specific query.  This type of repository would most likely be used to transform the data returned from a set of methods into a view model, which would then be passed to our view or serialized and sent back to the caller (browser).  As an example of passing the model to a view in ASP.NET MVC, it would look something like this:

public ViewResult ProductDetails(Guid productId)
{
	var product = _productRepository.GetById(productId);
	var relatedProducts = _productRepository.GetRelated(productId);
	
	var model = new ProductDetailModel
	{
		Id = product.Id,
		Name = product.Name,
		Price = product.Price,
		PriceFormatted = product.Price.ToString("C"),
		RecommendedProducts = (from x in relatedProducts select new ProductDetailModel.RecommendedProduct {
				ProductId = x.RecommendedProductId,
				Name = x.Name,
				Price = x.Price,
				PriceFormatted = x.Price.ToString("C")
			})
	};

	return View(model);
}

As I've mentioned before, I believe you should think of your MVC framework as an HTTP interface to your application.  Regardless of whether you are returning HTML or JSON, the generation of your view model should not be coupled to your MVC framework.

Query Objects

In the example above, I want to extract the generation of my view model into a query object.  A query object is similar to a command object for processing behavior in our domain.  Our query object will now look like this:

public class ProductDetailsQuery
{
	public Guid ProductId { get; private set; }
	
	public ProductDetailsQuery(Guid productId)
	{
		ProductId = productId;
	}
}

In order to execute our query, we will implement a query handler.

public class ProductDetailQueryHandler
{
	private DbContext _db;
	
	public ProductDetailQueryHandler(DbContext db)
	{
		_db = db;
	}
	
	public ProductDetailModel Handle(ProductDetailsQuery query)
	{
		var product = (from p in _db.Products where p.ProductId == query.ProductId select p).SingleOrDefault();
		if (product == null) {
			throw new InvalidOperationException("Product does not exist.");
		}
		
		var relatedProducts = from p in _db.RecommendedProducts where p.PrimaryProductId == query.ProductId select p;
		
		return new ProductDetailModel
		{
			Id = product.Id,
			Name = product.Name,
			Price = product.Price,
			PriceFormatted = product.Price.ToString("C"),
			RecommendedProducts = (from x in relatedProducts select new ProductDetailModel.RecommendedProduct {
				ProductId = x.RecommendedProductId,
				Name = x.Name,
				Price = x.Price,
				PriceFormatted = x.Price.ToString("C")
			})
		};
	}
}

Now we have encapsulated the generation of our view model, and the handler can be used from our controller.

public ViewResult ProductDetails(ProductDetailsQuery query)
{
	var model = _queryHandler.Handle(query);
	return View(model);
}

Our controller is now only responsible for delegating the call to generate the model and returning the action result or serialized response.  In my next post I will take this a step further by introducing a common interface for our query handlers in order to accept multiple query objects and return types.


Throw Out Your Dependency Injection Container


Dependency injection containers (aka inversion of control containers) seem to be common in most modern applications.  There is no argument against the value of dependency injection, but I do have a couple of arguments against using a dependency injection container.  Like many other design patterns and practices, over time the development community seems to forget the original problem the pattern or practice was solving.

Constructor & Setter Injection

Passing your dependencies via the constructor is generally a better way of injecting dependencies than using setters. Constructor injection gives you a much clearer definition of which dependencies are needed to construct the object into a valid state.  My dependency fields are usually readonly, to prevent them from being set after object creation.

Regardless of whether you use constructor or setter injection with your container of choice, there is one injection method that is rarely used or mentioned: passing your dependency to the method that requires it.

How often have you seen a class take a pile of dependencies via its constructor, with those dependencies used in only a few of its methods?  I often see this when looking at CQRS command handlers.

class InventoryHandler
{
	private readonly IRepository _repository;
	private readonly IBus _bus;
	private readonly IValidateInventory _validation;
	
	public InventoryHandler(IRepository repository, IBus bus, IValidateInventory validation)
	{
		_repository = repository;
		_bus = bus;
		_validation = validation;
	}

	public void Handle(ReceiveInventory cmd)
	{
		if (!_validation.IsAvailable(cmd.ProductId)) {
			throw new InvalidOperationException("Cannot receive inventory");
		}

		// Receive Product into Inventory
		var inventory = _repository.GetById(cmd.ProductId);
		inventory.Receive(cmd.Quantity, cmd.UnitCost);
	}

	public void Handle(RemoveInventory cmd)
	{
		// Remove product from inventory
		var inventory = _repository.GetById(cmd.ProductId);
		inventory.RemoveInventory(cmd.Quantity);

		// Pub/Sub to notify other bounded context of inventory removal
		_bus.Send(new InventoryRemovedMessage(cmd.InventoryId));
	}
}

In this simplified example, IRepository is used in both Handle() methods; however, IBus and IValidateInventory are each used in only one method.

Instead of passing all the dependencies via the constructor, pass each dependency to the method that requires it.

class InventoryHandler
{
	private readonly IRepository _repository;
	
	public InventoryHandler(IRepository repository)
	{
		_repository = repository;
	}

	public void Handle(IValidateInventory validation, ReceiveInventory cmd)
	{
		if (!validation.IsAvailable(cmd.ProductId)) {
			throw new InvalidOperationException("Cannot receive inventory");
		}

		// Receive Product into Inventory
		var inventory = _repository.GetById(cmd.ProductId);
		inventory.Receive(cmd.Quantity, cmd.UnitCost);
	}

	public void Handle(IBus bus, RemoveInventory cmd)
	{
		// Remove product from inventory
		var inventory = _repository.GetById(cmd.ProductId);
		inventory.RemoveInventory(cmd.Quantity);

		// Pub/Sub to notify other bounded context of inventory removal
		bus.Send(new InventoryRemovedMessage(cmd.InventoryId));
	}
}
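
Who supplies the method-level dependency?  Whatever dispatches the command.  As a minimal sketch (the dispatching code is hypothetical, not from the original example):

// The caller supplies IValidateInventory or IBus only to the
// method that actually needs it; the handler itself stays lean.
var handler = new InventoryHandler(repository);
handler.Handle(validation, receiveCommand);
handler.Handle(bus, removeCommand);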


Nested Dependencies

The primary reason I prefer not to use a dependency injection container is that it can mask a code smell.  Nested dependencies and over-abstraction add complexity through unnecessary layers.

Have you seen something similar to this before?

public class MyService(IRepository repository) { ... }
public class Repository(IUnitOfWork unitOfWork) { ... }
public class UnitOfWork(IDbContext dbContext) { ... }
public class DbContext(IDbConnection dbConnection) { ... }
public class DbConnection(string connectionString) { ... }

If you are using a dependency injection container, you may not have noticed the excessive and unneeded abstraction layers in parts of your application.  If you had to manually wire up these nested dependencies through all the layers, it would be really painful.
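
To make that pain concrete, hand-wiring the chain above would look something like this (an illustrative sketch using the hypothetical types from the example):

// Every layer has to be constructed explicitly; the excess is hard to miss.
var service = new MyService(
	new Repository(
		new UnitOfWork(
			new DbContext(
				new DbConnection("Server=...;Database=...;")))));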

Masking the pain with a dependency injection container makes your application more complex.

Take a look at your application and see if you can remove your dependency injection container.   If you can, then what benefits did it provide?  If you can’t, ask yourself if you really need all those layers.


Simplify Your Code

The rise of package managers like NuGet and npm has made taking on external dependencies in your project easier.  Because of this ease, developers often add external dependencies without much thought about the complexity they may be adding.  I suggest you can simplify your code by limiting external dependencies.

Frameworks

Some frameworks are going to be the foundation of parts of your application.  ASP.NET MVC, for example, could be the base framework for your UI.  However, it should not be the foundation of your application; you should view it as the HTTP interface to your application.  Robert "Uncle Bob" Martin has a talk, Architecture: The Lost Years, which addresses this point.  Developers have been using MVC frameworks as a top-level architecture representing the entire application rather than just the delivery mechanism.

Dependencies

In recent years, with the rise of pushing more to the client side, the front-end development space has become full of SomeLibrary.js.

Take Durandal as an example for developing SPA applications.  It is built on top of jQuery, Knockout, and RequireJS, so to develop a SPA with Durandal you are taking on a minimum of four dependencies.  Odds are very good you will be adding more, such as Bootstrap, Breeze, Moment, etc.  I'm a fan of Durandal and Rob Eisenberg, which is why I'm really excited to see Aurelia.

You may be saying to yourself: "Who cares!  These are all popular and mature open source libraries!"  Yes they are, but that doesn't mean they are bug-free and simple.  They may also move at a pace quicker than your own release cycle, which can leave you constantly trying to stay current.

Any external dependency you rely on is now your code and your problem.

How do you explain to a customer or boss that a bug you discovered in an external dependency is causing an issue in your application?  Do you think they care that you didn't write it?  It's your problem.

What Smells?

AutoMapper is a good example of a popular dependency that can hide a code smell.  If you have a lot of DTOs that you are mapping through different layers, I can see why you would want to use AutoMapper; it would be really annoying to write all the simple mapping code from one DTO to another.  But hiding the smell with AutoMapper isn't necessarily the answer.  I would rather the question be asked: "Why are we mapping X -> Y -> Z?  Do we really need three DTOs before the data gets serialized to the client?"
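
If the mapping really is that trivial, writing it by hand costs very little.  A sketch, using hypothetical Product and ProductDto types:

// A trivial hand-written mapping.  If you need dozens of these across
// several DTO layers, question the layers before reaching for a mapper.
public static class ProductMappings
{
	public static ProductDto ToDto(this Product product)
	{
		return new ProductDto
		{
			Id = product.Id,
			Name = product.Name,
			Price = product.Price
		};
	}
}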

Limiting your external dependencies can simplify your code.  I'm not saying to ditch all your dependencies and never use anything external.  What I am saying is: be diligent and understand each dependency, how it works, and the problem it solves.

Do you really have the problem that dependency solves?  If so, why do you have that problem?


Code Reviews with Visual Studio


Code reviews are one of the most important development practices for improving quality, reducing bugs, and sharing knowledge.  Here is how to perform code reviews with Visual Studio.

In order to use code reviews with Visual Studio, you must be using TFVC (Team Foundation Version Control) within Visual Studio Online or Team Foundation Server.

Request Code Review

Before you commit your changes, go to the My Work section in Team Explorer.  Create your code review request by specifying the reviewer (who you want to perform the code review), title, area path, and description.


After submitting the code review request, you can suspend your current work while you wait for feedback from the code reviewer.  This allows you to begin work on another product backlog item.


Perform Code Review

Once a code review request has been sent to you, you can see it from the My Work section.


Opening the code review will show you the files modified so you can review with the standard diff view.  You can add comments to each file change to let the author know of any suggested changes.


Resume Work and Review Feedback

Once your code has been reviewed, you can resume your suspended work and view the feedback from the code reviewer.


Get Notified!

Once you start using the above workflow, you may want to get notified via email when someone sends you a code review request.  To do so from Visual Studio, access the Settings section in Team Explorer.


This will open Visual Studio Online in your browser, where you can manage your basic alerts.



Add a build number to your Assembly Version


Including the build number in your assembly version number can be a very useful feature.  Using reflection, you can retrieve your assembly version number and display it appropriately in your app.  Here is how to add a build number to your assembly version using the TeamCity continuous integration server.
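
For example, a minimal sketch of reading the version at runtime:

// Retrieve the version of the currently executing assembly and
// display it, e.g. in an about box or page footer.
var version = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version;
Console.WriteLine("Version {0}", version); // e.g. "Version 1.0.123.0"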

AssemblyInfo Patcher

TeamCity is a continuous integration server developed by JetBrains.  If you are looking at trying a new build server, I highly recommend giving it a try.  It should take no longer than an hour or two to install, configure, and build your project.

There is a built-in build feature that allows you to modify the AssemblyInfo.cs during the build process.  This feature works by scanning for all AssemblyInfo files (.cs, .vb, .cpp, .fs) in their usual file locations and replacing the AssemblyVersion, AssemblyFileVersion, and AssemblyInformationalVersion attributes with the values you define.
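
In other words, attributes like these in AssemblyInfo.cs get their values replaced (the version numbers here are illustrative):

// AssemblyInfo.cs after patching, assuming a build counter of 123
[assembly: AssemblyVersion("1.0.123.0")]
[assembly: AssemblyFileVersion("1.0.123.0")]
[assembly: AssemblyInformationalVersion("1.0.123.0")]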

Add Build Feature

Under your build configuration settings, add a new Build Feature.


Select the AssemblyInfo patcher.  Here you can specify the version format using parameters or static text.  In my example below, I'm including the %build.counter% parameter in the version format.


That's it!  After the build is complete, the output assembly contains the file version with the build number.



Visual Studio Online Check-In Policies


Want to produce better code and a more efficient development group? Start using Visual Studio Online check-in policies within your team project.

Check-in policies are rules you can define for a Visual Studio Online team project which are enforced when a developer attempts to check in their source code.

Note: You must be using Team Foundation Version Control (TFVC) with your project in order to use check-in policies.  Although VSO now supports Git as a version control system for your team project, check-in policies are not supported with Git.

One of the main reasons I started using check-in policies was to enforce associating work items with each changeset.  Having all changesets associated with work items enabled me to automate the creation of a change log during the build process.  If you are looking for a better way to visualize your Visual Studio Online work items, take a look at my LeanKit Visual Studio Online Integration blog post.

Source Control Settings

From the Team Explorer, access the settings menu and select Source Control under the Team Project heading.


Under the Check-in Policy tab, click the Add button to select the policy you want to add.


Without any extension or power tools, Visual Studio Online provides four team project check-in policies that you can specify:

Builds
Requires that the latest build was successful for each affected continuous integration build definition.

Changeset Comments Policy
Requires the developer to provide comments with the check-in.  These comments will be associated with the changeset.

Code Analysis
Requires the developer to run Code Analysis from Visual Studio prior to check-in.

Work Items
Requires the developer to associate at least one work item with the check-in.

Code Review Policy

There is a great policy written by Colin Dembovsky (@colindembovsky), available on the Visual Studio Gallery, that provides a Code Review Policy.

Once the extension is installed, you will have a new option available to add.


Once this policy is added, you will be unable to check in until a Code Review has been requested, closed, and has no "Needs Work" response.



Integrate LeanKit and Visual Studio Online


LeanKit and Visual Studio Online are both great tools.  Why not use them together?  Here is a guide to integrating LeanKit and Visual Studio Online.

Although Visual Studio Online (and Team Foundation Server) provides a task board to visualize work items and flow, I prefer the fully customizable Kanban board from LeanKit.  Thankfully, I found out that LeanKit has created an Integration Service, which is available on GitHub.

My goal is to manage all work items within LeanKit; however, I want to be able to associate Visual Studio Online work items to changesets (during check-in) within Visual Studio.  (LeanKit/VSO change log how-to coming soon!)

Overall, the installation and configuration is fairly straightforward. However, there were a couple of hiccups along the way that inspired this how-to.  I also want to point out that the main contributor to the integration service, David Neal (@reverentgeek), was very helpful in answering questions via Twitter.  Check out the source code if you're interested; it's pretty nice code.

Integration Service Installation

  1. Download the latest zip/executables from LeanKit’s website here.
  2. Follow the installation guide provided by LeanKit.

Although LeanKit provides a bit of an overview of the configuration, follow these steps specifically for connecting to Visual Studio Online (or Team Foundation Server).

Visual Studio Online – Alternate Authentication Credentials

Before you configure the integration service, you must enable alternate authentication credentials in your Visual Studio account profile.  This allows you to specify a username/password that the integration service can use to connect to the Visual Studio Online web service.

When in Visual Studio Online, access your account profile to enable the alternate authentication credentials.


LeanKit – Board Settings

In your LeanKit board settings, go to the Card ID Settings section.  Here you will want to make sure you are using the external card ID.  Do not select the auto-increment card ID setting, because we want the card ID to be the same as the Visual Studio Online work item number.


Configure Integration Service

After installation, browse to http://localhost:8090 to access the web interface.  Specify your LeanKit account and credentials.


Next, specify your Visual Studio Online account and alternate access credentials.


The integration service is very configurable in terms of mapping LeanKit cards to VSO work items and their various statuses.  Select the LeanKit board and the Visual Studio Online project you would like to create a mapping for.


In the Selection tab, you will want to select the VSO Work Item States and Types that will be mapped to LeanKit.


The Lanes and States tab will be populated with the swim lanes and columns from your Kanban board.  Select a column or lane and then select an available state to map.  The intent is that when a LeanKit card moves to that column or lane, the work item will be updated in VSO with the mapped state.  If you define multiple states, only the first matching state will be used.

In the example below, I've mapped the Backlog column to the New and To Do VSO states.


In the Card Type tab, specify which LeanKit card types you want to map to which VSO Work Items.


In the Options tab, specify how you want the integration service to sync.  It can sync changes bi-directionally; however, for my usage I only want to push changes from LeanKit to VSO.  As mentioned above, I only want to use LeanKit for managing work items, but I want them in VSO so I can associate work items to changesets.


Be sure to click the save button if you haven’t already.  What may not be obvious at this point is that you need to Activate your new configuration.  Click on the Activation tab, then click on the big red Activate Now button.


That’s it!

I’ve created a LeanKit Defect Card in my Product Backlog and the Integration Service has created the Bug work item in VSO.

