Organizing Code by Feature using Vertical Slices

I’ve written quite a bit on this blog about organizing code by feature and vertical slices instead of by technical concern. However, it doesn’t seem to have caught on as much as I’d hoped. I’m always seeing posts on Twitter where people finally discover this concept for themselves.

Technical Concern

I’ve always found it interesting that layers or technical concerns ended up being seemingly the most important thing when organizing code. I’m curious how this came to be. My guess is there are many reasons, but I think project templates/scaffolding are a culprit.

If you create a brand new ASP.NET Core MVC application, you’re going to get a folder for Models, Views, and Controllers. By default, everything is organized by technical concern.

What I find interesting about this is you usually end up having Models, Views, and Controllers all be a 1:1:1. Meaning a ViewModel is likely only used in one View, which is only used in one Controller Action. If this is the case, what benefit is there to having these files/classes live in a project structure based on technical concern?

What does your application do?

I care about capabilities. Features. I care about what an application does.

When I’m looking at a project structure and see a ManageController file, that gives me no insight into the features or capabilities it brings to the system. Looking at such a project structure, of course, you can tell it’s an e-commerce site, but what is the rich set of features it provides?

Change

When I was creating systems in layers, my pain point was making simple changes to an existing feature or adding a new feature.

Having to edit multiple files across multiple projects seemed absurd and cumbersome.

Data flows through layers. When a user invokes a request, data must flow through all the layers to ultimately hit the database. The same occurs when you need to present data to a user.

In some situations, you could be editing more than 6 files for a simple change.

  • DataAccess/ShoppingCartModel.cs
  • DataAccess/ShoppingCartRepository.cs
  • BusinessLogic/ShoppingCartServices.cs
  • Controllers/ShoppingCartController.cs
  • ViewModels/ShoppingCartViewModel.cs
  • Views/ShoppingCart/View.cshtml

Organize Code by Feature

When you start organizing code by feature, you get the benefit of not having to jump around a codebase. Things that are related in behavior are placed together. If you need to make a change to how products are added to a Shopping Cart, guess which file you’d be changing?
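
For contrast, the same files from the earlier list could all live together under the feature they belong to. This is only one possible, hypothetical layout:

  • Features/ShoppingCart/ShoppingCartModel.cs
  • Features/ShoppingCart/ShoppingCartRepository.cs
  • Features/ShoppingCart/ShoppingCartServices.cs
  • Features/ShoppingCart/ShoppingCartController.cs
  • Features/ShoppingCart/ShoppingCartViewModel.cs
  • Features/ShoppingCart/View.cshtml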

Layers in Features

I’m not saying to throw out layers. Layers ultimately live inside the vertical slice of a feature.

The way I like to describe this is to think of your system/service as a cake that has multiple layers. Each layer represents a technical concern. If you have a data access layer, it controls all of the data access within that service/system.

Instead of it being application wide, cut a slice out of that cake.

You still have layers. You still have technical responsibilities, but you’ve scoped those boundaries to within the vertical slice of a feature.
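
As a rough sketch, here’s what a single slice might look like with its own layers kept inside the feature. The names (AddToCart, ICartStore) are hypothetical, purely to illustrate the shape:

// Features/ShoppingCart/AddToCart.cs
// Hypothetical sketch: one feature with its request, business logic, and
// data access abstraction all scoped to the slice.
using System;
using System.Threading.Tasks;

namespace Features.ShoppingCart
{
    public static class AddToCart
    {
        // The "model" for this slice only.
        public record Request(Guid ProductId, int Quantity);

        // Data access owned by this slice. It could be backed by EF6, EF Core,
        // or raw SQL without affecting any other slice.
        public interface ICartStore
        {
            Task AddItemAsync(Guid productId, int quantity);
        }

        // The "business logic" layer for this slice.
        public class Handler
        {
            private readonly ICartStore _store;
            public Handler(ICartStore store) => _store = store;

            public Task HandleAsync(Request request)
                => _store.AddItemAsync(request.ProductId, request.Quantity);
        }
    }
}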

Dependencies

The biggest win when you start organizing by feature is that your dependencies are now within the vertical slice. This allows you to manage dependencies per vertical slice.

If you wanted to move from EF6 to EF Core, you could do this one vertical slice at a time. It does not mean you have to re-write an entire data access layer to migrate from one to the other.

Defining Service Boundaries by Splitting Entities

Defining Service Boundaries is a really important part of building a loosely coupled system, yet it can often be difficult. Here’s one way of realizing where service boundaries lie: by looking at Entities, their properties, and how they relate.

Catalog

For the purpose of this example, I’m going to use an e-commerce, distribution domain. Don’t worry too much if you don’t know distribution. At a high level, you buy products from vendors, store them in your warehouse, then sell them to customers.

Here’s what my Loosely Coupled Monolith demo application has as a solution structure.

There are 4 boundaries defined: Catalog, Purchasing, Sales, and Shipping.

In the Catalog project, we have a ProductModel that represents a product in our system.
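
A minimal sketch of what that ProductModel might contain, based on the properties discussed later in this post (the ProductId is an assumption):

using System;

public class ProductModel
{
    public Guid ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public decimal Cost { get; set; }
    public int Quantity { get; set; }
}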

Dependency

Because we have this singular model of a product in our Catalog boundary, you can imagine the rest of the system will need access to it. In the Sales boundary, we have a feature to create an Order, which requires the Price of a product we’re purchasing.

For Sales to get the Product Price, we’ve exposed an interface & implementation to return product info.
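
A sketch of what that could look like; the IProductQuery name comes from the post, while ProductDto and CatalogDbContext are assumptions for illustration:

using System;

public record ProductDto(Guid ProductId, string Name, decimal Price);

public interface IProductQuery
{
    ProductDto GetProduct(Guid productId);
}

// Implementation lives in the Catalog boundary (CatalogDbContext assumed).
public class ProductQuery : IProductQuery
{
    private readonly CatalogDbContext _db;
    public ProductQuery(CatalogDbContext db) => _db = db;

    public ProductDto GetProduct(Guid productId)
    {
        var product = _db.Products.Find(productId);
        return new ProductDto(product.ProductId, product.Name, product.Price);
    }
}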

Now here’s an example of using the IProductQuery.GetProduct() in our PurchaseProduct feature in our Sales boundary.
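
Roughly, the Sales-side usage looks something like this sketch; the handler shape and Order record are assumptions:

using System;

public record Order(Guid ProductId, int Quantity, decimal Price);

public class PurchaseProductHandler
{
    // Sales now depends on the Catalog boundary just to get the Price.
    private readonly IProductQuery _productQuery;

    public PurchaseProductHandler(IProductQuery productQuery) => _productQuery = productQuery;

    public Order Handle(Guid productId, int quantity)
    {
        var product = _productQuery.GetProduct(productId);
        return new Order(productId, quantity, product.Price);
    }
}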

Splitting Entities

Now, it might not seem all that bad that Sales has to take a dependency on Catalog. But we want our boundaries to be as autonomous as possible.

The primary reason I think the above example occurs is the notion that an entity must live in only one place. I call this “Entity as a Service”, where a particular boundary owns an entity and provides an API to manage that entity.

In my example in this post, the Catalog boundary owns the Product entity.

What’s better, in my opinion, is to have each boundary own the behavior and data around its own concept of an entity.

Sale Price

The Price property/field on our ProductModel has no bearing on any other property on that model. Meaning, the price does not affect the name of the product. The same goes for Cost and Quantity. If any of those properties were to change, it has no bearing on what the Price is.

If they have no bearing to each other, why are they in the same model?

In this case, Price on a Product actually belongs to the Sales boundary, not the Catalog.

We can have the same concept of a Product live in multiple boundaries.

For the Sales Boundary, I’ve created its own ProductModel and added it to our Entity Framework SalesDbContext.
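
A minimal sketch of what that might look like, assuming EF Core:

using System;
using Microsoft.EntityFrameworkCore;

// The Sales boundary's own concept of a Product: only what Sales cares about.
public class ProductModel
{
    public Guid ProductId { get; set; }
    public decimal Price { get; set; }
}

public class SalesDbContext : DbContext
{
    public DbSet<ProductModel> Products { get; set; }
}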

Now the PurchaseProduct feature does not need a dependency on Catalog anymore.

Cost

Since we no longer need the Price in our Catalog ProductModel, I’ve removed it.

Now the Cost property is the next easy target. The same likely applies: our Purchasing boundary would be the owner of the Cost. It’s going to be managing this value with the Vendor/Manufacturer, so it makes sense that it would own the Cost of a Product.

Quantity

Quantity here is referring to the Quantity on Hand of the product in the warehouse. The logical next step would be that the Quantity on Hand is managed by an Inventory or Warehouse boundary.

Warehouses with physical goods can sometimes do what is called an Inventory Adjustment. This can happen for various reasons, such as people actually counting the products, finding damaged product, etc. This is a way to update the system to reflect the actual quantity on hand.

What if we had a business rule that stated you cannot purchase a product if there is no quantity on hand?

How would you model this? Would Sales have to have a dependency on the Inventory/Warehouse context so it could get the latest Quantity on Hand of a product?

In the situations I’ve been in, they use a business function called Available to Promise. This is a value calculated from the quantity on hand in the warehouse, what has been purchased from vendors but not yet received, and what has been ordered.
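
In other words, it boils down to something like this sketch (the exact inputs vary by business):

// Available to Promise: on hand, plus inbound from vendors not yet received,
// minus what customers have ordered but hasn't shipped yet.
static int AvailableToPromise(int quantityOnHand, int onOrderFromVendors, int orderedByCustomers)
    => quantityOnHand + onOrderFromVendors - orderedByCustomers;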

Using asynchronous messaging/events, the Sales context would keep track of a product’s ATP value and use that for the business rule around when a Product can be ordered.
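
As a sketch of that idea, Sales could consume events published by the other boundaries and keep its own running ATP value per product. The event names here are assumptions, not the demo application’s actual contracts:

using System;
using System.Collections.Generic;

// Events published by other boundaries (names assumed for illustration).
public record InventoryAdjusted(Guid ProductId, int QuantityOnHand);   // Warehouse
public record PurchaseOrderPlaced(Guid ProductId, int Quantity);       // Purchasing
public record OrderPlaced(Guid ProductId, int Quantity);               // Sales

public class AvailableToPromiseProjection
{
    private sealed class ProductAtp
    {
        public int OnHand;
        public int OnOrderFromVendors;
        public int OrderedByCustomers;
    }

    private readonly Dictionary<Guid, ProductAtp> _state = new();

    public void When(InventoryAdjusted e) => Get(e.ProductId).OnHand = e.QuantityOnHand;
    public void When(PurchaseOrderPlaced e) => Get(e.ProductId).OnOrderFromVendors += e.Quantity;
    public void When(OrderPlaced e) => Get(e.ProductId).OrderedByCustomers += e.Quantity;

    // The value the "can this product be ordered?" business rule would check.
    public int AvailableToPromise(Guid productId)
    {
        var s = Get(productId);
        return s.OnHand + s.OnOrderFromVendors - s.OrderedByCustomers;
    }

    private ProductAtp Get(Guid id)
    {
        if (!_state.TryGetValue(id, out var atp))
            _state[id] = atp = new ProductAtp();
        return atp;
    }
}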

Defining Service Boundaries

Defining service boundaries is difficult. Start out by looking at your Entities and splitting them up, sharing the same concept of an Entity across multiple boundaries.

An Entity does NOT need to reside in one single boundary. As shown in this example, the concept of a Product can reside in many different boundaries, with each boundary owning the data/behavior it is responsible for.

Scaling Hangfire: Process More Jobs Concurrently

As you start enqueuing more background jobs with Hangfire, you might need to increase the number of Consumers that can process jobs. Scaling Hangfire can be done in a couple of ways that I’ll explain in this post, along with one tip on what to be aware of when starting to scale out.

Producers & Consumers

First, let’s clear up how Hangfire works with producers and consumers.

A Hangfire producer creates background jobs but does not execute them. This is when you’re using the BackgroundJobClient from Hangfire.

Once you call Enqueue, the job is stored in Hangfire JobStorage. There are many storage options you can use, such as SQL Server, Redis, and others.
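
For example, a producer only enqueues work; nothing executes at this point (this sketch assumes JobStorage has already been configured):

using System;
using Hangfire;

var client = new BackgroundJobClient();

// Serializes the call and stores it in JobStorage for a server to pick up.
client.Enqueue(() => Console.WriteLine("Processing an order..."));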

The Hangfire Server, which is a consumer, will get that job from JobStorage and then execute it.

There are two ways of scaling Hangfire to add more consumers: Worker Threads and Hangfire Servers.

Worker Threads

A Hangfire server can be hosted along with ASP.NET Core, or entirely standalone in a Console app using the Generic Host, which is the example below.
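
Here’s a rough sketch of that setup; the SQL Server storage and connection string are placeholders, not a recommendation:

using Hangfire;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

await Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // Job storage is required; SQL Server is just one option.
        services.AddHangfire(config => config.UseSqlServerStorage("<connection string>"));

        // Registers the Hangfire Server (the consumer) as a hosted service.
        services.AddHangfireServer();
    })
    .Build()
    .RunAsync();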

The Hangfire Server uses multiple threads to perform background jobs. Meaning it can process a background job per thread within the Hangfire server. This allows you to execute background jobs concurrently.

By default, the number of worker threads it uses is 5 per processor, with a maximum of 20.

Math.Min(Environment.ProcessorCount * 5, 20);

However, you can configure this by setting the WorkerCount in AddHangfireServer().
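
For example, inside ConfigureServices (30 here is just an illustrative value):

// Overrides the default of Math.Min(Environment.ProcessorCount * 5, 20).
services.AddHangfireServer(options => options.WorkerCount = 30);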

Constraints

Ultimately you’re constrained by the host where the Hangfire Server is running. Regardless of whether you’re running in a container, VM, or physical server, you’re going to be constrained by the CPU and memory of the host. Meaning, you cannot just arbitrarily set the WorkerCount to a very high number, as you could max out the CPU if you had a high number of concurrent jobs that are CPU intensive.

You’ll have to monitor your application and the background jobs specific to your app to determine what the right number is. The default is a good starting point.

Hangfire Servers

The second option for scaling Hangfire is to simply run more Hangfire Servers.

Now, when running two instances of my Hangfire Server, the dashboard shows each server with 30 worker threads. This means our application can process 60 jobs concurrently.

Hangfire fully manages dispatching jobs to the appropriate server. You simply need to add servers to increase consumers, which ultimately increases the number of jobs you can process.

Downstream Services

Once you start scaling out by increasing the overall worker count or adding Hangfire servers, you will want to pay attention to downstream services.

If, for example, your background jobs interact with a database, you’re now going to be adding more load to that database because you’re performing more jobs concurrently.

Just be aware that adding more consumers can move the bottleneck to downstream services. They also become a constraint when trying to scale a system that’s using Hangfire.
