Roundup #72: Succinct C#, IHostedService Shutdown Timeout, try-convert, Coupling, Cohesion, and Microservices

Writing More Succinct C#

When I look at a lot of C# code nowadays, I find myself thinking, “wow, that code could be made SO MUCH SMALLER!” C# is a very flexible language: it lets you write clean, functional code, but also very bloated code.
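As a small illustration of the kind of shrinking the post has in mind (the method names here are hypothetical, not taken from the article), the same filtering logic written both ways:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Succinct
{
    // Verbose: explicit loop, temporary list, full statement-bodied method.
    public static List<string> GetActiveNamesVerbose(List<(string Name, bool Active)> users)
    {
        var result = new List<string>();
        foreach (var user in users)
        {
            if (user.Active)
            {
                result.Add(user.Name);
            }
        }
        return result;
    }

    // Succinct: an expression-bodied member plus LINQ says the same thing in one line.
    public static List<string> GetActiveNames(List<(string Name, bool Active)> users) =>
        users.Where(u => u.Active).Select(u => u.Name).ToList();
}
```

Both methods return the same result; the second simply drops the ceremony.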


Extending the shutdown timeout setting to ensure graceful IHostedService shutdown

I was seeing an issue recently where our application wasn’t running the StopAsync method in our IHostedService implementations when the app was shutting down. It turns out that this was due to some services taking too long to respond to the shutdown signal. In this post I show an example of the problem, discuss why it happens, and show how to avoid it.
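The setting in question is HostOptions.ShutdownTimeout, which bounds how long the generic host waits for hosted services to finish StopAsync (it defaults to 5 seconds). A minimal sketch of extending it, with a hypothetical slow hosted service left commented out:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // Give hosted services up to 30 seconds (instead of the 5-second
        // default) to respond to the shutdown signal and complete StopAsync.
        services.Configure<HostOptions>(opts =>
            opts.ShutdownTimeout = TimeSpan.FromSeconds(30));

        // services.AddHostedService<MySlowService>(); // hypothetical service
    })
    .Build();

await host.RunAsync();
```

If a service still exceeds the configured timeout, the host gives up waiting and shutdown proceeds without it completing.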



try-convert

try-convert is a simple tool that will help in migrating .NET Framework projects to .NET Core.

This tool is for anyone looking to get a little help migrating their projects to .NET Core (or .NET SDK-style projects).

As the name suggests, this tool is not guaranteed to fully convert a project into a 100% working state. The tool is conservative and does as good a job as it can to ensure that a converted project can still be loaded into Visual Studio and build. However, there are an enormous number of factors that can result in a project that may not load or build, which this tool explicitly does not cover. These include:

Complex, custom builds that you may have in your solution

API usage that is incompatible with .NET Core

Unsupported project types (such as Xamarin, WebForms, or WCF projects)

If the bulk of your codebase is generally capable of moving to .NET Core (such as lots of class libraries with no platform-specific code), then this tool should help quite a bit.
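try-convert ships as a .NET global tool, so a typical run looks something like this (a sketch assuming the .NET SDK is installed; the project path is hypothetical):

```shell
# Install try-convert as a global .NET tool
dotnet tool install -g try-convert

# Convert a project in place; -w (workspace) can point at a project,
# solution, or folder
try-convert -w ./MyLibrary/MyLibrary.csproj
```

Since the tool rewrites project files in place, it is worth running it on a clean branch so the changes are easy to review or revert.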


Build Stuff #6 e-Meetup – Sam Newman – Coupling, Cohesion, and Microservices

The terms coupling and cohesion come from the world of structured programming, but they are also thrown about in the context of microservices. In this session, I look at the applicability of these terms to a microservice architecture and also do a deep dive into the different types of coupling to explore how ideas from the 1970s still have a lot of relevance to the types of systems we build today.


Enjoy this post? Subscribe!

Subscribe to our weekly Newsletter and stay tuned.

Why use DTOs (Data Transfer Objects)?

Should you really use DTOs (Data Transfer Objects)? Does it seem like a lot of work mapping your database entities to another object? Why bother? The simple answer is coupling.

Data Transfer Objects

First, what are DTOs? When people refer to Data Transfer Objects, what they mean are objects that represent data structures that generally do not contain any business logic or behavior. If they do contain behavior, it’s generally trivial.

Data Transfer Objects are typically serialized by the producer and then deserialized by the consumer. Oftentimes these consumers live in another process, possibly written in an entirely different language and/or platform.
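A minimal sketch of that producer/consumer round trip using System.Text.Json (the CustomerDto shape here is hypothetical):

```csharp
using System;
using System.Text.Json;

// A DTO: a plain data structure with no behavior, safe to serialize
// across a process or platform boundary.
public record CustomerDto(Guid Id, string Name, string Email);

public static class Program
{
    public static void Main()
    {
        var dto = new CustomerDto(Guid.NewGuid(), "Jane Doe", "jane@example.com");

        // The producer serializes the DTO...
        string json = JsonSerializer.Serialize(dto);

        // ...and a consumer (possibly another process, or another language
        // entirely) deserializes the same contract on its side.
        var received = JsonSerializer.Deserialize<CustomerDto>(json);
        Console.WriteLine(received);
    }
}
```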


I’ve recorded a short video explaining this with some sample code. Check out the video and make sure to subscribe to my YouTube Channel.

Crossing Boundaries

The most common example of this is creating an HTTP API using ASP.NET Core that returns objects from its controller actions that are ultimately serialized to JSON. The consumer is oftentimes JavaScript that uses those JSON responses to render the UI in the browser using a component or SPA framework.


If you’re not using DTOs, then you’re likely exposing internal data structures.

The biggest culprits are simple TODO demo applications that expose database entities directly, meaning they output a serialized list of TODOs to the JavaScript frontend. And when you want to create a new record, they often take the incoming TODO object and insert it directly into the database. This is leaking internals.

This is my biggest complaint with simple demo applications: they often don’t implement or follow certain practices because, rightly so, those practices aren’t applicable to a simple TODO application. However, people take the example of a simple TODO app and carry the same patterns into a much larger application.

The problem arises when internal data objects are serialized and consumed by a client you either don’t own or cannot change easily. Or, as happens more often, the application itself gets very large.


The moment you want to change internal data objects, you have to update the clients.

Take this simple example of a Customer: an internal data structure we use throughout the system and likely persist using an ORM.
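A hypothetical sketch of such an internal entity (the property names are illustrative, not the post's actual sample):

```csharp
using System;

// Internal entity, persisted via an ORM. Renaming or splitting a property
// here (say, Name into FirstName/LastName) is trivial inside our own
// codebase, but breaks every external client consuming the serialized form.
public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public DateTime CreatedUtc { get; set; }
}
```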

If we are serializing this structure and clients are consuming it, then changing the structure is likely going to break our clients.

This change would require a change to all of our clients. We could easily make this change in our own codebase and have all our own usages be correct, but we would be breaking all of our decoupled clients that get a serialized representation.


When creating an HTTP API, it’s all about representations. Most often, clients need a rich representation of a resource, not just a serialized version of a database entity. They oftentimes need related data.

Having your API return rich representations means you must do some level of composition to create an object, not just a database entity, that will get serialized. This is where a DTO comes into play.
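A sketch of that composition, assuming hypothetical Customer and Order entities (none of these names come from the post):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical internal entities.
public class Customer { public Guid Id { get; set; } public string Name { get; set; } }
public class Order { public Guid Id { get; set; } public decimal Total { get; set; } public DateTime PlacedUtc { get; set; } }

// The representation composes data from more than one entity, rather than
// being a one-to-one serialization of a single database row.
public record OrderSummaryDto(Guid Id, decimal Total);
public record CustomerRepresentation(
    Guid Id,
    string DisplayName,
    IReadOnlyList<OrderSummaryDto> RecentOrders);

public static class CustomerMapper
{
    public static CustomerRepresentation ToRepresentation(Customer customer, IEnumerable<Order> orders) =>
        new(customer.Id,
            customer.Name,
            orders.OrderByDescending(o => o.PlacedUtc)
                  .Take(5)
                  .Select(o => new OrderSummaryDto(o.Id, o.Total))
                  .ToList());
}
```

Only CustomerRepresentation gets serialized; the entities behind it can change shape freely as long as the mapping keeps producing the same contract.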

I actually don’t often use the term DTO, but rather the word Representation or ViewModel. The purpose is still the same: it’s a data structure that serves as a contract between the producer and the consumer. That contract should remain stable (through backwards compatibility) or have a versioning strategy.


The reason you want to use DTOs is that you want clients to couple to that contract, not to your internal data structures. This allows you to modify and evolve your internals freely without breaking clients.



.NET Portability Analyzer

In order to migrate your application from .NET Framework to .NET Core, one part of the migration is making sure your existing code that targets the .NET Framework BCL (Base Class Library) also works with the .NET Core BCL. This is where the .NET Portability Analyzer comes in.

Migrating from .NET Framework to .NET Core

This post is in a blog series for migrating from .NET Framework to .NET Core. Here are some earlier posts if you need to catch up:


.NET Framework and .NET Core are two entirely different things. Yes, they share the name “.NET”, but they are made up of different runtimes and Base Class Libraries. The Base Class Library (BCL) is the foundation of the framework, providing .NET types such as Object, String, Boolean, Array, List, DateTime, and other primitive types and data structures.

.NET Standard

To share code/libraries between .NET Framework applications and .NET Core applications, you need .NET Standard. I’ve found the simplest way to explain it is to think of .NET Standard as an interface (or a contract).

Both .NET Framework and .NET Core implement that interface.

Because of this, if you write an application that is .NET Standard 2.0 compliant, you can then run your application on either .NET Framework or .NET Core.

  • .NET Framework 4.8 supports .NET Standard 2.0
  • .NET Core 2.1 supports .NET Standard 2.0
  • .NET Core 3.1 supports .NET Standard 2.1

Again, because both .NET Framework 4.8 and .NET Core 2.1 support .NET Standard 2.0, you could run your application on either.

Note that .NET Core 3.1 supports .NET Standard 2.1. And it has been noted that .NET Framework will never support .NET Standard 2.1.
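As a concrete illustration, a class library declares the contract it targets in its project file. A minimal sketch (the project name is hypothetical):

```xml
<!-- MyLibrary.csproj: an SDK-style class library targeting .NET Standard 2.0.
     Both .NET Framework 4.8 and .NET Core 2.1+ implement this contract,
     so either kind of application can reference the compiled library. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```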

.NET Portability Analyzer

The .NET Portability Analyzer is a tool that analyzes your codebase and provides a report about what types you’re using from the BCL that are supported in .NET Standard or .NET Core.

This allows you to determine how close you are to being able to migrate to .NET Core. A lot of the types and APIs that exist in .NET Framework also now exist in .NET Core. There are, however, some gaps that will never be officially implemented in .NET Core. Most notably, this includes any type that lives in the System.Web namespace.

There are two ways to run the .NET Portability Analyzer. The simplest is to use the Visual Studio Extension.

Once installed, you can access the settings from the solution context menu:

The Portability Analyzer settings allow you to configure which target platforms to review against. You can specify specific versions of .NET Core or .NET Standard.

After configuring, you can run “Analyze Assembly Portability” from the solution context menu. This will generate an Excel file (.xlsx) containing all the types in your code that are not supported on the target platforms you specified.

The Portability Summary section gives the overall percentage of each assembly/project in your solution that is supported on each target platform.

The details section lists all the APIs that are missing from a target platform.

The missing assemblies section of the report will likely include a list of dependencies that you’re using via NuGet. I’ll cover how to handle 3rd party dependencies in another post.

This report will be invaluable for tracking down APIs that you’re using that are not supported in .NET Core.

Once you have this list of missing APIs in hand, it’s up to you to determine how you want to handle or rewrite that code.

If you’re not using Visual Studio, you can simply build the .NET Portability Analyzer yourself and run it against your assembly. You can find the source and instructions on the project’s GitHub repo.

Useful Links

If you’ve used the Portability Analyzer and have any tips or comments that may help others, let me know in the comments or on Twitter.
