.NET Serialization Benchmark 2019 Roundup

With .NET Core 3.0 and .NET 4.8 released it is time to check again which serializer performs best. The older overviews still contain some interesting insights because the focus tends to shift a bit from year to year:

Time moves on: some libraries are no longer maintained or no longer compatible with .NET Core 3. These were removed:

| Name | Reason |
|------|--------|
| ZeroFormatter | Project was abandoned by the maintainer in favor of MessagePack-CSharp (see the GitHub issue). Crashes on .NET Core 3 with System.BadImageFormatException: 'Bad IL format.' |
| Wire | Deserialization fails. See Issue1 and Issue2. Besides this there has been no project activity since 2017. |
| Hyperion | Serialization does not work because it accesses internal fields of exceptions. See Issue. |


Tested Serializers




| Serializer | Data Format | Version Tolerant | # Commits / Year of Last Commit (as of 8/2019) |
|------------|-------------|------------------|-----------------------------------------------|
| Apex | Binary | No | 143 / 2019 |
| BinaryFormatter | Binary | Yes | ? |
| Bois | Binary | ? | 181 / 2019 |
| DataContract | Xml | Yes | ? |
| FastJson | Json | Yes | 138 / 2019 |
| FlatBuffer | Binary | Yes | 1701 / 2019 |
| GroBuf | Binary | Yes | 322 / 2019 |
| JIL | Json | Yes | 1272 / 2019 |
| Json.NET | Json | Yes | 1787 / 2019 |
| JsonSerializer (.NET Core 3.0) | Json | Yes | ? |
| MessagePackSharp | Binary (MessagePack) | Yes | 715 / 2019 |
| MsgPack.Cli | Binary (MessagePack) | Yes | 3296 / 2019 |
| Protobuf.NET | Binary (Protocol Buffers) | Yes | 1018 / 2019 |
| ServiceStack | Json | Yes | 2526 / 2019 |
| SimdJsonSharp | Json | No | 74 / 2019 |
| SlimSerializer | Binary | No | ~100 / 2019 |
| UTF8Json | Json | Yes | 180 / 2019 |
| XmlSerializer | Xml | Yes | ? |

New additions since 2018 are Apex, SimdJsonSharp, Utf8Json, and JsonSerializer, which comes with .NET Core 3.

Test Execution

You can clone my serializer testing framework from https://github.com/Alois-xx/SerializerTests and compile it. Then execute RunAll.cmd from the SerializerTests directory, which runs the tests on .NET Framework and .NET Core 3. I have put the output directory above the source directory (e.g. SerializerTests\..\SerializerTests_hh_mm_ss.ss\) so that measured data is not checked in by accident.

Test Results


And another time for .NET 4.8, where some of the new .NET Core only serializers are missing:


The times are shown for .NET Core 3.0 and .NET 4.8 for one million objects, sorted by deserialize time. The tests were performed on my i7-4770K CPU @ 3.50 GHz on Windows 10.0.18362.356 (= 1903) with 16 GB RAM. Normally you read data more often than you write it, which is why I put the emphasis on deserialize times. NopSerializer and SimdJsonSharpSerializer test only deserialization, hence the 0 in the graph for their serialization times.

But You Are Doing It All Wrong Without Benchmark.NET

How accurate are the measured numbers? As usual there are many factors such as virus scanners, Windows Update downloads, and other things which can shift timings in the region of 10%. For the more memory intensive (read: slower) serializers it can sometimes reach up to 30%, because page fault performance can differ quite a lot and has many influencing factors. Although the absolute timings may vary, the relative ordering remains pretty stable when you measure at one point in time. The measurement error within one test run is < 5%, which is good enough to decide which serializer is faster for your specific use case. Every test was executed five times and the median value was taken. I have experimented with 10 or more runs, but the resulting numbers did not stabilize: if the test suite runs for an extended period of time (hours), the machine may reach different states (like downloading and unpacking a Windows or other application update), which introduced systematic errors. For that reason I went back to rather short test runs to ensure that the relative performance of the tested serializers stays comparable.

And no, I have not used Benchmark.NET, because it is only good for micro benchmarks. It goes to great lengths to get stable numbers, but the price is that it executes a test which runs for one microsecond for several minutes. If the same approach were used for the serializers, the test suite would run for days, and you still would not know whether the numbers are repeatable the next day, because these are not real micro benchmarks which depend only on the CPU. Besides that, Benchmark.NET makes it pretty hard to measure first-time init effects such as JIT timing overhead.
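The median-of-five scheme described above can be sketched like this (MeasureMedianSeconds is an illustrative helper, not the actual test-suite code):

```csharp
using System;
using System.Diagnostics;

static double MeasureMedianSeconds(Action test, int runs = 5)
{
    var times = new double[runs];
    for (int i = 0; i < runs; i++)
    {
        var sw = Stopwatch.StartNew();
        test();
        sw.Stop();
        times[i] = sw.Elapsed.TotalSeconds;
    }
    Array.Sort(times);       // the median is robust against a single slow outlier
    return times[runs / 2];  // e.g. the third value of five runs
}
```

Taking the median instead of the mean keeps one virus-scanner-induced outlier from skewing the result.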
I am aware that there is a repository of Benchmark.NET enabled performance tests named dotnet performance on GitHub, where I have added a .NET Core 3 test case for IndexViewModel, as mentioned in https://github.com/dotnet/performance/pull/456/commits. With the Benchmark.NET tests I get only a perf increase of 30% with the new .NET Core 3 Json serializer. I find it strange that the current repo contains no tests for the new Json serializer of .NET Core 3. I have added the tests for it to the .NET test suite like any other, like this:

        [Benchmark(Description = "NetCoreJson")]
        public T NetCoreJson_()
        {
            memoryStream.Position = 0;
            return (T)System.Text.Json.JsonSerializer.DeserializeAsync(memoryStream, typeof(T)).Result;
        }

        [GlobalSetup(Target = nameof(NetCoreJson_))]
        public void SetupNetCoreJson()
        {
            memoryStream.Position = 0;
            System.Text.Json.JsonSerializer.SerializeAsync<T>(memoryStream, value).Wait();
        }

|                     Method |      Mean |    Error |   StdDev |    Median |       Min |       Max |   Gen 0 |  Gen 1 | Gen 2 | Allocated |
|--------------------------- |----------:|---------:|---------:|----------:|----------:|----------:|--------:|-------:|------:|----------:|
|                        Jil |  64.25 us | 0.159 us | 0.124 us |  64.23 us |  64.07 us |  64.45 us |  6.4037 | 0.7684 |     - |   26.7 KB |
|                   JSON.NET |  81.11 us | 0.808 us | 0.756 us |  81.12 us |  79.98 us |  82.31 us |  8.3682 |      - |     - |  34.27 KB |
|                   Utf8Json |  38.81 us | 0.530 us | 0.496 us |  38.66 us |  38.29 us |  39.76 us |  5.1306 |      - |     - |  21.58 KB |
|               NetCoreJson  |  49.78 us | 0.692 us | 0.648 us |  49.51 us |  49.07 us |  51.00 us |  5.3066 | 0.5896 |     - |  21.91 KB |
| DataContractJsonSerializer | 343.39 us | 6.726 us | 6.907 us | 340.58 us | 335.44 us | 355.18 us | 20.3125 | 1.5625 |     - |  88.38 KB |

Then I executed MicroBenchmarks.exe with test case #92, which runs the deserialization tests on IndexViewModel. In my test suite I only test stream based serializers, because I also want to support large datasets which need no transformation into a string before the serializer is willing to deserialize the data. You can extend the Json_FromString tests as well, which leads to similar results:


|      Method |     Mean |    Error |   StdDev |   Median |      Min |      Max |  Gen 0 |  Gen 1 | Gen 2 | Allocated |
|------------ |---------:|---------:|---------:|---------:|---------:|---------:|-------:|-------:|------:|----------:|
| NetCoreJson | 51.52 us | 0.498 us | 0.441 us | 51.49 us | 50.78 us | 52.53 us | 8.2508 | 0.4125 |     - |   34.2 KB |
|         Jil | 42.83 us | 0.414 us | 0.388 us | 42.73 us | 42.27 us | 43.56 us | 5.6662 | 0.6868 |     - |  23.49 KB |
|    JSON.NET | 77.07 us | 0.580 us | 0.542 us | 77.22 us | 76.19 us | 78.12 us | 7.6523 | 0.9183 |     - |  31.31 KB |
|    Utf8Json | 42.64 us | 0.263 us | 0.246 us | 42.65 us | 42.27 us | 43.06 us | 8.1967 | 0.3415 |     - |  33.87 KB |

This emphasizes again that you should not blindly trust random performance numbers mentioned on GitHub. This one was created with MicroBenchmarks.exe --filter "MicroBenchmarks.Serializers.Json_FromString*" after adding NetCoreJson to the mix. There we find again a perf increase of 32% compared to Json.NET measured with Benchmark.NET. But this is nowhere near the factor 2 originally mentioned by Scott Hanselman. Perhaps at that point in time the serializer did not yet fully support all property types, which resulted in great numbers. To combat confirmation bias I write the serialized data to disk so I can check the file contents and verify that all required data was really written and that, for example, private fields were not omitted by this or that serializer, which is a common source of measurement errors when a serializer shows too good numbers.
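The "write it out and look at it" sanity check mentioned above can be as simple as this sketch (using System.Text.Json and a made-up Book payload for illustration):

```csharp
using System;
using System.IO;
using System.Text.Json;

// Serialize, dump to a file, and verify the payload really contains the data
var book = new { Id = 1, Title = "Book 1" };
string json = JsonSerializer.Serialize(book);
File.WriteAllText("serialized.json", json);

// A serializer with "too good" numbers has often simply skipped fields
if (!json.Contains("Book 1"))
    throw new InvalidOperationException("Title was not serialized!");
```

Inspecting the file by eye once per serializer is cheap insurance against benchmarking a serializer that silently drops half the data.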

Show Me The Real Numbers

The big pictures above need some serious explanation. In my first article I showed that serialized data size and performance correlate. It is common wisdom that binary formats are superior to textual formats. As it turns out this is still somewhat true, but the battle tested binary protobuf-net serializer is beaten by 50% on .NET Core by Utf8Json, a Json serializer. There is one guy, Yoshifumi Kawai, who wrote MessagePackSharp, Utf8Json, ZeroFormatter and others, and who really knows how to write fast serializers. He is able to deliver best in class performance both for .NET Framework and .NET Core. I have chosen the fastest version tolerant Json serializer, Utf8Json, and normalized all values to it. Utf8Json is therefore 100% for serialize and deserialize. If something is below 100% it is faster than Utf8Json; if it is at 200% it is two times slower. Now the pictures above hopefully make more sense to you. Humans are notoriously bad at judging relative performance when large and small values sit next to each other; the graph should make that clear. The new Json serializer which comes with .NET Core 3 (NetCoreJsonSerializer) is faster, but not by a factor 2 as the post of Scott Hanselman suggests. It is about 30% faster in the micro benchmarks and ca. 15% faster in my serializer test suite, which de/serializes 1M larger objects.
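The normalization is plain division by the Utf8Json time; with made-up numbers:

```csharp
using System;

double utf8JsonS = 0.30;   // baseline: Utf8Json == 100 % by definition (hypothetical value)
double otherS    = 0.45;   // hypothetical slower serializer
double relative  = otherS / utf8JsonS * 100.0;  // ≈ 150 % => 1.5 times slower than Utf8Json

Console.WriteLine($"{relative:F0} %");  // prints "150 %"
```

Below 100 % means faster than the baseline; 200 % means twice as slow.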

Below is the same data presented as absolute times for .NET Core 3.0.0 when one million objects are de/serialized. I have grouped the serializers by their serialization format, which makes it easier to choose among similar breeds if you want to stay with a specific, e.g. binary or textual, protocol. If you want to profile on your own you can use the supplied SerializerTests\SerializerStack.stacktags file to visualize the performance data directly in CPU Sampling and Context Switch tracing.


Here is the same data with .NET Framework 4.8:


In terms of serialized data size, Bois_LZ4 is a special one because it is a compressing serializer with pretty good numbers. Since large parts of my BookShelf data are very similar it can compress the data quite well, which might play a role if network bandwidth is relevant in your scenario and you cannot use HTTP compression.

Measurement Errors

I have collected data over the months, also with .NET 4.7.2, where the numbers were much better. It looked like .NET Framework 4.8 (4.8.4010.0) had some regression in Xml handling which leads to slowdowns of up to 6 times, manifesting as large degradations in the DataContract and XmlSerializer based tests. You do not believe me? I have measured it! Unfortunately I had played around a lot with debugging .NET recently (https://aloiskraus.wordpress.com/2019/09/08/how-net-4-8-can-break-your-application/), where I added .ini files to all of my GAC assemblies to make it easy to debug with full locals. This works great, but what happens when NGen updates its NGen images? It honors the .ini settings, which tell it not to inline or optimize the code much. So no, this is not a regression in .NET 4.8; it was my own error from playing around too much with JIT compiler flags. Now I have proof that method inlining is indeed very beneficial, especially for Xml parsers with many small methods, where it speeds up parsing by up to a factor of 6. There is always something you can learn from an error.

What Is The Theoretical Limit?

To test how fast a serializer can get I have added NopSerializer, which tests only deserialization because data is much more often read than written, and deserialization is the more complex operation. Deserialization consists of two main stages:

  • Data Parsing
  • Object Creation

Because any serializer must parse Xml, Json or whatever its data format is, we can test the theoretical limit by skipping the parsing step and measuring only object creation. At its core the code gets the strings out of a pre-parsed UTF-8 encoded byte array:

                // This basically tests the allocation performance of .NET, which will put
                // Thread.Sleeps into the allocation calls when the allocation rate becomes too high.
                for (int i = 0; i < myCount; i++)
                {
                    books.Add(new Book()
                    {
                        Id = i + 1,
                        Title = Encoding.UTF8.GetString(myUtf8Data, (int)myStartIdxAndLength[i*2], (int)myStartIdxAndLength[i*2+1])
                    });
                }
For one million Book objects which consist only of an integer and a string of the form "Book dddd" the deserialize time is ca. 0.12 s, which results in a deserialize performance of ca. 90 MB/s. Not bad. If you look at the data charts above you will still find a faster one named ApexSerializer. Why is this one faster than the NopSerializer? Great question! When you parse a text file it is usually stored as ASCII or UTF-8 text. But .NET and Windows use UTF-16 as the internal string representation. This means that after reading from a text file the data needs to be inflated from ASCII/UTF-8 (8 bits per char) to UTF-16 (16 bits per char). This is not trivial and is done in .NET with Encoding.UTF8.GetString. The conversion overhead takes time, and it has become better with .NET Core 3, which is great because it is a very central method called by virtually all code that reads text files. When we use the fastest Json serializer, Utf8Json, which needs 0.3 s, we find that 60% of the time is still spent parsing Json. This overhead could be lessened by using SimdJsonSharp, which claims GB/s Json parsing speed using advanced bit fiddling magic and AVX128/256 SIMD instructions that can process up to 64 bytes within one or a few instruction cycles. So far the interop costs of the native version (SimdJsonSharpSerializerN) still outweigh the benefits by far. The managed version (SimdJsonSharpSerializer) does pretty well, but it is only a few percent faster than Utf8Json, which indicates that I am either using the library wrong or that the generated code still needs more fine tuning. But be warned: SimdJsonSharpSerializer/N is NOT a serializer for generic objects. SimdJsonSharp gives you only the Json parser; you need to write the object mapping logic yourself if you want to go this route.
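The UTF-8 to UTF-16 inflation mentioned above is easy to see in isolation:

```csharp
using System;
using System.Text;

byte[] utf8 = Encoding.UTF8.GetBytes("Book 0001"); // 9 bytes as stored in a file
string s    = Encoding.UTF8.GetString(utf8);       // inflated to UTF-16: 2 bytes per char

// prints "9 UTF-8 bytes -> 18 bytes of string payload"
Console.WriteLine($"{utf8.Length} UTF-8 bytes -> {s.Length * 2} bytes of string payload");
```

Every deserialized string pays this doubling, which is why the GetString conversion shows up so prominently in the NopSerializer profile.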

Which One Is Fastest?


When raw speed is what you want and you can accept keeping the serialized object layout fixed (no versioning!), then Apex is worth a look. In the first table I added a column noting whether the serializer supports versioning in one form or another. If there is a No, expect crashes like ExecutionEngineException, StackOverflowException, AccessViolationException and other non-recoverable exceptions if anything in your execution environment changes. If speed is what you want, speed is what you get, by memcopying the data to the target location.

GroBuf is also worth a look, but it comes with a significantly bigger serialized binary blob. Only slightly slower is MessagePackSharp, which is written by the same guy as Utf8Json and uses MessagePack, an open source binary encoding scheme.


When the data format is Json, then Utf8Json is the fastest one, followed by Jil which is only 4% slower in this macro benchmark. If you measure the overhead for small objects with Benchmark.NET, then Utf8Json is about 40% faster, which is not reflected by this macro benchmark. Benchmark.NET tries to warm up the CPU caches and the branch predictor as much as possible, which results in many CPU caching effects. This is great for optimizing small tight loops, but if you process larger chunks of memory, like 35 MB of Json for 1M Book objects, you get different numbers because now a significant amount of memory bandwidth cannot be used to eagerly load data into the caches.

The Fastest Is Best Right?

Speed is great until you miss an essential feature of one of the tested serializers. If you are trying to replace something slow but very generic like BinaryFormatter, which can serialize virtually anything, you do not have many options, because most other serializers out there will not be able to serialize e.g. delegates or classes with interfaces that have a different implementation behind each instance. At that point you are, for practical reasons, bound to BinaryFormatter. As the next best option you can try SlimSerializer, which also tries to be pretty generic at the cost of versioning support.

Things are easier if you start from DataContractSerializer. You can get pretty far with ProtoBuf, which allows an easy transition by supporting the same attributes as DataContractSerializer. But there are some rough edges around interface serialization: you can register only one implementing type per interface, though that should be OK for most cases.
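A sketch of what such a transition looks like: protobuf-net can work with the existing WCF attributes, provided each DataMember gets a stable Order, which protobuf-net uses as the field number (Book is an example type, not from the test suite):

```csharp
using System.Runtime.Serialization;

[DataContract]
public class Book
{
    // protobuf-net reuses the WCF attributes; the DataMember Order doubles
    // as the protobuf field number, so it must stay stable across versions
    [DataMember(Order = 1)] public int Id { get; set; }
    [DataMember(Order = 2)] public string Title { get; set; }
}
```

The same annotated type can then be fed to either DataContractSerializer or ProtoBuf.Serializer without further changes.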

If you are starting from scratch and want to use Json, go for Utf8Json, which is generic and very fast. Its startup costs are also lower than JIL's, because Utf8Json and MessagePackSharp (same author) seem to hit a sweet spot by having most code already in the main library and generating only a small amount of code around it. DataContractSerializer generates the de/serialize code on the fly only for the code paths you actually hit.

First Time Init Performance (Serialize One Object)

To make the tests comparable, a new process is used for each serializer to prevent issues like the first serializer paying for starting up the reflection infrastructure while the serializers called later benefit from the already warmed up reflection infrastructure. Inside a fresh process four different types are serialized and then the process exits. The measured time is not the executable runtime (.NET runtime startup is not measured) but how much time each serialize call needed. Below, the max (first run) serialize time for one small object is shown, along with the min time of any of the other three objects, to give you an idea how much time the serializer needs for the first object and how much performance you can expect when you add new types to the mix.
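The idea behind the first-call measurement, as a sketch with System.Text.Json (the actual suite measures each serializer the same way; the anonymous Book payload is illustrative):

```csharp
using System;
using System.Diagnostics;
using System.Text.Json;

var sw = Stopwatch.StartNew();
string first = JsonSerializer.Serialize(new { Id = 1, Title = "Book 1" });
sw.Stop();
TimeSpan firstCall = sw.Elapsed;  // includes reflection/converter setup: the "startup" cost

sw.Restart();
string warm = JsonSerializer.Serialize(new { Id = 2, Title = "Book 2" });
sw.Stop();
TimeSpan warmCall = sw.Elapsed;   // what you pay once the type metadata is cached

Console.WriteLine($"first: {firstCall.TotalMilliseconds:F2} ms, warm: {warmCall.TotalMilliseconds:F3} ms");
```

The first call is typically orders of magnitude slower, which is exactly the effect the per-process test isolates.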

You can also run this manually via RunAll.cmd, which runs all tests, including the startup timing tests, and writes the results as csv files to the output folder. Or if you want to go into details use the command line:

D:\Source\git\SerializerTests\bin\Release\net472>SerializerTests.exe -test firstcall
Serializer      Objects "Time to serialize in s"        "Size in bytes" FileVersion     Framework       ProjectHome     DataFormat      FormatDetails   Supports Versioning
ApexSerializer  1       0.1803  84 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ApexSerializer  1       0.0331  86 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ApexSerializer  1       0.0344  86 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ApexSerializer  1       0.1089  148 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ServiceStack    1       0.0200  69 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes
ServiceStack    1       0.0020  70 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes
ServiceStack    1       0.0020  70 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes
ServiceStack    1       0.0091  117 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes


.NET Framework Everything NGenned


.NET Framework Only Default Assemblies NGenned


.NET Core 3 – Defaults


The pictures show that JIT compilation takes a noticeable amount of time, which is usually "fixed" with NGen on .NET Framework or Crossgen on .NET Core. ApexSerializer in particular needs 500 ms to start on .NET Core 3, or even 1 s on .NET Framework, if the assembly and its dependencies are not precompiled. But if you NGen, the times are 10 times better for Apex, which is great. If you are trying to choose the best serializer for reading your configuration values during application startup, this metric becomes important, because the startup costs of the serializer might be higher than the de/serialize times of your data. Performance is always about making the right tradeoffs at the right places.


Although I have tried my best to get the data right: do not believe me. Measure for yourself, in your environment, with data types representative of your code base. My testing framework can help here by being easily extendable: you can plug in your own data types and let the serializers run. There can be rough spots in how some of the serializers treat DateTime, TimeSpan, float, double, decimal or other data types. Do your classes have public default ctors? Not all serializers support private ctors, or you may need to add some attribute to your class to make it work. These are things you still need to test. If you find some oddity please leave a comment so we do not fall into the same trap as others. Collecting the data, and verifying with ETW tracing many times that I really measure the right thing, was already pretty time consuming. Working around all the peculiarities of the serializers is left as an exercise to you.

After doing much performance work during my professional life I think I have understood why performance is so hard:

Number of ways to make something slow


Number of ways to make it fastest


When in doubt, infinity will win. But even when you have created the fastest possible solution it will not stay the fastest, because the load profile will change or your application will run on new hardware with new capabilities. That makes performance a moving target. The time dependency of O(1) makes sure that I am never running out of work 😉.


How .NET 4.8 Can Break Your Application

A colleague called me in to look at a broken .NET application after installing a preview of an upcoming Windows KB. The application was failing to start every time. Those are the easy issues, because the problem is easy and fast to reproduce. Since it was happening on a VM with no Visual Studio installed, I used the best and most underestimated tool named dnSpy. I learned about this tool by reading the book Pro .NET Memory Management by Konrad Kokosa, which is not only an in-depth explanation of how the GC works but also covers many not so well known tools to drill into .NET on Windows and Linux. The most useful sections are in my opinion the last ones, which cover in-depth usage of Span and System.IO.Pipelines. The GC is a fascinating piece of software, but in practice you are fastest if you do not need to rely on it.

dnSpy is an xcopy-deployable Visual Studio debugger clone which automatically decompiles managed code on the fly, where you can set breakpoints and do line level debugging without having the actual source code. If you ever have to do production debugging you need to have this tool in your toolbox. In many ways dnSpy is better than Windbg, because until today Windbg has no way to debug .NET code with a source code window, which is in my opinion a shame.

One really cool option of dnSpy is to edit the decompiled source code and modify a running binary. Why am I excited about that? Will strong name verification not prevent loading such a tampered assembly? First of all: no. You can recompile a strong named assembly and it will load normally, since .NET 3.5 SP1:

Bypass signature verification of trusted assemblies

Starting with the .NET Framework 3.5 Service Pack 1, strong-name signatures are not validated when an assembly is loaded into a full-trust application domain, …

Not many people are aware of this. To repeat: strong naming has not given you any security for over 10 years.

This makes it easy to force a highly sporadic exception (e.g. connection loss, disk full, …) on production machines by modifying the original method where the exception is thrown. Suppose you have a small application which for some reason ends up in the general exception handler, where the error handling does bad things, with code like this:

        static void Main(string[] args)
        {
            try
            {
                new FileStream(@"C:\temp\NotexistingFile.txt", FileMode.Open);
                Console.WriteLine("Got Here");
            }
            catch (FileNotFoundException ex)
            {
                Console.WriteLine("File was not found.");
            }
            catch (Exception ex)
            {
                Console.WriteLine("Fatal exception encountered!");
            }
        }

If you start the application from dnSpy it skips using native images, which gives you, even for release builds, all local variables with full fidelity. When debugging the same application in VS 2019, even with Source Server support, it often cannot locate the .NET Framework sources, and you see no local variables and no C# code:


That is not nice. What is the reason for that? First, Visual Studio uses native images if present. If these were compiled with no debug settings, the local variables are optimized away. Even if you check


in Debugger – Options it will not help, because that option only affects JITed code like your application, not already loaded NGenned images. But there is a way out:

1. Set the environment variable COMPLUS_ZAPDISABLE=1  or the registry key HKLM\SOFTWARE\Microsoft\.NETFramework\ZapDisable  and launch from that shell either VS or your application to which you attach later. See here for more information.

2. Create a file C:\Windows\Microsoft.NET\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.ini with the content

[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

Point 1 will disable loading of native images. If you set this system wide your applications will start much slower, but all of your code will now be JITed. Point 2 disables JIT optimizations for a given dll, where the ini file for an assembly xxxx.dll must be named xxxx.ini, NOT xxxx.dll.ini or xxxx.ini.dll. You need to do this for each assembly for which you want to disable JIT optimizations. The post of Scott Hanselman gives misleading advice here: it leaves you with the impression that you need to create that ini file only for your executable, which is not the case. To make things easier I have wrapped that stuff into a PowerShell script which allows me to disable the use of native images system wide and to generate the ini files in the GAC and additionally in my application directory.

PS D:\Utils\EnableLocalVars.ps1 -enable 1 -dir "C:\Program Files\Test" -gac -ngen 0
Did set NGen Flag to 0. NGen Images are disabled now. This will impact application startup!
Adding 1518 files from GAC
Adding 68 files from GAC64
Adding 2 files from C:\Program Files\Test
Skipped 8 files because ini file already exists
Created 1580 ini files.

That script makes it easy to enable/disable on an installed machine. I have put a link to my OneDrive Folder here.

EnableLocalVars.ps1 -enable 0/1 -dir xxxx [-recurse] or [-gac] [-Ngen 0/1]
 by Alois Kraus 2018-2019
 Use this script when you want to attach to a running process or you want to get memory dumps of crashed processes of release builds with all local variables not optimized away.
 This will disable JIT optimizations to enable viewing local variables of methods in a debugger which otherwise would be optimized away.
  -enable 0/1         Enable local variables to view in debugger
                      If 1 enable local variables. The .ini files are created.
                      If 0 disable local variables. The .ini files are removed.
  -dir directory      Select directory where all dll/exe files get a .ini file
  -recurse            If set it will search recursively for all dlls below that path
  -gac                Make all GAC dlls debuggable
  -NGen 0/1           Disable/enable usage of NGen binaries system wide by setting the registry key HKLM\SOFTWARE\Microsoft\.NETFramework\ZapDisable
                      1 Enable NGen Images  == ZapDisable=0
                      0 Disable NGen Images == ZapDisable=1
                      You need to enable this if you want to view variables of .NET Framework code or if your application was NGenned
You need to restart the applications after executing this command for the changes to take effect.
 Enable locals for all GAC and  application dlls on an installed and NGenned machine
   EnableLocalVars.ps1 -enable 1 -gac -dir "c:\program files\myApp\bin" -ngen 0
 Reset settings
   EnableLocalVars.ps1 -enable 0 -gac -dir "c:\program files\myApp\bin" -ngen 1
 Enable NGen Images again
  EnableLocalVars.ps1  -ngen 1
WARNING: NGen images are disabled. Do not forget to enable them (EnableLocalVars.ps1 -Ngen 1) when you are done or all .NET applications will start ca. x2 times slower!

The script comes with extensive help, so it should be easy enough to perform this step automatically before taking a memory dump. This will make memory dump debugging a whole lot easier. Usually I make the error configurable, e.g. by checking for the existence of a file, so I can force the error on demand when I want to inject it into .NET Framework code like this:


This trick helped me a lot to force the error only once the application had reached its initial non-faulted state. Now I can trigger the issue by creating a file from the command line:

 C>echo Hi > c:\temp\ForceError.txt

This will trigger the exception in the patched file:

if (File.Exists("C:\\Temp\\ForceError.txt"))
    throw new Exception("This one is rare!");

When you try to save the changed mscorlib.dll you get an access denied error because the file is in use


But since we are pros we rename the running mscorlib.dll without fear,


because we know that Windows always opens dlls with the FILE_SHARE_DELETE sharing permission, which gives us the right to rename a running dll at any time. We can now save the patched mscorlib.dll without any issue:


When we run the modified mscorlib.dll we indeed get into the impossible error region once the trigger is activated:


That is a great way to test how robust your application really is if you suspect that the error handling is wonky. But I digress. Now back to the .NET 4.8 issue.

When I debugged it with Windbg the error occurred, but not with dnSpy, which left me puzzled. If you have followed this far you can probably guess that the issue only surfaces when native images are loaded into the process. That was the reason why dnSpy was working perfectly. Enabling local variables and NGenning the binaries again helped to find out what was happening. The exact details are not important; it suffices to say that the application used an old dependency injection framework named ObjectBuilder, which was part of the Enterprise Library many years ago. To select the right ctor, the default policy was to always use the first one returned by a call to Type.GetConstructors:

// Microsoft.Practices.ObjectBuilder.DefaultCreationPolicy
public ConstructorInfo SelectConstructor(IBuilderContext context, Type typeToBuild, string idToBuild)
{
    ConstructorInfo[] constructors = typeToBuild.GetConstructors();
    if (constructors.Length > 0)
    {
        return constructors[0];
    }
    return null;
}

That worked well for over a decade, but no more with .NET 4.8. If you read the API help for Type.GetConstructors,


The GetConstructors method does not return constructors in a particular order, such as declaration order. Your code must not depend on the order in which constructors are returned, because that order varies.

it becomes evident that this approach can cause problems. In this particular case the first ctor used to be the default ctor for a given type, but with .NET 4.8 and NGenning the application, the order of ctors changed. The first ctor was now ctor(string), which caused object creation to fail miserably. For this issue to strike, many things must come together: if you e.g. add another type to the assembly that contains the affected type, the issue goes away, because for the error to surface your IL metadata must be laid out at specific offsets. I have created a small reproducer which contains 10K classes with two ctors and shows the opposite issue: with JITed code I have one class with a mixed up ctor order. The source code is here.

Type DerivedSealed1808 has different ordered ctor!

The application checks whether the first ctor is the default ctor, which should nearly always be the case:

        private static void CheckCtorOrder()
        {
            foreach (var type in typeof(Base).Assembly.GetTypes())
            {
                if (type.Name.StartsWith("DerivedSealed"))
                {
                    ConstructorInfo[] infos = type.GetConstructors();
                    if (infos[0].GetParameters().Length > 0)
                    {
                        Console.WriteLine($"Type {type.Name} has different ordered ctor!");
                    }
                }
            }
        }
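An order-independent variant of that selection logic would ask for the parameterless constructor explicitly instead of trusting constructors[0] (a sketch, not the actual ObjectBuilder code):

```csharp
using System;
using System.Linq;
using System.Reflection;

static ConstructorInfo SelectConstructor(Type typeToBuild)
{
    // GetConstructor(Type.EmptyTypes) is order independent: it returns the public
    // parameterless ctor (or null) no matter how the metadata happens to be laid out
    return typeToBuild.GetConstructor(Type.EmptyTypes)
        ?? typeToBuild.GetConstructors()
                      .OrderBy(c => c.GetParameters().Length)
                      .FirstOrDefault();
}
```

This removes the dependency on metadata layout entirely, which is exactly what the MSDN note asks for.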

After NGenning the application, all classes have the same ctor order. This is pretty fragile; the advice on MSDN is there for a reason. I have checked other, more popular dependency injection frameworks, namely

  • Spring.NET
  • Windsor
  • AutoFac
  • Unity

which have all done their homework and contain no such problematic code. I might be the only one hitting that issue. But if you find strange application failures after a Windows KB, check whether the KB contains .NET 4.8, which could be the reason why your .NET application fails. .NET 4.8 has been available for 6 months and is now pushed out via Windows Update as a mandatory update. MS is careful not to roll this out globally at once: currently every 20th machine which requests all mandatory Windows updates gets it, but I am not sure when the switch will be flipped to 100%. This approach of MS is good, but if you wonder why only some machines of your department behave strangely, you should check whether they really got the same KBs installed, even if everyone installed all mandatory updates. If you have a larger department and maintain your own Windows Update service you should definitely keep an eye on strange .NET application behavior after you get the update which contains .NET 4.8. There are some issues with themed WPF context menus which can lead to misplaced, empty or non-themed context menus, but these should happen only sporadically. WPF also still has issues with grouped virtualizing StackPanels (since .NET 4.5, where the feature was introduced) which can cause indefinite layout calls that surface as UI hangs due to semi memory leaks. But that is part of another story.