Concurrent Dictionary Modification Pitfalls

Dictionary<TKey,TValue> is one of the most heavily used collection classes after List<T>. Less well known is how it behaves when you accidentally modify it concurrently.

What is the output of this simple program under .NET Core 3.1, .NET 5.0 and .NET 4.8?

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace ConcurrentDictionary
{
    class Program
    {
        static Dictionary<string, int> _dict = new();

        static void Modify()
        {
            _dict.Clear();
            for (int i = 0; i < 10; i++)
            {
                string key = $"Key {i}";
                _dict.ContainsKey(key);
                _dict[key] = 1;
            }
        }

        static void Main(string[] args)
        {
            for(int k=0; k < 1_000_000; k++)
            {
                if (k % 100 == 0)
                {
                    Console.WriteLine($"Modify Run {k}");
                }

                Parallel.Invoke(Modify, Modify, Modify);
            }
        }
    }
}

Under .NET Core 3.1 and .NET 5.0 it will produce

D:\>ConcurrentDictionary.exe
Modify Run 0
Modify Run 0
Modify Run 100
Unhandled exception. 
 ---> System.InvalidOperationException: Operations that change non-concurrent collections must have exclusive access. A concurrent update was performed on this collection and corrupted its state. The collection's state is no longer correct.
   at System.Collections.Generic.Dictionary`2.FindValue(TKey key)
   at ConcurrentDictionary.Program.<>c__DisplayClass0_0.<Main>g__Modify|0() in D:\Source\vc19\ConcurrentDictionary\Program.cs:line 26
   at System.Threading.Tasks.Task.InnerInvoke()
   at System.Threading.Tasks.Task.<>c.<.cctor>b__277_0(Object obj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, 

This is similar to what you get when you enumerate a List<T> while it is being modified:

Unhandled exception. 
 ---> System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
   at System.Collections.Generic.List`1.Enumerator.MoveNextRare()
   at System.Collections.Generic.List`1.Enumerator.MoveNext()
   at ConcurrentDictionary.Program.ModifyList() in D:\Source\vc19\ConcurrentDictionary\Program.cs:line 31

When you modify a Dictionary concurrently, or enumerate a List<T> while it is modified, you get an exception back. That is good and expected.
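
The ModifyList method from that stack trace is not shown above; a minimal sketch which provokes the same exception could look like this (the names are mine, not the original test code):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class ListModification
{
    static readonly List<int> _list = new List<int> { 1, 2, 3, 4, 5 };

    static void ModifyList()
    {
        // The List<T> enumerator checks the list version on every MoveNext call.
        // Adding items while enumerating (from this or another thread) invalidates it
        // and triggers the "Collection was modified" InvalidOperationException.
        foreach (int value in _list)
        {
            _list.Add(value);
        }
    }

    static void Main()
    {
        Parallel.Invoke(ModifyList, ModifyList);
    }
}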

Let's try the Dictionary sample with .NET 4.8 (or an earlier version):

D:>ConcurrentDictionary.exe
Modify Run 0
Modify Run 100

This time the program stops making progress and just hangs. What's the problem with .NET 4.x? When in doubt, take a memory dump with procdump:

C:\temp>procdump -ma 2216

ProcDump v10.0 - Sysinternals process dump utility
Copyright (C) 2009-2020 Mark Russinovich and Andrew Richards
Sysinternals - www.sysinternals.com

[22:32:25] Dump 1 initiated: C:\temp\ConcurrentDictionary.exe_210810_223225.dmp
[22:32:25] Dump 1 writing: Estimated dump file size is 68 MB.
[22:32:25] Dump 1 complete: 68 MB written in 0.3 seconds
[22:32:25] Dump count reached.

Then open the Dump with Windbg

c:\Debuggers\windbg.exe -z C:\temp\ConcurrentDictionary.exe_210810_223225.dmp

Load sos dll and then dump all managed thread stacks

Microsoft (R) Windows Debugger Version 10.0.17763.132 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.

Loading Dump File [C:\temp\ConcurrentDictionary.exe_210810_223225.dmp]
User Mini Dump File with Full Memory: Only application data is available

Comment: '
*** procdump  -ma 2216 
*** Manual dump'
Symbol search path is: srv*
Executable search path is: 
Windows 10 Version 19043 MP (8 procs) Free x64
Product: WinNt, suite: SingleUserTS
19041.1.amd64fre.vb_release.191206-1406
Machine Name:
Debug session time: Tue Aug 10 22:32:25.000 2021 (UTC + 2:00)
System Uptime: 0 days 0:22:07.133
Process Uptime: 0 days 0:00:45.000
.............................

0:000> .loadby sos clr

0:000> ~*e!ClrStack

OS Thread Id: 0x2370 (0)
Call Site
System.Collections.Generic.Dictionary`2[[System.__Canon, mscorlib],[System.Int32, mscorlib]].Insert(System.__Canon, Int32, Boolean)
ConcurrentDictionary.Program.Modify() [D:\Source\vc19\ConcurrentDictionary\Program.cs @ 18]
System.Threading.Tasks.Task.Execute()
System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
System.Threading.Tasks.Task.ExecuteEntry(Boolean)
System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
System.Threading.Tasks.Task.InternalRunSynchronously(System.Threading.Tasks.TaskScheduler, Boolean)
System.Threading.Tasks.Parallel.Invoke(System.Threading.Tasks.ParallelOptions, System.Action[])
ConcurrentDictionary.Program.Main(System.String[]) [D:\Source\vc19\ConcurrentDictionary\Program.cs @ 44]

OS Thread Id: 0x1e68 (6)
Call Site
System.Collections.Generic.Dictionary`2[[System.__Canon, mscorlib],[System.Int32, mscorlib]].FindEntry(System.__Canon)
System.Collections.Generic.Dictionary`2[[System.__Canon, mscorlib],[System.Int32, mscorlib]].ContainsKey(System.__Canon)
ConcurrentDictionary.Program.Modify() [D:\Source\vc19\ConcurrentDictionary\Program.cs @ 17]
System.Threading.Tasks.Task.Execute()
System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
System.Threading.Tasks.Task.ExecuteEntry(Boolean)
System.Threading.ThreadPoolWorkQueue.Dispatch()

0:000> !runaway
 User Mode Time
  Thread       Time
    0:2370     0 days 0:00:44.843
    6:1e68     0 days 0:00:44.796
    2:928      0 days 0:00:00.015
    5:233c     0 days 0:00:00.000
    4:3560     0 days 0:00:00.000
    3:92c      0 days 0:00:00.000
    1:1fb8     0 days 0:00:00.000
	
0:000> .time
Debug session time: Tue Aug 10 22:32:25.000 2021 (UTC + 2:00)
System Uptime: 0 days 0:22:07.133
Process Uptime: 0 days 0:00:45.000
  Kernel time: 0 days 0:00:00.000
  User time: 0 days 0:01:29.000

We find that threads 0 and 6 consume a lot of CPU (about 44 s each until the dump was taken). Both threads are inside Dictionary.FindEntry and Dictionary.Insert, which is strange. Why should it suddenly take so much more CPU to look up or insert things into a Dictionary?

In the meantime Visual Studio has become pretty good at loading memory dumps as well. The by far best view there is still the Parallel Stacks window.

But for now we turn back to Windbg, because it shows how much CPU each thread consumes: !runaway on a per-thread basis and .time for the complete process. That helps a lot to focus on the threads which are consuming the most CPU.

To drill deeper into the innards of the Dictionary we switch to the best Windbg extension for managed code, which is netext. Unzip the latest release into the Windbg\winext folder, or you will need to fully qualify the path to the x86/x64 version of netext when loading it.

0:000> .load netext
netext version 2.1.64.5000 Jun  2 2021
License and usage can be seen here: !whelp license
Check Latest version: !wupdate
For help, type !whelp (or in WinDBG run: '.browse !whelp')
Questions and Feedback: https://github.com/rodneyviana/netext/issues 
Copyright (c) 2014-2015 Rodney Viana (http://blogs.msdn.com/b/rodneyviana) 
Type: !windex -tree or ~*e!wstack to get started

0:006> ~6s
0:006> !wstack

Listing objects from: 0000007a30afe000 to 0000007a30b00000 from thread: 6 [1e68]

Address          Method Table    Heap Gen      Size Type
@r8=000001eb5a279ef8 00007ffa81d38538   0  0         92 System.Int32[]
@rcx=000001eb5a27ffe8 00007ffa81d4d880   0  0        432 System.Collections.Generic.Dictionary_Entry[]
@rdi=000001eb5a2d69e0 00007ffa81d359c0   0  0         36 System.String Key 1
@rdx=000001eb5a27ffe8 00007ffa81d4d880   0  0        432 System.Collections.Generic.Dictionary_Entry[]
@rsi=000001eb5a272d88 00007ffa81d46db0   0  0         80 System.Collections.Generic.Dictionary
000001eb5a2d69a8 00007ffa81d385a0   0  0         24 System.Int32 0n1
000001eb5a274d48 00007ffa81d3a440   0  0        216 System.Globalization.NumberFormatInfo
000001eb5a2d69c0 00007ffa81d359c0   0  0         28 System.String 1
000001eb5a28e350 00007ffa81d39e00   0  0         48 System.Text.StringBuilder
000001eb5a279a50 00007ffa81d359c0   0  0         40 System.String Key {0}
19 unique object(s) found in 1,716 bytes

0:006> !wdo 000001eb5a27ffe8
Address: 000001eb5a27ffe8
Method Table/Token: 00007ffa81d4d880/200000004 
Class Name: 
System.Collections.Generic.Dictionary_Entry<System.String,System.Int32>[]
Size : 432
Components: 17
[0]: -mt 00007FFA81D4D820 000001EB5A27FFF8
[1]: -mt 00007FFA81D4D820 000001EB5A280010
[2]: -mt 00007FFA81D4D820 000001EB5A280028
[3]: -mt 00007FFA81D4D820 000001EB5A280040
[4]: -mt 00007FFA81D4D820 000001EB5A280058
...
0:006> !wdo -mt 00007ffa81d4d820 000001eb5a280010
System.Collections.Generic.Dictionary_Entry<System.String,System.Int32>
00007ffa81d3a238 System.__Canon+0000    key 000001eb5a2d6980 Key 0
00007ffa81d385a0 System.Int32 +0008     hashCode 6278e7e7
00007ffa81d385a0 System.Int32 +000c     next 1 (0n1)
00007ffa81d385a0 System.Int32 +0010     value 1 (0n1)

The .NET Dictionary stores its data in buckets, where each bucket is a linked list of entries. The nodes are referenced by index via the next field. In the dysfunctional case the entry at array location 1 has next == 1, which causes the node walk to never end.

The decompiled code shows that the for loop executes i = this.entries[i].next as its increment expression. Since i never changes once the chain points back to itself, the loop never ends, which explains the high CPU on two threads.
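
In simplified form the bucket walk looks roughly like this (a sketch modeled on the .NET Framework 4.x reference source, not the verbatim decompiled code; collision counting and edge cases are omitted):

using System.Collections.Generic;

static class DictionaryModel
{
    // Simplified model of the internal entry layout (field names follow the reference source).
    struct Entry
    {
        public int hashCode;   // lower 31 bits of the key's hash code
        public int next;       // index of the next entry in the same bucket chain, -1 = end
        public string key;
        public int value;
    }

    static int FindEntry(int[] buckets, Entry[] entries, string key, IEqualityComparer<string> comparer)
    {
        int hashCode = comparer.GetHashCode(key) & 0x7FFFFFFF;

        // If a torn concurrent update makes entries[i].next point back to i
        // (as seen in the dump: entry 1 has next == 1), i never advances and
        // this loop spins forever at 100% CPU instead of returning.
        for (int i = buckets[hashCode % buckets.Length]; i >= 0; i = entries[i].next)
        {
            if (entries[i].hashCode == hashCode && comparer.Equals(entries[i].key, key))
                return i;
        }

        return -1;
    }
}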

How do these things usually crop up? When more and more code is made multithreaded and async/await is added to the mix, you will often find high CPU in FindEntry or Insert in a slow or stuck process. Now you know that this does not indicate a huge Dictionary which has simply become inefficient.

Actually you have found a Dictionary multithreading bug if you are running on .NET 4.x!
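
The usual fixes are to serialize access with a lock or to switch to ConcurrentDictionary. A minimal sketch of both variants of the Modify method from the sample (note that the Clear-and-refill pattern still needs care, see the comments):

using System.Collections.Concurrent;
using System.Collections.Generic;

static class SafeModifyVariants
{
    static readonly object _lock = new object();
    static readonly Dictionary<string, int> _dict = new Dictionary<string, int>();

    static void ModifyLocked()
    {
        lock (_lock)   // every reader and writer must take the same lock
        {
            _dict.Clear();
            for (int i = 0; i < 10; i++)
            {
                _dict[$"Key {i}"] = 1;
            }
        }
    }

    static readonly ConcurrentDictionary<string, int> _concurrentDict = new ConcurrentDictionary<string, int>();

    static void ModifyConcurrent()
    {
        _concurrentDict.Clear();              // each call is thread safe on its own ...
        for (int i = 0; i < 10; i++)
        {
            _concurrentDict[$"Key {i}"] = 1;  // ... but Clear + refill is still not one atomic unit
        }
    }
}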

You can create a stacktag for these methods and flag them as a potential concurrent Dictionary modification whenever they constantly consume one core or more, so that the pattern shows up directly in WPA:

The stacktag file for this is

<?xml version="1.0" encoding="utf-8"?>
<Tag Name="">
  <Tag Name=".NET">
    <Tag Name="Dictionary FindEntry\If one or more Core is consumed this indicates a concurrent dictionary modification which results in endless loops!">
      <Entrypoint Module="mscorlib.ni.dll" Method="System.Collections.Generic.Dictionary*.FindEntry*"/>
      <Entrypoint Module="mscorlib.dll" Method="System.Collections.Generic.Dictionary*::FindEntry*"/>
      <Entrypoint Module="System.Private.CoreLib.dll" Method="System.Collections.Generic.Dictionary*.FindEntry*"/>
    </Tag>
    <Tag Name="Dictionary Insert\If one or more Core is consumed this indicates a concurrent dictionary modification which results in endless loops!">
      <Entrypoint Module="mscorlib.ni.dll" Method="System.Collections.Generic.Dictionary*.Insert*"/>
      <Entrypoint Module="mscorlib.dll" Method="System.Collections.Generic.Dictionary*::Insert*"/>
      <Entrypoint Module="System.Private.CoreLib.dll" Method="System.Collections.Generic.Dictionary*.Insert*"/>
    </Tag>
  </Tag>
</Tag>

This is enough debugging voodoo for today. Drop me a note if you have been bitten by this issue and how you found it.

How To Not Satisfy Customers – Pixel 5 Android Crash

Recently I bought a Pixel 5 which worked perfectly for about a week. While I was driving on a motorway and trying (unsuccessfully) to connect to some Bluetooth device, the phone rebooted a few minutes later while I was browsing the web. No problem, I thought, after a quick boot it will be right back. But I was greeted with this error message (originally in German):

“Cannot load Android system. Your data may be corrupt. If you continue to get this message, you may need to perform a factory data reset and erase all user data stored on this device.”

image

Retry or “Delete Everything” (a synonym for factory reset) are the only options. That sounds like a bad trade. Several retries later it still refused to boot. In some forum I read that I could copy the OS image back via the debugger interface, and luckily I had enabled developer mode on my phone. I first downloaded the wrong version, because Google had not listed the OS image I had installed at that point in time (https://developers.google.com/android/ota#redfin) and did not mention that you cannot install an OS image which is older than the current security patch level. Eventually I got the right image via https://www.getdroidtips.com/february-2021-patch-pixel-device/ onto my phone:

image

Unfortunately that did not work either. There is a secret menu in Android which allows you to view e.g. some logs and offers some other options whose usefulness I cannot judge.

image

Here are some log messages which might or might not be useful to anyone else

Filesystem on /dev/block/by-name/metadata was not cleanly shutdown; state flags: 0x1, incompat feature flags: 0xc6

Unable to set property "ro.boottime.init.fsck.metadata" to "17": error code: 0x18


Unable to enable ext4 casefold on /dev/block/by-name/metadata because /system/bin/tune2fs is missing
google/reding/redfin
11/RQ2A.210305.006/719741
user/release-keys

Android System could not be loaded

At that point I gave up and opened a thread https://support.google.com/pixelphone/thread/105622202/pixel-5-boot-loop-how-to-recover-data?hl=en at the Pixel support forum. The problem was that I wanted to save my photos, because I do not trust Google (or any other cloud provider) not to scan over my entire image gallery. A manual USB sync does the trick and has served me well. But Android encrypts every sector on the phone, so there is zero chance to read anything unless you have a booted Android system which has unlocked the master key.

I know BitLocker from Windows, which allows me to export my recovery key so I can later access the encrypted data. Perhaps I have missed some tutorial to export my phone's encryption keys, but it turns out there is no way to export the key! I find it hard to believe that an OS used by billions of people has no data recovery workflow I could follow.

Google wants me to sync everything to the Cloud, or I will lose my data if the OS has a bad day on an otherwise fully functional device. If the phone had an external SD slot it would not have encrypted my data by default (I can enable that if I need to), which would be ok. But Pixel phones have no SD card slot. I feel thrown back to the 90s where the rule "save often, save early" was good advice to prevent data loss. 30 years and millions of engineering hours later, using an Android device feels like a retro game with a bad user experience if you do not want to share your data with Google.

Unfortunately that is the end of the story. I am pretty good with computers, and having studied particle physics has helped me to analyze and solve many difficult problems. But if a system is designed in a way that locks me out when an error occurs, then all bets are off. It is certainly possible to fix these shortcomings, but I guess this is pretty low on Google's business priority list, where the Cloud and AI are the future and it does not matter on which device your data lives. In theory this sounds great, but corner cases like what happens to unsynced photos/videos when such an error occurs, or when you do not trust the cloud provider, are just minor footnotes on the way to the Cloud.

After a factory reset the phone worked, and so far it has not happened again. I think the Cloud is a great thing, but as a customer I should have the choice not to share my data with Google without risking complete data loss every time the OS detects some inconsistency.

When Things Get Really Bad – NTFS File System Corruption War Story

The setup is a PC which runs software version 1.0. After an upgrade to v2.0 everything worked, but after 1-2 weeks the machine would no longer start the v2.0 software. When the issue was analyzed it turned out that an xml configuration file started with binary data, which prevented v2.0 from starting up.

Good file

<?xml version="1.0" encoding="utf-8"?>
  <import resource="../../Config1.xml"/>    
  <import resource="../../Config2.xml"/>
  <import resource="../../Config3.xml"/>
  <import resource="../../Config4.xml"/>

Bad file

DCMPA30€×ÕÞ±# EÔ°ƒ3¡;k”€
0…\
*d*“B  ...
    <control name="MissileControl" ref="IMissileControl"/>
    <control name="PositionControl" ref="IPositionControl"/>
    <control name="ReservationNotify" ref="IReservationNotify"/>
</config>

The binary data covered the first 4096 bytes of the xml file, which was certainly not good. This raises the question whether other files have been corrupted and whether we can trust that machine at all. That calls for a tool which scans the entire hard drive and calculates a checksum of every file on that machine, which can then be compared against the files of a good machine with an identical setup. This is not a new problem, so there should be some open source software out there. I found hashdeep, which was developed by Jesse Kornblum, an agent of some government agency. It was exactly what I needed. You can recursively generate a list of files with their hashes from a good machine and then compare these hashes on another machine (audit mode), printing the missing/new/changed files as diff output.

The audit file generation and file verification are straightforward:

hashdeep -r dir  > /tmp/auditfile            # Generate the audit file
hashdeep -a -k /tmp/auditfile -r dir         # test the audit

The current version seems to be 4.4 and can be downloaded from Github. If you are curious here are the supported options:

C:\> hashdeep64.exe [OPTION]... [FILES]...
-c <alg1,[alg2]> - Compute hashes only. Defaults are MD5 and SHA-256
                   legal values: md5,sha1,sha256,tiger,whirlpool,
-p <size> - piecewise mode. Files are broken into blocks for hashing
-r        - recursive mode. All subdirectories are traversed
-d        - output in DFXML (Digital Forensics XML)
-k <file> - add a file of known hashes
-a        - audit mode. Validates FILES against known hashes. Requires -k
-m        - matching mode. Requires -k
-x        - negative matching mode. Requires -k
-w        - in -m mode, displays which known file was matched
-M and -X act like -m and -x, but display hashes of matching files
-e        - compute estimated time remaining for each file
-s        - silent mode. Suppress all error messages
-b        - prints only the bare name of files; all path information is omitted
-l        - print relative paths for filenames
-i/-I     - only process files smaller than the given threshold
-o        - only process certain types of files. See README/manpage
-v        - verbose mode. Use again to be more verbose
-d        - output in DFXML; -W FILE - write to FILE.
-j <num>  - use num threads (default 8)

After checking all files I found no other corruptions. This is really strange. Why did the xml file not become corrupt during the upgrade but two weeks later? The config file is read only and should never be written by anybody. It could have been a simple glitch in some memory bank of the SSD, but since the exact same error happened on several machines we can rule out cosmic rays as the source of our problem. Can radiation be a problem at all? Intel has tried it out by putting an SSD into the beam of a particle accelerator. With enough radiation in the game anything can happen, but it will happen at random locations.

Could it be a hard power off while the system was writing data to the disk? Even if the OS does everything right, the SSD might not have enough power left to store the data. But that theory could also be ruled out, because the SSD of that machine has a built-in capacitor which ensures that all pending writes are safely written even during a hard power off event.

Once you have thought up enough nice theories you need to get more evidence (read: data). In this case checking the Windows event logs helped to find some clues. A first hint was that after the machine was bricked, the Application event log showed that Defrag had been run. Why on earth is Defrag running on an SSD? That should not be needed, since there is no need to realign data to optimize disk seek times. It turned out that Defrag was re-enabled by accident during the update to v2.0.

image

The default setting is to run it every day late in the evening. Windows is smart enough not to defragment an SSD. Instead it triggers an SSD Trim to let the SSD do its own reorganization. But that is not all. Although Defrag is not performing a typical defrag operation, it still defragments directories and other NTFS tables. NTFS becomes dead slow when you store ~100K files in a single directory, where you hit hard limits of the NTFS file system structures. These internal lists need some cleanup when many file add/remove operations happen in hot directories, which could otherwise lead after some time to a slow or broken file system. To prevent that, Defrag still does some housekeeping and moves things around even on SSDs.

Since not all machines are running until 9 PM, this also explains why the machines broke seemingly at random within two weeks after the software update. Once Defrag had run and the machine was shut down, it would refuse to work the next morning when it was turned on again, due to the corrupted xml configuration file.

That is a nice finding, but it still does not explain why Defrag would ruin our xml file. Let's check the System event log for interesting events. By interesting I mean messages like these:

System Event Log Messages

Id    Source                          Meaning                 Example Message
6005  EventLog                        System Start            The Event log service was started.
6006  EventLog                        System Stop             The Event log service was stopped.
6008  EventLog                        Hard Reboot             The previous system shutdown at xxxx AM on dd/dd/dddd was unexpected.
41    Microsoft-Windows-Kernel-Power  Hard Reboot             The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
109   Microsoft-Windows-Kernel-Power  Reboot initiated by OS  The kernel power manager has initiated a shutdown transition.
131   Ntfs                            NTFS Corruption         The file system structure on volume C: cannot be corrected.
130   Ntfs                            NTFS Repair (chkdsk)    The file system structure on volume C: has now been repaired.

Application Event Log Messages

Id     Source                    Meaning        Example Message
258    Microsoft-Windows-Defrag  Defrag Run     The storage optimizer successfully completed defragmentation on System (C:)
258    Microsoft-Windows-Defrag  Defrag Run     The storage optimizer successfully completed retrim on System (C:)
26228  Chkdsk                    chkdsk Repair  Chkdsk was executed in verify mode on a volume snapshot.

The next clue was that after Defrag had been run, the NTFS volume needed a repair.

Chkdsk        26228        None        "Chkdsk was executed in verify mode on a volume snapshot.

Checking file system on \Device\HarddiskVolume2

Volume label is System.

Examining 1 corruption record ...

Record 1 of 1: Bad index ""$I30"" in directory ""\Users\xxxx\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\System Tools <0x4,0x15437>"" ... The multi-sector header signature for VCN 0x0 of index $I30

in file 0x15437 is incorrect.

44 43 4d 01 50 41 33 30 ?? ?? ?? ?? ?? ?? ?? ?? DCM.PA30........

corruption found.

That is a helpful message! After Defrag has run, NTFS is broken. This NTFS corruption is then fixed by chkdsk, which complains about some start menu entry. Luckily chkdsk also prints the first bytes of the corrupted file, which start with DCM.PA30, the same bytes as in our corrupted xml file! I extracted the binary contents of the broken xml file and put them into an extra file. Then I ran hashdeep.exe over it to check whether this binary data is the content of some other file. Since I had only the first 4K of the data I was not hopeful to find anything, but lucky me, I did find a matching file. It looks like we mixed the contents of

c:\Windows\WinSxS\Manifests\amd64_windows-defender-management-onecore_…_none_….manifest

with our xml file and the start menu entry shortcut! The question remains what Windows Defender files have to do with our xml file and the start menu entry. Let's check the System event log for interesting events:

image

Now we can replay the events that led to the NTFS corruption:

  1. OS is installed
  2. Device Guard is enabled (not shown in the logs)
  3. Windows shuts itself down (“The kernel power manager has initiated a shutdown transition”)
  4. Sometimes the system hangs and the person installing the machine presses the power button to perform a hard reset
    • This caused the initial NTFS corruption
  5. After boot NTFS detects that something is corrupt which is repaired although one message tells us that it cannot be repaired
    1. Volume C … needs to be taken offline to perform a Full Chkdsk
    2. A corruption was discovered in the file system structure on volume
    3. The file system structure on volume C: cannot be corrected
    4. The file system structure on volume C: has now been repaired
  6. Now comes our part: after the automatic repair the software v1.0 is installed, which writes its data onto an ill-repaired NTFS volume and continues to run happily until …
  7. v2.0 of the software is installed, which re-enables Defrag by accident
  8. The software version 2.0 runs happily, it seems …
  9. But when the machine is still running at 9:00 PM, Defrag is executed. It defragments NTFS tables using referential information from other (ill-repaired) tables, which destroys the first 4K of our xml file and leaves us with the contents of some Windows Defender file.
  10. The next morning the machine would refuse to work
  11. Problems ….

That was an interesting issue and as you can guess this analysis was not done by me alone but several people contributed to connect all the loose ends. Thank You! You know who you are.

What can we learn from this?

  • Log messages are really important
  • Keeping all of them is essential to find the root cause
    • Overwriting the event log with myriads of messages is not the brightest idea
  • Luck: We got lucky that the System Event Log history included the initial OS installation

Hard Power Off

This is another case where transactions do not help to get an always consistent file system. What else can happen? Let's try it out by writing to some log files and then powering off the machine while it is writing data to disk.

I used this simple C# program to write to several files in an alternating manner. I am not sure if that is needed, but it seems to help to provoke the issue sooner:

using System.Collections.Generic;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        List<Stream> streams = new List<Stream>();
        const int Files = 5;

        for (int i = 0; i < Files; i++)
        {
            string fileName = $"C:\\temp\\TestCorruption{i}.txt";
            FileStream stream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite);
            streams.Add(stream);
        }

        while (true)
        {
            for (int i = 0; i < Files; i++)
            {
                // leaveOpen keeps the underlying FileStream usable for the next round
                using StreamWriter writer = new StreamWriter(streams[i], leaveOpen:true);
                writer.WriteLine("Hello this is string 1");
                writer.Flush(); // force a WriteFile call so no data stays buffered in our process
                writer.Write("Hello this is string 2");
                writer.Flush();
            }
        }
    }
}

We write the string “Hello this is string 1” to 5 log files, flush the data to force a WriteFile call to the OS so that our process has no pending buffered data, then write another string and force WriteFile again. If the process crashes we still get consistent data in our log files, because the OS already has the data and will write it to disk some time later. But what happens when we do a hard power off?

Let's look at one of our log files:

Hello this is string 2Hello this is string1
Hello this is string 2Hello this is string 1
Hello this is string 2Hello this is string 1
Hello this is string 2Hello this is string 1
Hello this is string 2Hello this is string 1
Hello this is string 2Hello this is string 1
Hello this is string 2Hello this is string 1
Hello this is string 2Hello this isNULNULNULNULNULNULNULNULNULNUL....

Who wrote the 0 bytes into our text file? We only wrote ASCII data! When WriteFile(“buffered data”) is executed and the OS wants to write that data to disk, it needs to

  1. Increase the file size in the directory
  2. Write the buffered data over the now nulled cluster

Steps 1 and 2 are related, but a hard power off can let step 1 happen while step 2 is lost. This can result in a text file which contains a larger amount of 0 bytes at the end for no apparent reason. See https://stackoverflow.com/questions/10913632/streamwriter-writing-nul-characters-at-the-end-of-the-file-when-system-shutdown where someone with more NTFS know-how than me answers that question:

That happens. When you append a file first its size is corrected in the directory (and that’s transactional in NTFS) and then the actual new data is written. There’s good chance that if you shut down the system you end up with a file appended with lots of null bytes because data writes are not transactional unlike metadata (file size) writes.

There’s no absolute solution to this problem.

(answered by sharptooth, Jun 6 ’12)

When a cluster is allocated it is nulled out to ensure that no previous data can leak through. While you append data to a file you can therefore lose the last write even though the file size already contains the updated value, which makes the 0 bytes visible in a text editor. That is a nasty issue if you expect that nobody will ever hard-reset a machine. I am not aware of a way around it, except that you need to be prepared that some text files might contain 0 bytes at the end after a hard power off.
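
A defensive reader can simply treat trailing 0 bytes as “no data”. A minimal sketch (using the file name from the sample above):

using System;
using System.IO;
using System.Text;

static class LogReader
{
    static string ReadLogIgnoringTrailingNuls(string fileName)
    {
        byte[] bytes = File.ReadAllBytes(fileName);

        // Find the last non-NUL byte; everything after it is the zeroed cluster tail
        // left over from a hard power off.
        int end = bytes.Length;
        while (end > 0 && bytes[end - 1] == 0)
        {
            end--;
        }

        return Encoding.UTF8.GetString(bytes, 0, end);
    }

    static void Main()
    {
        Console.WriteLine(ReadLogIgnoringTrailingNuls(@"C:\temp\TestCorruption0.txt"));
    }
}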

Was the file system corrupted this time again? What does the System event log tell us?

Critical    9/5/2020 5:40:52 PM    Microsoft-Windows-Kernel-Power    41    (63)    The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
Information    9/5/2020 5:40:52 PM    Microsoft-Windows-Ntfs    98    None    Volume \\?\Volume{12a882c8-5384-11e3-8255-806e6f6e6963} (\Device\HarddiskVolume2) is healthy.  No action is needed.

NTFS says everything is green. From a file system integrity perspective everything did work until power was cut. Your disk missed the last write, but hey: who guaranteed you that this is an atomic operation?

Conclusions

Even impossible errors have an explanation if you are willing to dig deep enough. What was the nastiest error for which you managed to find the root cause?

MS Performance HUD – Analyze EventLog Reading Performance In Realtime

This post explains why your home machine can feel fast while many corporate PCs show high CPU consumption at random times, especially for several minutes after boot, with no apparent explanation. Event Tracing for Windows (ETW) is a fantastic piece of technology. I love it because it enables me, and perhaps you, to tackle systemic issues regardless of whether the problem is my application, an Explorer plugin, a slow device driver or your antivirus scanner gone wild. Last time I complained about slow WMI queries (https://aloiskraus.wordpress.com/2019/11/17/the-wmi-query-from-hell/) which were induced by some system management software. It turns out that reading from the Windows event log is quite a complex undertaking once you try to understand the slow read performance. For starters I try to read all Application event log messages via a WMI query:

Slow Event Log Read Performance

SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Application'
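
For reference, the same query can be issued and timed from C# via System.Management. This is just a minimal stand-in, not my EventLogFlusherAndReader test tool:

using System;
using System.Diagnostics;
using System.Management;   // reference the System.Management assembly / NuGet package

class WmiEventLogTiming
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        int count = 0;

        using (var searcher = new ManagementObjectSearcher(
            "root\\cimv2",
            "SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Application'"))
        {
            // Enumerating the result pulls the data from the WMI event log provider.
            foreach (ManagementObject evt in searcher.Get())
            {
                count++;
            }
        }

        sw.Stop();
        Console.WriteLine($"Read {count} events in {sw.Elapsed.TotalSeconds:F1} s");
    }
}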

Running this query takes ca. 35 s, which is quite a while. To put the number into perspective, let's check the size of the event log: it is 25 MB and stores 60232 messages. This means we are reading a binary file at 0.7 MB/s on my i7-4770K CPU @ 3.50GHz. That is really slow! Let's record some data with ETW and analyze it with the new Performance HUD tool which was released on May 29, 2020. When I select the event log service, which shows high CPU, I can easily find why it is so busy:

image

We are reading 60K messages. While doing that, the event log service process accesses the registry over 800K times to find the currently configured language! The method EvtRpcMessageRender formats an event log message, which in turn calls FormatMessage, which does not reuse any cached thread culture but always reads the data from the registry. That means roughly 13 registry accesses for every read message, which is way too much. The Performance HUD shows performance insights in the lower grid and a tooltip explaining what you can do about it. The performance of FormatMessage was apparently less important than ensuring 100% correctness when the language is switched, because it must pick up the updated value instantaneously. The HUD tool makes ETW data much more accessible than it was before. If I want to know how much data the event log service has read, I just need to switch over to the File I/O tab, which shows me the 25 MB parsed Application event log. If needed, I can also get the call stack by hovering over the blue underlined text and pressing Ctrl+Click to see the corresponding stack traces.

image

Now we have proven that the event log service reads 25 MB of data, but to format the read data it needs 800K registry reads just to determine the current language, which in total costs over 35 s of execution time. At least that is what I can see with the HUD tool inside a single process. This tool is definitely not a v1.0; judging by the many helpful tooltips from performance experts it has been used within MS for quite some time. The HUD tool is a great addition to the toolbox and warrants more detailed posts in the future. If you have problems getting it running, drop me a note and I will see if I can help.

Performance HUD can be run in two modes

  • Realtime attached to a specific process
  • Offline while parsing data from an existing recorded ETL file
    • This requires CPU Profiling, Context Switch, File IO, Registry tracing with stacks enabled
    • For UI Hangs and User/GDI Handle leaks you need additionally to record the Microsoft-Windows-Win32k ETW Provider with 0x8000030000240000 flags + Stacks
      • The new GDI/User handle leak tracking flags require Win 10 1903 or later (Windows Server 2019 e.g. is based on Win 10 1809; no server OS or Win 10 LTSC release supports them yet …)
        • 0x0000010000000000  GdiHandleOperation
        • 0x0000020000000000  UserHandleOperation

Realtime ETW tracing is a great thing which makes it much easier for the average performance investigator to analyze ETW data right away. Especially UI thread hangs due to blocking calls get fantastic support, which makes finding your UI hangs a no-brainer.

 

System Wide Analysis

A first check is great to do in HUD, but a more detailed analysis is still best done in WPA (and on a 4K monitor). Let's see how the CPU consumption is spread across processes:

 

image

image

The WMI Query is executed from my test process named EventLogFlusherAndReader.exe (14796) which consumes 10s mostly reading registry keys for the COM class factory of some WMI COM objects. …

Next is svchost (1676), the event log service process, also with 10 s of CPU. As we have seen earlier, most of its time is again spent reading the registry, in our case 5 s out of 10 s. Below is a picture of how a WMI event log query is actually executed:

 

image

WMI can do a lot, but only with the help of many lackeys which work for us. We are reading a binary file and decoding its messages, but with WMI this causes significant CPU in four other system processes! Most time is spent inside the event log service, the WMI worker process, and our WMI reading process. I was wondering why the EventLogReader process consumes so much CPU when it should only receive the data from WMI. It turns out there is more behind it, because the COM objects which are used are not only for WMI but also for VirtualBox. It seems the VM host intercepts quite a few calls on my machine. I uninstalled VirtualBox (this was quite a journey because I had moved the C:\Windows\Installer folder to a different drive, which breaks many things. Note to self: never do this again!) and checked again. Sure enough, the additional registry queries were gone, but the total runtime did not change much. The reason is that the Performance HUD tool enables an expensive ETW logging event for every registry access, which exaggerates the cost of registry accesses quite a bit. With HUD profiling enabled I saw application slowdowns of up to 40%. You should therefore still measure without Performance HUD enabled to get a less distorted picture of what really is the bottleneck.

So what can we do about slow event log reading performance with WMI? As it turns out, not much. MS should really check whether it needs to access the language registry keys so often without any caching. In theory we can format the messages ourselves, but that is only part of the story, since an ETW event log message can reference up to three resource dlls.

image

 

  1. MessageFileName: the same as the old event log message table dlls; it contains the localized resource strings where the parameters are expanded at %1, %2, … just like for classic events
  2. ParameterMessageFile: contains constants such as Access Denied, Granted, … which are referenced with two %% followed by the resource number, e.g. %%8100
  3. ResourceFileName: defines the WEVT_TEMPLATE resource which contains event names, task names etc. inside its binary structure

What happens when I format messages the old way, by loading the resource dll and expanding the parameters myself? For a well-behaved ETW provider which defines only resource based strings in the event log like this

image

I can correctly decode the message with the old EventLog API of .NET:

image
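
In essence the code does something like the following (a simplified sketch of the legacy System.Diagnostics.EventLog usage, not the exact code from the screenshot):

using System;
using System.Diagnostics;

class OldApiReader
{
    static void Main()
    {
        using (var log = new EventLog("Application"))
        {
            foreach (EventLogEntry logMsg in log.Entries)
            {
                // Message triggers FormatMessage against the EventMessageFile resource dll
                // that is registered for the event source in the registry.
                Console.WriteLine($"{logMsg.TimeGenerated} {logMsg.Source}: {logMsg.Message}");
            }
        }
    }
}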

But when I remove the EventMessageFile registry value I get an unresolved message, just like for any “modern” ETW provider which is registered in the event log without being nice to legacy APIs: logMsg.Message errors out and gives me this message:

The description for Event ID ‘xxx’ in Source ‘xxxxx’ cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them.

Since Windows Vista, event log providers only need a ProviderGuid below the event log registry key, which makes it impossible to format resource based messages without the newer ETW event rendering APIs.

This is an issue if you are monitoring log messages issued by low level device drivers to report e.g. disk failures and other hardware issues. These important messages will slip through your monitoring system if you use the .NET System.Diagnostics.EventLog class. It is the fastest way to read messages, but unfortunately you will miss some of them. The situation can be fixed to some extent by creating an EventMessageFile registry value for the ETW provider below the corresponding event log source registry node. You need to look up the ProviderGuid below Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers\ to find the corresponding MessageFileName. That enables proper rendering of ETW based providers.
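
A sketch of that registry fix could look like this (the source and log names are made-up examples; run it elevated and treat it as an illustration of the registry layout, not a finished tool):

using System;
using Microsoft.Win32;

class EventMessageFileFixer
{
    static void Main()
    {
        const string logName = "Application";
        const string sourceName = "MyEtwProviderSource";   // hypothetical event source

        string sourceKeyPath = $@"SYSTEM\CurrentControlSet\Services\EventLog\{logName}\{sourceName}";

        using (RegistryKey sourceKey = Registry.LocalMachine.OpenSubKey(sourceKeyPath, writable: true))
        {
            // The modern registration only stores the ProviderGuid here (usually with curly braces) ...
            string providerGuid = (string)sourceKey.GetValue("ProviderGuid");

            // ... while the resource dll is registered below WINEVT\Publishers\{guid}.
            string publisherPath = $@"SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Publishers\{providerGuid}";
            using (RegistryKey publisherKey = Registry.LocalMachine.OpenSubKey(publisherPath))
            {
                string messageFile = (string)publisherKey.GetValue("MessageFileName");

                // Make the message dll visible to the legacy EventLog API.
                sourceKey.SetValue("EventMessageFile", messageFile, RegistryValueKind.ExpandString);
            }
        }
    }
}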

 

image

 

There is one caveat though: fixed string constants which are referenced via %% in the resource based messages and backed by the ParameterMessageFile resource dll will not be expanded and will stay as placeholders in your formatted message. So far I have mostly seen security related ETW providers use that more advanced string expansion machinery.

The best option is still to use the newer .NET API in the form of System.Diagnostics.Eventing.Reader.EventLogReader, which will properly format all messages. The only issue is that this API, just like WMI, calls into the event log service where the message formatting is done out of process, which is much slower. I might be obsessed with performance when I complain about newer but more powerful APIs. Let's check by reading and formatting 10K messages from the Application and Security event logs with the old System.Diagnostics.EventLog (readoldapi) and the newer System.Diagnostics.Eventing.Reader.EventLogReader (read) API.
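
A minimal EventLogReader read-and-format loop looks roughly like this (a simplified sketch; the numbers below come from my EventLogFlusherAndReader tool, not from this snippet):

using System;
using System.Diagnostics.Eventing.Reader;

class NewApiReader
{
    static void Main()
    {
        var query = new EventLogQuery("Application", PathType.LogName);

        using (var reader = new EventLogReader(query))
        {
            EventRecord record;
            while ((record = reader.ReadEvent()) != null)
            {
                using (record)
                {
                    // FormatDescription calls into the event log service to render
                    // the localized message text - this is the expensive part.
                    string message = record.FormatDescription();
                    Console.WriteLine($"{record.TimeCreated} {record.ProviderName}: {message}");
                }
            }
        }
    }
}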

C>EventLogFlusherAndReader.exe read -n 10000 -p 1 -l Application
Did read 10000 messages from log Application in 0.10s, 97,749 Messages/s, Format Repeat Count: 1
Total Formatting time 10.4 s.

C>EventLogFlusherAndReader.exe readoldapi -f -n 10000 -l Application
Read up to 10000 entries from Application log
Did read 10000 message table based event log messages from log Application in 1.45s, 6,899 Messages/s
Formatting time 0.4 s.
Reading and formatting for 10000 did take 1.85s, 5,410 Messages/s

 

C>EventLogFlusherAndReader.exe read -n 10000 -p 1 -l Security
Did read 10000 messages from log Security in 0.12s, 82,068 Messages/s, Format Repeat Count: 1
Total Formatting time 129.4 s.

C>EventLogFlusherAndReader.exe readoldapi -f -n 10000 -l Security
Read up to 10000 entries from Security log
Did read 10000 message table based event log messages from log Security in 1.37s, 7,277 Messages/s
Formatting time 0.6 s.
Reading and formatting for 10000 did take 1.98s, 5,041 Messages/s

 

With the “new” API we are over 20 times slower, because formatting is now done in the event log service which, like WMI, causes a lot of overhead. What stands out even more is that reading data from the Security event log is > 200 times slower compared to the “old” EventLog API. Since .NET is open source it is easy to debug this, and I have opened an issue https://github.com/dotnet/runtime/issues/34568 with a request to fix it. As usual there is little motivation to change such old legacy code, but I was invited to provide a PR, which I did. After a bit of back and forth due to test failures I landed this single line fix: https://github.com/dotnet/runtime/pull/36188/commits/dd0e212d75218f3b0eef3bc1ab812d7eb763279e which removes a superfluous check that did not serve any real purpose. The underlying issue was that for every message format call a check was done which formatted all task names, and the Security event log defines 60 task names, which explains its bad parsing performance. By removing a single line it got 26 times faster, which is not bad. This change will light up in .NET 5.0, so stay tuned. In the meantime you can recompile the event log reading dll from the .NET sources on your own and use it from a regular .NET application without issues. It is even easier to copy the source code from .NET Core into your own project. Here are the results:

 

C>EventLogFlusherAndReader.exe read -n 10000 -p 1 -l Application
Did read 10000 messages from log Application in 0.10s, 99,245 Messages/s, Format Repeat Count: 1
Total Formatting time 2.7 s.

 

C>EventLogFlusherAndReader.exe read -n 10000 -p 1 -l Security
Did read 10000 messages from log Security in 0.12s, 80,601 Messages/s, Format Repeat Count: 1
Total Formatting time 4.3 s.

 

That speeds up reading event logs with .NET between 3 and 30 times, depending on how many task names the log you are parsing defines. It is still ca. 7 times slower than the old API, which might be ok if you are not overdoing things.

Conclusions

The Security event log can be much bigger than 20 MB, because some corporate security departments follow the MS security guidance to audit everything, which requires a huge 256 MB or even 1 GB Security event log file. If it were much smaller it would overwrite the oldest events within hours, depending on how your audit policies are configured. To check your current audit policies you can run auditpol.exe. On my home machine everything looks good, but in a corporate setup things might be different. Especially Success audit messages can fill up your log files …

C>auditpol /get /category:*
System audit policy
Category/Subcategory                      Setting
System
  Security System Extension               No Auditing
  System Integrity                        Success and Failure
  IPsec Driver                            No Auditing
  Other System Events                     Success and Failure
  Security State Change                   Success
Logon/Logoff
  Logon                                   Success and Failure
  Logoff                                  Success
  Account Lockout                         Success
  IPsec Main Mode                         No Auditing
  IPsec Quick Mode                        No Auditing
  IPsec Extended Mode                     No Auditing
  Special Logon                           Success
  Other Logon/Logoff Events               No Auditing
  Network Policy Server                   Success and Failure
  User / Device Claims                    No Auditing
  Group Membership                        No Auditing
Object Access
  File System                             No Auditing
  Registry                                No Auditing
  Kernel Object                           No Auditing
  SAM                                     No Auditing
  Certification Services                  No Auditing
  Application Generated                   No Auditing
  Handle Manipulation                     No Auditing
  File Share                              No Auditing
  Filtering Platform Packet Drop          No Auditing
  Filtering Platform Connection           No Auditing
  Other Object Access Events              No Auditing
  Detailed File Share                     No Auditing
  Removable Storage                       No Auditing
  Central Policy Staging                  No Auditing
Privilege Use
  Non Sensitive Privilege Use             No Auditing
  Other Privilege Use Events              No Auditing
  Sensitive Privilege Use                 No Auditing
Detailed Tracking
  Process Creation                        No Auditing
  Process Termination                     No Auditing
  DPAPI Activity                          No Auditing
  RPC Events                              No Auditing
  Plug and Play Events                    No Auditing
  Token Right Adjusted Events             No Auditing
Policy Change
  Audit Policy Change                     Success
  Authentication Policy Change            Success
  Authorization Policy Change             No Auditing
  MPSSVC Rule-Level Policy Change         No Auditing
  Filtering Platform Policy Change        No Auditing
  Other Policy Change Events              No Auditing
Account Management
  Computer Account Management             No Auditing
  Security Group Management               Success
  Distribution Group Management           No Auditing
  Application Group Management            No Auditing
  Other Account Management Events         No Auditing
  User Account Management                 Success
DS Access
  Directory Service Access                No Auditing
  Directory Service Changes               No Auditing
  Directory Service Replication           No Auditing
  Detailed Directory Service Replication  No Auditing
Account Logon
  Kerberos Service Ticket Operations      No Auditing
  Other Account Logon Events              No Auditing
  Kerberos Authentication Service         No Auditing
  Credential Validation                   No Auditing

 

Reading a big event log with **** Some State Of The Art Security Solution **** to check for malicious activity might kill the interactive user experience, since you are now maxing out two cores (the reader process and the event log service formatting your messages) for many minutes. Even with the fixed .NET version, a 1 GB Security event log file would need ca. 10 minutes to read. The way .NET currently handles event log messages would make it scan through a 1 GB Security event log file for over 5 h. I hope such a security “solution” is not in production anywhere, but one never knows for sure whether some admin has created a nice PowerShell script which uses Get-WinEvent to read all messages with a non-deprecated, up-to-date API across his Active Directory forest. This post exists to make more people aware of the performance issues related to event log reading combined with tight security auditing policies. If you see minutes of high CPU in the svchost.exe instance (it is a generic service host) which hosts the event log service, you can be pretty sure that someone is reading large event log file(s) on your machine via PowerShell, WMI, or other means.

image

Most often this is some security related software which is not fully optimized yet. Let's not give up but improve the tools we use and check whether the policies which generate the most log messages are really needed. If not, you can reduce the size of the log and still keep the rollover time for the oldest log messages constant.

With Performance HUD it should be much easier to come to the same conclusions as I did. Try out the new tool and tell me what you have found! And let's hope that someone makes FormatMessage faster.

The WMI Query From Hell

This is about a case where I wondered why my Windows installation at home runs fast while the Windows machine inside a company network is so slow, especially during boot. “Nothing exists independently”, a verdict from the Dalai Lama, applies very much to performance engineering. I am not sure if he had that in mind, but be prepared for unexpected twists during the following posts.

First a little background about what you can expect in a corporate Windows network. Normally you have an Active Directory installation which takes care of user management. This involves rolling out central group policies defined by your administrators, deploying software centrally with e.g. MS System Center, and rolling out virus scanners and virus pattern updates. This happens in the background and you will not notice anything, except that sometimes stuff is downloaded and installed and you are asked to reboot because some newly installed software or Windows update needs one.

After a reboot I noticed that my machine was always working feverishly on at least 2 cores for the next 5 minutes. That is not good. Task Manager showed up to over 50% CPU utilization on my 4-core notebook, which means that I could barely work when something demanding was needed. The natural reaction of anyone who deals with performance issues on Windows is to turn on ETW tracing. Why? Because the recording tool WPR is part of every Windows 10 installation. If you want to record for longer than one minute I would set the sampling rate down to 10 ms to not collect too much information, like this:

C>wpr -start GeneralProfile -start CPU -start DiskIO -start FileIO -setprofint 100000

 

   Reproduce Issue

 

C>wpr -stop c:\temp\PerfIssue.etl

The CPU profile of WPR always records context switch events, which is interesting but produces a lot of data. I am still not sold on wpr as the successor to xperf, because you need to extract the built-in profiles from the exe, create your own derivative profile and then save it to a file, which is error prone and not easy to tailor. But for short recording durations it is okay to record pretty much everything, so I do not complain. To analyze the data you still need to download the Windows Performance Toolkit (WPT) from the Windows 10 SDK. Then you can apply the OverView.wpaProfile profile from ETWController which comes with many useful stacktags, located here: https://github.com/Alois-xx/etwcontroller/tree/master/ETWController/ETW (get the complete folder).

image

We see large CPU activity in WmiPrvSE.exe, svchost.exe and lsass.exe. What is this all about? On a live machine you can use Process Hacker (the current version is 3.0.2824; do not download the old version 2.xxx from 2016) to easily see which processes are consuming CPU. A very useful but little known feature is that you can filter by multiple processes separated with | in the input window on the toolbar. It filters on all visible columns.

image

When you hover over the processes you immediately see which service or WMI provider is hosted in this process which can be very helpful.

image

WmiPrvSE.exe is a WMI provider host process where the WMI provider objects live. Whenever you call into a WMI object the call ends up in one of several provider processes. The WMI Providers list tells us that this process contains the event log provider. Let's turn to the next most CPU consuming process.

image

This is the Event Log service process. Looks like the WMI provider calls into the Windows Event Log Service. Interesting.

image

lsass.exe is the process which manages all security related things inside Windows. This one tells you whether you get access denied or not.

image

The last one is a Windows service, namely the WMI service. That is a lot of different processes consuming CPU. But why? It looks like some WMI activity is dealing with the event log, but how can we find out which WMI query this could be?

The brute force approach is to check the memory contents of the WmiPrvSE.exe process and search for WMI queries which contain e.g. the select keyword. Process Hacker can actually scan for strings in any process. Let's do that as a quick investigation:

image

image

And behold we immediately get a nice list of current and past queries executed by WMI:

image

How can we verify which WMI query was so slow? We do not need to write a C++/C# application. Instead we use PowerShell, which can execute any WMI query with the Get-WmiObject -query “select …” cmdlet. Let's try to query all Windows event logs for errors (eventtype = 1) and measure the duration with Measure-Command:

PS C:\>Measure-Command -Expression { Get-WmiObject -query "select Eventcode,Eventtype,Logfile,Message,Sourcename,TimeGenerated,user from Win32_NTLogEvent where eventtype = 1"  }
                                                                                                                                                                                 
Minutes           : 4                                                                                                                                                            
Seconds           : 5                                                                                                                                                            

Looks like we have found it. What is so slow about this WMI query? The first observation is that the original query contains a time filter which tries to get events from the last 3 days. Normally you would query one specific event log, but this query reads all error (eventtype = 1) messages from all event logs on the system.
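
For comparison, a narrower query restricted to a single log and a time window could look like this from C# (an illustration only, not the query Lansweeper actually issues; WMI date literals use the DMTF format which ManagementDateTimeConverter produces):

using System;
using System.Management;

class NarrowEventQuery
{
    static void Main()
    {
        // Only errors from one log and only from the last three days.
        string since = ManagementDateTimeConverter.ToDmtfDateTime(DateTime.Now.AddDays(-3));

        string wql = "SELECT EventCode, EventType, Logfile, Message, SourceName, TimeGenerated, User " +
                     "FROM Win32_NTLogEvent " +
                     "WHERE Logfile = 'System' AND EventType = 1 AND TimeGenerated >= '" + since + "'";

        using (var searcher = new ManagementObjectSearcher("root\\cimv2", wql))
        {
            foreach (ManagementObject evt in searcher.Get())
            {
                Console.WriteLine($"{evt["TimeGenerated"]} {evt["SourceName"]}: {evt["EventCode"]}");
            }
        }
    }
}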

Up to now we have identified an expensive WMI query which reads all error events from the last 3 days. When you have a performance problem like this, you have two options:

  1. Find out who called the query and prevent it from running
  2. Identify the root cause of the bad WMI Event Log Query performance

Let's start with option 1. Since Windows 8.1 we can enable WMI activity tracing and query for the WMI execution events.

To enable WMI tracing execute this on an elevated command prompt:

                                                              
C>wevtutil sl Microsoft-Windows-WMI-Activity/Trace /e:true    

To query for WMI query execution events we need to filter for event 11, which also contains the initiating process id:

                                                                                                              
C>wevtutil qe Microsoft-Windows-WMI-Activity/Trace /q:"*[System[EventID=11]]" /f:text | findstr /i ExecQuery  
CorrelationId = {00000000-0000-0000-0000-000000000000}; GroupOperationId = 42920; OperationId = 42932; Operation = Start IWbemServices::ExecQuery - root\cimv2 : SELECT * FROM Win32_PhysicalMedia WHERE Tag="\\\\.\\PHYSICALDRIVE1"; ClientMachine = SKYRMION; User = NT AUTHORITY\SYSTEM; ClientProcessId = 6276; NamespaceName = 132184730473612738
CorrelationId = {00000000-0000-0000-0000-000000000000}; GroupOperationId = 42992; OperationId = 42993; Operation = Start IWbemServices::ExecQuery - root\cimv2 : SELECT AdapterRAM FROM Win32_VideoController WHERE AdapterRAM >= 2147483648; ClientMachine = SKYRMION; User = NT AUTHORITY\SYSTEM; ClientProcessId = 6276; NamespaceName = 132184730473612738
CorrelationId = {00000000-0000-0000-0000-000000000000}; GroupOperationId = 42996; OperationId = 42997; Operation = Start IWbemServices::ExecQuery - root\cimv2 : SELECT AdapterRAM FROM Win32_VideoController WHERE AdapterRAM >= 1073741824; ClientMachine = SKYRMION; User = NT AUTHORITY\SYSTEM; ClientProcessId = 6276; NamespaceName = 132184730473612738
CorrelationId = {00000000-0000-0000-0000-000000000000}; GroupOperationId = 43000; OperationId = 43001; Operation = Start IWbemServices::ExecQuery - root\cimv2 : SELECT AdapterRAM FROM Win32_VideoController WHERE AdapterRAM >= 4293918720; ClientMachine = SKYRMION; User = NT AUTHORITY\SYSTEM; ClientProcessId = 6276; NamespaceName = 132184730473612738
CorrelationId = {00000000-0000-0000-0000-000000000000}; GroupOperationId = 43046; OperationId = 43047; Operation = Start IWbemServices::ExecQuery - root\cimv2 : SELECT AdapterRAM FROM Win32_VideoController WHERE AdapterRAM >= 2147483648; ClientMachine = SKYRMION; User = NT AUTHORITY\SYSTEM; ClientProcessId = 6276; NamespaceName = 132184730473612738
CorrelationId = {00000000-0000-0000-0000-000000000000}; GroupOperationId = 43050; OperationId = 43051; Operation = Start IWbemServices::ExecQuery - root\cimv2 : SELECT AdapterRAM FROM Win32_VideoController WHERE AdapterRAM >= 1073741824; ClientMachine = SKYRMION; User = NT AUTHORITY\SYSTEM; ClientProcessId = 6276; NamespaceName = 132184730473612738
CorrelationId = {00000000-0000-0000-0000-000000000000}; GroupOperationId = 43054; OperationId = 43055; Operation = Start IWbemServices::ExecQuery - root\cimv2 : SELECT AdapterRAM FROM Win32_VideoController WHERE AdapterRAM >= 4293918720; ClientMachine = SKYRMION; User = NT AUTHORITY\SYSTEM; ClientProcessId = 6276; NamespaceName = 132184730473612738
CorrelationId = {C9FD2187-9C4E-0004-3FBF-FEC94E9CD501}; GroupOperationId = 43058; OperationId = 43059; Operation = Start IWbemServices::ExecQuery - root\cimv2 : select Eventcode,Eventtype,Logfile,Message,Sourcename,TimeGenerated,user from Win32_NTLogEvent where (type = 2 ) ; ClientMachine = SKYRMION; User = Skyrmion\Alois; ClientProcessId = 1112; NamespaceName = 132183970163500741

With the process ID it is trivial to identify which process initiated the WMI query. Who is it?

image

lspush.exe? Never heard of it. This one is not from Microsoft:

image

A search later we get to https://www.lansweeper.com/knowledgebase/how-to-scan-with-lspush/. Lansweeper is a network inventory tool which gathers client machine information on Windows in a generic way, e.g. by calling lspush at boot or login via a group policy. There is a free edition which I installed at home to reproduce the issue. And lspush showed the same behavior on my machine:

image

It is of course much easier to spot the issue when you see the Lansweeper UI, where the scan operation is even visible. In corporate networks such things normally run in the background and you need to resort to advanced tooling to drill deeper. I contacted Lansweeper to ask whether there is something I could do about it. The answer was that other customers have complained as well, but that this issue will only be tackled when enough customers complain. Currently there is NO way to turn this offensive event log query off, which is frustrating. I know Lansweeper has many customers, but not many seem to have been able to correlate lspush induced CPU activity in WMI related processes with Lansweeper yet. If you are a Lansweeper customer and you are using lspush and not the agent approach to scan your machines, you should definitely reach out to their support to raise the priority of the backlog item. Lansweeper is used by many big companies (https://www.lansweeper.com/) where definitely more than one will experience this issue. When I addressed this with our IT guys the first time they did not believe me, because the Lansweeper UI also shows how long the scan took:

image

According to the Lansweeper UI my scan time is in the sub second region. What this report fails to show are the scans which cannot be turned off and how long they took…

In the meantime, if you are a Lansweeper customer, measure how long lspush.exe executes in your setup and call their support to prevent wasted CPU cycles at scale and a bad user experience after logon. This issue is not happening everywhere, which brings me back to point 2: why is this event log query so slow? That is the topic for the next post.

.NET Serialization Benchmark 2019 Roundup

With .NET Core 3.0 and .NET 4.8 released it is time to check again which serializer performs best. The older overviews still contain some interesting insights because the focus tends to shift a bit from year to year:

Time moves on, some libraries are no longer maintained and no longer compatible with .NET Core 3. These were removed:

| Name | Reason |
|------|--------|
| ZeroFormatter | Project was abandoned by the maintainer in favor of MessagePack-CSharp (GitHub issue). Crashes on .NET Core 3 with System.BadImageFormatException: 'Bad IL format.' |
| Wire | Deserialization fails. See Issue1 and Issue2. Besides this there is no project activity since 2017. |
| Hyperion | Serialization does not work due to accessing internal fields of Exceptions. See Issue. |

 

Tested Serializers

| Serializer | DataFormat / Protocol | Version Tolerant | # Commits / Year Of Last Commit at 8/2019 |
|------------|----------------------|------------------|-------------------------------------------|
| Apex | Binary | No | 143 / 2019 |
| BinaryFormatter | Binary | Yes | ? |
| Bois | Binary | ? | 181 / 2019 |
| DataContract | Xml | Yes | ? |
| FastJson | Json | Yes | 138 / 2019 |
| FlatBuffer | Binary | Yes | 1701 / 2019 |
| GroBuf | Binary | Yes | 322 / 2019 |
| JIL | Json | Yes | 1272 / 2019 |
| Json.NET | Json | Yes | 1787 / 2019 |
| JsonSerializer (.NET Core 3.0) | Json | Yes | ? |
| MessagePackSharp | Binary, MessagePack | Yes | 715 / 2019 |
| MsgPack.Cli | Binary, MessagePack | Yes | 3296 / 2019 |
| Protobuf.NET | Binary, Protocol Buffer | Yes | 1018 / 2019 |
| ServiceStack | Json | Yes | 2526 / 2019 |
| SimdJsonSharp | Json | No | 74 / 2019 |
| SlimSerializer | Binary | No | ~100 / 2019 |
| UTF8Json | Json | Yes | 180 / 2019 |
| XmlSerializer | Xml | Yes | ? |

New Additions since 2018 are Apex, SimdJsonSharp, Utf8Json and JsonSerializer which comes with .NET Core 3.

Test Execution

You can clone my Serializer Testing Framework: https://github.com/Alois-xx/SerializerTests and compile it. Then execute RunAll.cmd from the SerializerTests directory which will execute the tests on .NET Framework and .NET Core 3.  I have put the output directory above the source directory (e.g. SerializerTests\..\SerializerTests_hh_mm_ss.ss\) to not check in measured data by accident.

Test Results

image

And another time for .NET 4.8, where some of the new .NET Core only serializers are missing:

image

The times are shown for .NET Core 3.0 and .NET 4.8 for 1 million objects, sorted by deserialize time. The tests were performed on my i7-4770K CPU @ 3.50GHz on Win 10.0.18362.356 (= 1903) with 16 GB RAM. Normally you read data more often than you write it, which is why I put the emphasis on the deserialize times. NopSerializer and SimdJsonSharpSerializer test only deserialize times, hence the 0 in the graph for the serialization times.

But You Are Doing It All Wrong Without Benchmark.NET

How accurate are the measured numbers? As usual there are many factors such as virus scanners, Windows Update downloads and other things which can change timings in the region of 10%. For more memory intensive (read: slower) serializers it can sometimes reach up to 30%, because page fault performance can differ quite a lot and has many influencing factors. Although the absolute timings may vary, the relative ordering remains pretty stable when you measure at one point in time. The measurement error during one test run is < 5%, which is good enough to decide which serializer is faster for your specific use case. Every test was executed 5 times and the median value was taken. I have experimented with 10 or more runs, but the resulting numbers did not stabilize: if the test suite runs for an extended period of time (hours) the machine may reach different states (like downloading and unpacking a Windows or other application update) which introduce systematic errors. For that reason I went back to rather short test timings to ensure that the relative performance of the tested serializers stays comparable.

And no, I have not used Benchmark.NET because it is only good for micro benchmarks. It goes to great lengths to get stable numbers, but the price for that is that a test which runs for one microsecond is executed for several minutes. If the same approach were used for the serializers the test suite would run for days and you still would not know whether the numbers are repeatable the next day, because these are not real micro benchmarks which depend only on the CPU. Besides that, Benchmark.NET makes it pretty hard to measure first time init effects such as the JIT timing overhead. I am aware that there is a repository of Benchmark.NET enabled performance tests named dotnet Performance on GitHub, where I have added the .NET Core 3 test case for IndexViewModel as mentioned in https://github.com/dotnet/performance/pull/456/commits, and with the Benchmark.NET tests I get only a perf increase of 30% with the new .NET Core 3 Json serializer. I find it strange that the current repo contains no tests for the new Json serializer of .NET Core 3. I have added the tests like any other for the new Json serializer of .NET Core to the test suite like this:

#if NETCOREAPP3_0
        [BenchmarkCategory(Categories.CoreFX)]
        [Benchmark(Description = "NetCoreJson")]
        public T NetCoreJson_()
        {
            memoryStream.Position = 0;
            return (T) System.Text.Json.JsonSerializer.DeserializeAsync(memoryStream, typeof(T)).Result;
        }
#endif

#if NETCOREAPP3_0
        [GlobalSetup(Target =nameof(NetCoreJson_))]
        public void SetupNetCoreJson()
        {
            memoryStream.Position = 0;
            System.Text.Json.JsonSerializer.SerializeAsync<T>(memoryStream, value).Wait();
        }
#endif

Json_FromStream<IndexViewModel>

|                     Method |      Mean |    Error |   StdDev |    Median |       Min |       Max |   Gen 0 |  Gen 1 | Gen 2 | Allocated |
|--------------------------- |----------:|---------:|---------:|----------:|----------:|----------:|--------:|-------:|------:|----------:|
|                        Jil |  64.25 us | 0.159 us | 0.124 us |  64.23 us |  64.07 us |  64.45 us |  6.4037 | 0.7684 |     - |   26.7 KB |
|                   JSON.NET |  81.11 us | 0.808 us | 0.756 us |  81.12 us |  79.98 us |  82.31 us |  8.3682 |      - |     - |  34.27 KB |
|                   Utf8Json |  38.81 us | 0.530 us | 0.496 us |  38.66 us |  38.29 us |  39.76 us |  5.1306 |      - |     - |  21.58 KB |
|               NetCoreJson  |  49.78 us | 0.692 us | 0.648 us |  49.51 us |  49.07 us |  51.00 us |  5.3066 | 0.5896 |     - |  21.91 KB |
| DataContractJsonSerializer | 343.39 us | 6.726 us | 6.907 us | 340.58 us | 335.44 us | 355.18 us | 20.3125 | 1.5625 |     - |  88.38 KB |

Then I executed MicroBenchmarks.exe with test case #92, which runs the deserialization tests on IndexViewModel. In my test suite I only test stream based serializers because I also want to support large datasets which need no transformation into a string before the serializer is willing to deserialize the data. You can extend Json_FromString as well, which leads to similar results:

Json_FromString<IndexViewModel>

|      Method |     Mean |    Error |   StdDev |   Median |      Min |      Max |  Gen 0 |  Gen 1 | Gen 2 | Allocated |
|------------ |---------:|---------:|---------:|---------:|---------:|---------:|-------:|-------:|------:|----------:|
| NetCoreJson | 51.52 us | 0.498 us | 0.441 us | 51.49 us | 50.78 us | 52.53 us | 8.2508 | 0.4125 |     - |   34.2 KB |
|         Jil | 42.83 us | 0.414 us | 0.388 us | 42.73 us | 42.27 us | 43.56 us | 5.6662 | 0.6868 |     - |  23.49 KB |
|    JSON.NET | 77.07 us | 0.580 us | 0.542 us | 77.22 us | 76.19 us | 78.12 us | 7.6523 | 0.9183 |     - |  31.31 KB |
|    Utf8Json | 42.64 us | 0.263 us | 0.246 us | 42.65 us | 42.27 us | 43.06 us | 8.1967 | 0.3415 |     - |  33.87 KB |

This emphasizes again that you should not blindly trust random performance numbers mentioned on GitHub. This one was created with MicroBenchmarks.exe --filter "MicroBenchmarks.Serializers.Json_FromString*" after adding NetCoreJson to the mix. There we find again a perf increase of 32% compared to Json.NET measured with Benchmark.NET. But that is nowhere near the factor two originally mentioned by Scott Hanselman. Perhaps at that point in time the serializer did not fully support all property types, which resulted in great numbers. To combat confirmation bias I write the serialized data to disk so I can check the file contents and verify that indeed all required data was written and that no private fields were omitted by this or that serializer, which is a common source of measurement errors when a serializer shows too good numbers.

Show Me The Real Numbers

The big pictures above need some serious explanation. In my first article I have shown that serialized data size and performance correlate. It is common wisdom that binary formats are superior to textual formats. As it turns out this is still somewhat true, but the battle tested binary protobuf-net serializer is beaten by 50% on .NET Core by Utf8Json, a Json serializer. There is one guy named Yoshifumi Kawai who wrote MessagePackSharp, Utf8Json, ZeroFormatter and others and who really knows how to write fast serializers. He is able to deliver best in class performance both for .NET Framework and .NET Core. I have chosen the fastest version tolerant Json serializer, Utf8Json, and normalized all values to it. Utf8Json is therefore 100% for serialize and deserialize. If something is below 100% it is faster than Utf8Json; if it is at 200% it is two times slower. For example, if Utf8Json deserializes the 1M objects in 0.3s and another serializer needs 0.6s, that serializer shows up as 200%. Now the pictures above hopefully make more sense to you. Humans are notoriously bad at judging relative performance when large and small values are sitting next to each other; the graph should make it clear. The new Json serializer which comes with .NET Core 3 (NetCoreJsonSerializer) is faster, but not by a factor two as the post of Scott Hanselman suggests. It is about 30% faster in the micro benchmarks and ca. 15% faster in my serializer test suite which de/serializes a larger workload of 1M objects.

Below is the same data presented as absolute times for .NET Core 3.0.0 when 1 million objects are de/serialized. I have grouped the serializers by their serialization format, which makes it easier to choose similar breeds if you want to stay with a specific, e.g. binary or textual, protocol. If you want to profile on your own you can use the supplied SerializerTests\SerializerStack.stacktags file to visualize the performance data directly in CPU Sampling and Context Switch Tracing.

image

Here is the same data with .NET Framework 4.8:

image

In terms of serialized data size Bois_LZ4 is a special one because it is a compressing serializer with pretty good numbers. Since large parts of my BookShelf data are very similar it can compress the data pretty well, which might play a role if network bandwidth is relevant in your scenario and you cannot use http compression.
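Bois_LZ4 does that compression internally. If you stay with another serializer and cannot use http compression, you can of course compress the payload yourself; a minimal sketch with GZipStream (just a generic illustration, not what Bois_LZ4 does):

    // Hedged sketch: gzip-compress an already serialized payload before sending it over the wire.
    using System.IO;
    using System.IO.Compression;

    static class CompressedPayload
    {
        public static byte[] Compress(byte[] serializedData)
        {
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
                {
                    gzip.Write(serializedData, 0, serializedData.Length);
                } // disposing the GZipStream flushes the gzip trailer into the MemoryStream
                return output.ToArray();
            }
        }
    }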

Measurement Errors

I have collected data over the months also with .NET 4.7.2 where the numbers were much better. It looked like .NET Framework 4.8 (4.8.4010.0) had some regression in Xml handling which leads to slowdowns of up to 6 times, manifesting as large degradations in DataContract and XmlSerializer based tests. You do not believe me? I have measured this! Unfortunately I have played around a lot with debugging .NET recently (https://aloiskraus.wordpress.com/2019/09/08/how-net-4-8-can-break-your-application/) where I added .ini files to all of my GAC assemblies to make it easy to debug with full locals. This works great, but what happens when NGen updates its NGen images? It will honor the .ini settings which tell it to not inline or optimize code too much. So no, this is not a regression in .NET 4.8; it was my own error caused by playing around too much with JIT compiler flags. Now I have proof that method inlining is indeed very beneficial, especially for Xml parsers with many small methods, where it speeds up parsing by up to a factor of 6. There is always something you can learn from an error.

What Is The Theoretical Limit?

To test how fast a serializer can get I have added NopSerializer, which tests only deserialization because data is read much more often than it is written and it is the more complex operation. Deserialization consists of two main stages:

  • Data Parsing
  • Object Creation

Because any serializer must parse Xml, Json or whatever its data format is, we can test the theoretical limit by skipping the parsing step and measuring only object creation. The code for this is at its core getting the string out of a preparsed UTF8 encoded byte array:

                // This basically tests the allocation performance of .NET, which will insert Thread.Sleeps
                // into the allocation calls when the allocation rate becomes too high.
                for(int i=0;i<myCount;i++)
                {
                    books.Add(new Book()
                    {
                        Id = i+1,
                        Title = Encoding.UTF8.GetString(myUtf8Data, (int) myStartIdxAndLength[i*2], (int) myStartIdxAndLength[i*2+1] )
                    });
                }

For 1 million book objects which consist only of an integer and a string of the form "Book dddd" the deserialize time is ca. 0.12s, which results in a deserialize performance of ca. 90 MB/s. Not bad. If you look at the data charts above you still find a faster one named ApexSerializer. Why is this one faster than the NopSerializer? Great question! When you parse a text file it is usually stored as ASCII or UTF-8 text. But .NET and Windows use UTF-16 as the internal string representation. This means that after reading from a text file the data needs to be inflated from ASCII/UTF-8 (8 bits per char) to UTF-16 (16 bits per char). This is complex and is done in .NET with Encoding.UTF8.GetString. The conversion overhead takes time and has become better with .NET Core 3, which is great because it is a very central method called by virtually all code which reads text files. When we use the fastest Json serializer, Utf8Json, which needs 0.3s, we find that 60% is still spent parsing Json. This overhead could be lessened by using SimdJsonSharp, which claims to give you GB/s Json parsing speed by using advanced bit fiddling magic and AVX128/256 SIMD instructions which can process up to 64 bytes within one or a few instruction cycles. So far the interop costs of the native version (SimdJsonSharpSerializerN) still outweigh the benefits by far. The managed version (SimdJsonSharpSerializer) does pretty well, but it is only a few % faster than Utf8Json, which indicates that I am either using the library wrong or that the generated code still needs more fine tuning. But be warned that SimdJsonSharpSerializer/N is NOT a serializer for generic objects. You get from SimdJsonSharp only the Json parser. You need to write the parsing logic yourself if you want to go this route.

Which One Is Fastest?

Binary

When raw speed is what you want and you accept keeping the serialized object layout fixed (no versioning!) then Apex is worth a look. In the first table I have added a column stating whether the serializer supports versioning in one form or another. If the answer is no, expect crashes like ExecutionEngineException, StackOverflowException, AccessViolationException and other non recoverable exceptions if anything in your execution environment changes. If speed is what you want, speed is what you get, by memcopying the data to the target location.

GroBuf is also worth a look, but it comes with a significantly bigger serialized binary blob. Only slightly slower is MessagePackSharp, which is written by the same guy as Utf8Json and uses MessagePack, an open source binary encoding scheme.

Text

When the data format is Json then Utf8Json is the fastest one, followed by Jil which is only 4% slower in this macro benchmark. If you measure the overhead for small objects with Benchmark.NET then Utf8Json is about 40% faster, which is not reflected by this macro benchmark. Benchmark.NET tries to warm up the CPU caches and the branch predictor as much as possible, which results in many CPU caching effects. That is great for optimizing small tight loops, but if you process larger chunks of memory, like 35 MB of Json for 1M Book objects, you get different numbers because now a significant amount of memory bandwidth cannot be used to eagerly load data into the caches.

The Fastest Is Best Right?

Speed is great until you miss an essential feature of one of the tested serializers. If you are trying to replace something slow but very generic like BinaryFormatter, which can serialize virtually anything, you do not have many options, because most other serializers out there will not be able to serialize e.g. delegates or classes with interfaces that have a different implementation behind each instance. At that point you are, for practical reasons, bound to BinaryFormatter. As the next best option you can try SlimSerializer, which also tries to be pretty generic at the cost of versioning support.

Things are easier if you start from DataContractSerializer. You can get pretty far with ProtoBuf, which allows an easy transition by supporting the same attributes as DataContractSerializer. But there are some rough edges around interface serialization: you can register only one implementing type for an interface, which should be ok for most cases.
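As an illustration of that transition, a minimal sketch of a DataContract annotated type fed to protobuf-net could look like this (the Book type is an assumption; protobuf-net reuses the DataMember Order values as field numbers, so they must be set):

    // Hedged sketch: reuse DataContract/DataMember attributes with protobuf-net.
    // Requires the protobuf-net NuGet package.
    using System.IO;
    using System.Runtime.Serialization;

    [DataContract]
    public class Book
    {
        [DataMember(Order = 1)] public int Id { get; set; }
        [DataMember(Order = 2)] public string Title { get; set; }
    }

    static class ProtoBufDemo
    {
        public static Book RoundTrip(Book book)
        {
            using (var ms = new MemoryStream())
            {
                ProtoBuf.Serializer.Serialize(ms, book);   // picks up the DataContract attributes
                ms.Position = 0;
                return ProtoBuf.Serializer.Deserialize<Book>(ms);
            }
        }
    }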

If you are starting from scratch and you want to use Json then go for Utf8Json, which is generic and very fast. The startup costs are also lower than with JIL, because Utf8Json and MessagePackSharp (same author) seem to hit a sweet spot by having most code already in the main library and generating only a small amount of code around it. DataContractSerializer generates the de/serialize code on the fly, only for the code paths you actually hit.
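For Utf8Json the stream based usage is a one liner per direction; a minimal sketch (the Book type mirrors the BookShelf test data and is an assumption):

    // Hedged sketch: plain stream based usage of Utf8Json (NuGet package Utf8Json).
    using System.IO;

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    static class Utf8JsonDemo
    {
        public static Book RoundTrip(Book book, string file)
        {
            using (var stream = new FileStream(file, FileMode.Create))
            {
                Utf8Json.JsonSerializer.Serialize(stream, book);
            }
            using (var stream = new FileStream(file, FileMode.Open))
            {
                return Utf8Json.JsonSerializer.Deserialize<Book>(stream);
            }
        }
    }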

First Time Init Performance (Serialize One Object)

To make the tests comparable a new process is used for each serializer. Otherwise the first serializer would pay for starting up the reflection infrastructure, which takes time, while the later called serializers would benefit from the already warmed up reflection infrastructure. Inside a fresh process four different types are serialized and then the process exits. The measured time is not the executable runtime (.NET Runtime startup is not measured) but how much time each serialize call needed. Below, the Max (first run) serialize time for one small object is shown together with the Min time of the other three objects, to give you an idea how much time the serializer needs for the first object and how much performance you can expect when you add new types to the mix.

You can execute this also manually by running RunAll.cmd, which runs all tests including the startup timing tests and writes the results as csv files to the output folder. Or, if you want to go into the details, use the command line:

D:\Source\git\SerializerTests\bin\Release\net472>SerializerTests.exe -test firstcall
Serializer      Objects "Time to serialize in s"        "Size in bytes" FileVersion     Framework       ProjectHome     DataFormat      FormatDetails   Supports Versioning
ApexSerializer  1       0.1803  84      1.3.0.0 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ApexSerializer  1       0.0331  86      1.3.0.0 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ApexSerializer  1       0.0344  86      1.3.0.0 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ApexSerializer  1       0.1089  148     1.3.0.0 .NET Framework 4.8.4010.0       https://github.com/dbolin/Apex.Serialization    Binary          No
ServiceStack    1       0.0200  69      5.5.0.0 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes
ServiceStack    1       0.0020  70      5.5.0.0 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes
ServiceStack    1       0.0020  70      5.5.0.0 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes
ServiceStack    1       0.0091  117     5.5.0.0 .NET Framework 4.8.4010.0       https://github.com/ServiceStack/ServiceStack.Text       Text    Json    Yes

 

.NET Framework Everything NGenned

image

.NET Framework Only Default Assemblies NGenned

image

.NET Core 3 – Defaults

image

The pictures show that JIT compilation takes a noticeable amount of time, which is usually "fixed" with NGen on .NET Framework or Crossgen on .NET Core. Especially ApexSerializer needs 500ms to start on .NET Core 3, or even 1s on .NET Framework, if the assembly and its dependencies are not precompiled. But if you NGen, the times are 10 times better for Apex, which is great. If you try to choose the best serializer for reading your configuration values during application startup this metric becomes important, because the startup costs of the serializer might be higher than the de/serialize times of your data. Performance is always about using the right tradeoffs at the right places.

Conclusions

Although I have tried my best to get the data right: do not believe me. Measure for yourself in your environment with data types representative of your code base. My testing framework can help here by being easily extendable. You can plug in your own data types and let the serializers run. There can be rough spots in how some of the serializers treat DateTime, TimeSpan, float, double, decimal or other data types. Do your classes have public default ctors? Not all serializers support private ctors, or you need to add some attribute to your class to make it work. These are still things you need to test. If you find some oddity please leave a comment so we do not fall into the same trap as others. Collecting the data and verifying many times with ETW tracing that I really measure the right thing was already pretty time consuming. Working around all the peculiarities of the serializers is left as an exercise to you.

After doing much performance work during my professional life I think I have understood why performance is so hard:

Number of ways to make something slow

image

Number of ways to make it fastest

image

When in doubt infinity will win. But even when you have created the fastest possible solution it will not stay the fastest, because the load profile will change or your application will run on new hardware with new capabilities. That makes performance a moving target. The time dependency of O(1) makes sure that I never run out of work ;-).

How .NET 4.8 Can Break Your Application

A colleague called me in to look at a broken .NET application after installing a preview of an upcoming Windows KB. The application failed to start up every time. Those are the easy issues because it is easy and fast to reproduce the problem. Since it was happening on a VM with no Visual Studio installed I used the best and most underestimated tool named dnSpy. I learned about this tool by reading the book Pro .NET Memory Management by Konrad Kokosa, which is not only an in-depth explanation of how the GC works but also covers many not so well known tools to drill into .NET for Windows and Linux. The most useful sections are in my opinion the last ones which cover in-depth usage of Span and System.IO.Pipelines. The GC is a fascinating piece of software, but in practice you are fastest if you do not need to rely on it. dnSpy is an xcopy deployable Visual Studio debugger clone which automatically decompiles managed code on the fly, where you can set breakpoints and do line level debugging without having the actual source code. If you ever have to do production debugging you need to have this tool in your toolbox. In many ways dnSpy is better than Windbg, because until today Windbg has no way to debug .NET code with a source code window, which is in my opinion a shame.

One really cool option of dnSpy is to edit the decompiled source code and modify a running binary. Why am I excited about that? Will strong name verification not prevent loading such a tampered assembly? First of all: no. You can recompile a strong named assembly and it will load normally; this has been the case since .NET 3.5 SP1:

Bypass signature verification of trusted assemblies

Starting with the .NET Framework 3.5 Service Pack 1, strong-name signatures are not validated when an assembly is loaded into a full-trust application domain,

Not many people are aware of this. To repeat: strong naming has not given you any security for over 10 years.

This makes it easy to force a highly sporadic exception (e.g. connection loss, disk full, …) on production machines by modifying the original method where the exception is thrown. Suppose you have e.g. a small application which for some reason ends up in the general exception handler, where the error handling does bad things, with some code like this:

        static void Main(string[] args)
        {
            try
            {
                new FileStream(@"C:\temp\NotexistingFile.txt", FileMode.Open);
                Console.WriteLine("Got Here");
            }
            catch(FileNotFoundException ex)
            {
                Console.WriteLine("File was not found.");
            }
            catch(Exception ex)
            {
                Console.WriteLine("Fatal exception encountered!");
            }
        }

image

If you start the application from dnSpy it will skip using native images, which gives you, even for release builds, all local variables with full fidelity. While debugging the same application in VS 2019, even with Source Server support, it often cannot locate the .NET Framework sources and you see no local variables and no C# code:

image

That is not nice. What is the reason for that? First Visual Studio uses Native Images if present. If these were compiled with no debug settings then the local variables will be optimized away. Even if you check

image

in Debugger – Options it will not help, because that setting only affects JITed code like your application but not loaded NGenned images. But there is a way out:

1. Set the environment variable COMPLUS_ZAPDISABLE=1 or the registry value HKLM\SOFTWARE\Microsoft\.NETFramework\ZapDisable=1 and launch either VS or your application (to which you attach later) from that shell. See here for more information.

2. Create a file in C:\Windows\Microsoft.NET\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.ini with the content

[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

Point 1 will disable loading of native images. If you set this system wide your application will start much slower, but all of your code will now be JITed. Point 2 will disable JIT optimizations for a given dll, where the ini file must be named after the assembly: for xxxx.dll it is xxxx.ini, but NOT xxxx.dll.ini or xxxx.ini.dll. You need to do this for each assembly for which you want to disable JIT optimizations. The post of Scott Hanselman gives misleading advice here: it leaves you with the impression that you need to create that ini file only for your executable. That is not the case. To make things easier I have wrapped that stuff into a PowerShell script which allows me to disable the use of native images system wide and generate the ini files in the GAC and additionally in my application directory.

PS D:\Utils\EnableLocalVars.ps1 -enable 1 -dir "C:\Program Files\Test" -gac -ngen 0
Did set NGen Flag to 0. NGen Images are disabled now. This will impact application startup!
Adding 1518 files from GAC
Adding 68 files from GAC64
Adding 2 files from C:\Program Files\Test
Skipped 8 files because ini file already exists
Created 1580 ini files.

That script makes it easy to enable/disable on an installed machine. I have put a link to my OneDrive Folder here.

.\EnableLocalVars.ps1 
EnableLocalVars.ps1 -enable 0/1 -dir xxxx [-recurse] or [-gac] [-Ngen 0/1]
 by Alois Kraus 2018-2019
 Use this script when you want to attach to a running process or you want to get memory dumps of crashed processes of release builds with all local variables not optimized away.
 This will disable JIT optimizations to enable viewing local variables of methods in a debugger which otherwise would be optimized away.
  -enable 0/1         Enable local variables to view in debugger
                      If 1 enable local variables. The .ini files are created.
                      If 0 disable local variables. The .ini files are removed.
  -dir directory      Select directory where all dll/exe files get a .ini file
  -recurse            If set it will search recursively for all dlls below that path
  -gac                Make all GAC dlls debuggable
  -NGen 0/1           Disable/enable usage of NGen binaries system wide by setting the registry key HKLM\SOFTWARE\Microsoft\.NETFramework\ZapDisable
                      1 Enable NGen Images  == ZapDisable=1
                      0 Disable NGen Images == ZapDisable=0
                      You need to enable this if you want to view variables of .NET Framework code or if your application was NGenned
You need to restart the applications after executing this command for the changes to take effect.
Examples
 Enable locals for all GAC and  application dlls on an installed and NGenned machine
   EnableLocalVars.ps1 -enable 1 -gac -dir "c:\program files\myApp\bin" -ngen 0
 Reset settings
   EnableLocalVars.ps1 -enable 0 -gac -dir "c:\program files\myApp\bin" -ngen 1
 Enable NGen Images again
  EnableLocalVars.ps1  -ngen 1
WARNING: NGen images are disabled. Do not forget to enable them (EnableLocalVars.ps1 -Ngen 1) when you are done or all .NET applications will start ca. x2 times slower!

It comes with extensive help so it should make it easy enough that you perform this step automatically before taking a memory dump. This will make memory dump debugging a whole lot easier. Usually I make the error configurable by e.g. checking for the existence of a file to force the error when I want to inject some error in .NET Framework code like this:

image

This trick helped me a lot to force the error only when the application has got into the initial non faulted state. Now I can trigger the issue by creating a file from the command line with

 C>echo Hi > c:\temp\ForceError.txt

This will trigger the exception in the patched file:

if (File.Exists("C:\\Temp\\ForceError.txt"))
{
    throw new Exception("This one is rare!");
}

When you try to save the changed mscorlib.dll you get an access denied error because the file is in use

image

But since we are pros we rename the running mscorlib.dll with no fear

image

because we know that Windows always opens dlls with the FILE_SHARE_DELETE sharing permission. This gives us the right to rename a running dll at any time. We can now save the patched mscorlib.dll without any issue:

image
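By the way, you can convince yourself that renaming an in-use dll works with a few lines of code; a small sketch (path and dll name are made up for illustration):

    // Hedged sketch: rename a dll while it is loaded into the current process.
    // The rename succeeds, while deleting the file would fail.
    using System;
    using System.IO;
    using System.Reflection;

    class RenameLoadedDll
    {
        static void Main()
        {
            string path = @"C:\temp\SomePlugin.dll";         // made up path for illustration
            Assembly asm = Assembly.LoadFrom(path);          // keeps the image mapped into the process
            File.Move(path, @"C:\temp\SomePlugin_old.dll");  // rename works although the dll is in use
            Console.WriteLine($"{asm.GetName().Name} is still loaded, file renamed.");
        }
    }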

When we run the modified mscorlib.dll we get indeed into the impossible error region once the trigger is activated:

image

That is a great way to test how robust your application really is if you suspect that the error handling is wonky. But I am digressing here. Now back to the .NET 4.8 issue.

When I debugged it with Windbg the error did occur, but not with dnSpy, which left me puzzled. If you have followed this far you can probably guess that this issue only surfaces when native images are loaded into the process. That was the reason why dnSpy was working perfectly. Enabling local variables and NGenning the binaries again helped to find out what was happening. The exact details are not important, but it suffices to say that the application used an old dependency injection framework named ObjectBuilder which was part of the Enterprise Library many years ago. To select the right ctor the default policy was to always use the first one returned by a call to Type.GetConstructors:

// Microsoft.Practices.ObjectBuilder.DefaultCreationPolicy
public ConstructorInfo SelectConstructor(IBuilderContext context, Type typeToBuild, string idToBuild)
{
    ConstructorInfo[] constructors = typeToBuild.GetConstructors();
    if (constructors.Length > 0)
    {
        return constructors[0];
    }
    return null;
}

That worked well for over a decade, but no more with .NET 4.8. If you read the API help for Type.GetConstructors

Remarks

The GetConstructors method does not return constructors in a particular order, such as declaration order. Your code must not depend on the order in which constructors are returned, because that order varies.

it becomes evident that this approach can cause problems. In this particular case the first ctor used to be the default ctor for a given type, but with .NET 4.8 and NGenning the application the order of the ctors changed. The first ctor was now ctor(string), which caused object creation to fail miserably. To hit this issue many things must come together. If you e.g. add another type to the specific assembly that contained the type, the issue goes away, because for the error to surface you need to craft your IL metadata at specific offsets. I have created a small reproducer which contains 10K classes with two ctors and shows the opposite issue: with JITed code I have one class which has a mixed ctor order. The source code is here.

D:\CtorOrderNgenNotNGen\bin\Debug>CtorOrderNgenNotNGen.exe
Type DerivedSealed1808 has different ordered ctor!

The application checks whether the first ctor is the default ctor, which should nearly always be the case:

        private static void CheckCtorOrder()
        {
            foreach(var type in typeof(Base).Assembly.GetTypes())
            {
                if( type.Name.StartsWith("DerivedSealed"))
                {
                    ConstructorInfo[] infos = type.GetConstructors();
                    if( infos[0].GetParameters().Length > 0 )
                    {
                        Console.WriteLine($"Type {type.Name} has different ordered ctor!");
                    }
                }
            }
        }

After NGenning the application all classes have the same ctor order. This is pretty fragile. The advice on MSDN is there for a reason. I have checked other, more popular dependency injection frameworks, namely

  • Spring.NET
  • Windsor
  • AutoFac
  • Unity

which all have done their homework and do not contain such problematic code. I might be the only one hitting that issue. But if you find strange application failures after a Windows KB, check whether the KB contains .NET 4.8, which could be the reason why your .NET application fails. .NET 4.8 has been available for 6 months and is now pushed out via Windows Update as a mandatory update. MS is careful not to roll this out globally at once. Currently every 20th machine which requests all mandatory Windows updates will get it, but I am not sure when the switch will be flipped to 100%. This approach of MS is good, but if you wonder why only some machines of your department behave strangely, you should check whether they really got the same KBs installed, even if everyone did install all mandatory updates. If you have a larger department and maintain your own Windows Update Service you should definitely keep an eye on strange .NET application behavior after you get the update which contains .NET 4.8. There are some issues with themed WPF context menus which can lead to misplaced, empty or non themed context menus, but these should happen only sporadically. WPF still has issues with grouped virtualizing StackPanels (since .NET 4.5 where this feature was introduced) which can cause indefinite layout calls that surface as UI hangs due to semi memory leaks. But that is part of another story.
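If you maintain similar reflection based creation code yourself, a more defensive choice than "take the first ctor returned" is cheap to have. A hedged sketch (not the ObjectBuilder code, just an illustration):

    // Hedged sketch: prefer the parameterless ctor explicitly instead of relying on the
    // unspecified ordering of Type.GetConstructors().
    using System;
    using System.Linq;
    using System.Reflection;

    static class CtorSelection
    {
        public static ConstructorInfo SelectCtor(Type typeToBuild)
        {
            // Returns the public parameterless ctor or null, independent of GetConstructors() order.
            ConstructorInfo defaultCtor = typeToBuild.GetConstructor(Type.EmptyTypes);
            if (defaultCtor != null)
            {
                return defaultCtor;
            }

            // Deterministic fallback: the ctor with the fewest parameters.
            return typeToBuild.GetConstructors()
                              .OrderBy(c => c.GetParameters().Length)
                              .FirstOrDefault();
        }
    }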

The Unknown Error Code

When you look at log files you will frequently find negative error codes inside them (that is mostly the reason why you are looking at the log file anyway). For some of them you can find on Google what they mean but pretty frequently you end up with nothing. Why is that so? Most likely you have a COM error code in front of you. They have a different structure which is described here.

The most important thing to remember about COM error codes is that if bit 31 is set it signals an error. If bit 31=1 the signed 32 bit value becomes negative. That explains the strange values of COM error codes.  The bits 0..15 are most of the time the WIN32 error codes. Other bits above bit 15 can be set but these are irrelevant for normal readers of an error code.
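The same decomposition in C#, as a minimal sketch (the example value is the one used in the recipe below):

    using System;

    class HResultDecode
    {
        static void Main()
        {
            int error = -2147480631;          // 0x80000BC9 as a signed 32 bit value
            bool failed = error < 0;          // bit 31 set => the value is negative => failure
            int win32Code = error & 0xFFFF;   // bits 0..15 usually carry the Win32 error code
            Console.WriteLine($"0x{error:X8} failed={failed} win32Code={win32Code}"); // win32Code = 3017
        }
    }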

If you ever encounter a negative error code in a log file you can follow this simple recipe to find out what it might be about. Let's suppose you have e.g. Error: -2147480631.

1. Open calc.exe, set it to Programmer mode and paste it as decimal number.

image

2. Take the last 4 digits of the hex number (the lower 16 bits). Type BC9 into calc to get the decimal value.

image

3. Google for that Win32 error code or type on a command shell

net helpmsg 3017

image

and you finally have a hint why some chained installer failed. In that case it was a file-in-use issue which caused an MSI installer to demand a reboot, and in the installer log file only that cryptic error code was to be found. Not many people seem to know that a negative error code is mostly a hint to convert the lower 16 bits of the error code to decimal and interpret that as a Win32 error code.

If you are a powershell guy

-2147480631 -band 0xffff

image

And here is the shortest form

net helpmsg (-2147480631 -band 0xffff)

image

That’s a simple but powerful trick which should be more widely known.

How Fast Can You Get With .NET Core?

With .NET Core 2.1 released I want to come back to an old SO question of mine where I asked how fast one can parse a text file with C# and .NET 4.0. Now I want to look at how fast we can get today with .NET 4.7.2, .NET Core 2.1 and C++ on x86 and x64. The structure of the text file is a double followed by an integer and a newline, like this

1.1 0
2.1 1
3.1 2

repeated 10 million times. The result is a 179 MB text file which we want to parse as fast as possible.
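If you want to reproduce the input, a minimal sketch of a generator for such a file could look like this (the file name and the exact double values are assumptions based on the sample lines above):

    // Hedged sketch: write 10 million "<double> <int>" lines, roughly the 179 MB input described above.
    using System.Globalization;
    using System.IO;

    class GenerateInput
    {
        static void Main()
        {
            using (var writer = new StreamWriter("NumericData.txt"))   // file name is an assumption
            {
                for (int i = 0; i < 10_000_000; i++)
                {
                    double d = i + 1.1;     // 1.1, 2.1, 3.1, ... like the sample above
                    writer.Write(d.ToString("0.0###", CultureInfo.InvariantCulture));
                    writer.Write(' ');
                    writer.WriteLine(i);    // 0, 1, 2, ...
                }
            }
        }
    }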

In 2011 my home machine was a Core Duo 6600 @ 2.40 GHz with Windows 7 running .NET 4.0. Let's see how much the numbers from 2011 have improved now that I have a

  • Better CPU – Haswell CPU Architecture I7 4770K 3.5 GHz
  • Newer Windows – Windows 10 1803
  • Newer .NET Framework – .NET 4.7.2
  • New .NET Core 2.1

To compare how fast we can get with managed code I thought it would be good to also compare the same thing to a C implementation which reads the ANSI text file with fgets, tokenizes the string with strtok and converts the numbers with atof and atoi. That should be pretty fast, right?

The Tests

The source code for the project IOBound can be found at GitHub. You can download the binaries to test for yourself from here: https://github.com/Alois-xx/IOBound/releases.

 

ReadFile

This one checks how fast we can read a file into one huge buffer from disk. If we execute that a second time the data will be served from the file system cache, so when warmed up we measure the memcopy and soft page fault performance of the OS to assign memory to our application. It may look very simple but it can tell you a lot about disk performance and the caching behavior of your OS.

        /// <summary>
        /// Check native reading speed from the OS
        /// </summary>
        private unsafe static uint ReadFileIntoByteBuffer(string dataFile)
        {
            using (var stream = new FileStream(dataFile, FileMode.Open))
            {
                byte[] buf = new byte[200 * 1024 * 1024];
                fixed (byte* pBuf = &buf[0])
                {
                    uint dwRead = 0;
                    if (ReadFile(stream.SafeFileHandle, pBuf, 200 * 1000 * 1000, out dwRead, IntPtr.Zero) == 0)
                    {
                        throw new Win32Exception();
                    }
                    return dwRead;
                }

            }
        }

 

MessagePack

After having so many professional serializers at our disposal it would be a sin to dismiss one of the fastest ones. MessagePack is an efficient binary format for which a great library exists from Yoshifumi Kawai (aka neuecc), who wrote the fastest one for C#. With 133 MB the resulting file is the smallest one.

    [MessagePackObject]
    public class Container
    {
        [Key(0)]
        public List<double> Doubles = new List<double>();
        [Key(1)]
        public List<int> Ints = new List<int>();
    }

    private static void DeserializeMessagePack(string dataFile)
    {
       using (var stream = new FileStream(dataFile, FileMode.Open))
       {
          Container deser = MessagePack.MessagePackSerializer.Deserialize<Container>(stream);
       }
    }

 

JSON

When using a text based format, JSON is also a must. Interestingly the JSON file is, at 169 MB, even a bit smaller than the original text format. Here we also use the fastest JSON serializer, named Utf8Json. Credits for writing the fastest JSON serializer go to Yoshifumi Kawai.

        private static void DeserializeUTF8Json(string dataFile)
        {
            using (var stream = new FileStream(dataFile, FileMode.Open))
            {
                Container deser = Utf8Json.JsonSerializer.Deserialize<Container>(stream);
            }
        }

Count Lines

This reads the file line by line but does nothing with the data. Since we do not allocate much except some temp strings we find out how fast we can get when reading the UTF-8 file and expanding the data to UTF-16 strings.

        private static int CountLines(string dataFile)
        {
            using (var reader = new StreamReader(dataFile))
            {
                string line;
                int count = 0;
                while ((line = reader.ReadLine()) != null)
                {
                    count++;
                }

                return count;
            }
        }

Span Parse

One of the biggest improvements of .NET Core was the introduction of Span based APIs everywhere. They allow you to read characters from raw buffers without the need to copy the data around again. With that you can write nearly allocation free APIs which operate on unmanaged memory without the need to use unsafe code. To make that fast the JIT compiler was enhanced to emit the most efficient code possible. For the regular .NET Framework there exists the System.Memory nuget package to bring the type onboard as well, but with no help from the JIT compiler. This one is especially interesting since it shows how much you lose if you do not have a helping hand from the JIT compiler.

 

        private static void ParseSpan(string dataFile)
        {
            var doubles = new List<double>();
            var ints = new List<int>();

            double curMeasurement = 0.0d;
            int curMeasurement2 = 0;

            ReadOnlySpan<byte> newLineSpan = NewLine.AsSpan();

            using (var stream = new FileStream(dataFile, FileMode.Open))
            {
                const int bufferSize = 10*4096;
                byte[] buffer = new byte[bufferSize];
                int readBytes = 0;

                int lastLineSize = 0;
                while ((readBytes = stream.Read(buffer, lastLineSize, bufferSize- lastLineSize)) != 0)
                {
                    Span<byte> bufferSpan = new Span<byte>(buffer, 0, readBytes+ lastLineSize);

                    if( bufferSpan.StartsWith(Encoding.UTF8.GetPreamble()) ) // skip byte order mark
                    {
                        bufferSpan = bufferSpan.Slice(Encoding.UTF8.GetPreamble().Length); 
                    }

                    int newLineStart = 0;
                    while( (newLineStart = bufferSpan.IndexOf(newLineSpan)) > 0 )
                    {
                        if( ParseLine( bufferSpan.Slice(0, newLineStart), ref curMeasurement, ref curMeasurement2) )
                        {
                            doubles.Add(curMeasurement);
                            ints.Add(curMeasurement2);
                        }
                        bufferSpan = bufferSpan.Slice(newLineStart + newLineSpan.Length);
                    }

                    bufferSpan.CopyTo(buffer.AsSpan());
                    lastLineSize = bufferSpan.Length;
                }
            }
        }

Unsafe Parse

This is how one could write a single method parser with some hard coded assumptions to become as fast as possible. Interestingly this one is slower than the Span based API, which is both safer and faster.

        unsafe private static void ParseLinesUnsafe(string dataFile)
        {
            var doubles = new List<double>();
            var ints = new List<int>();
            char decimalChar = CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator[0];

            using (var reader = new StreamReader(dataFile))
            {
                string line;
                double d = 0;
                long a = 0, b = 0;
                int ix = 0;
                while ((line = reader.ReadLine()) != null)
                {
                    int len = line.Length;
                    ix = 0;
                    a = 0;
                    b = 0;
                    fixed (char* ln = line)
                    {
                        while (ix < len && char.IsNumber(ln[ix]))
                        {
                            a = a * 10 + (ln[ix++] - '0');
                        }

                        if (ln[ix] == decimalChar)
                        {
                            ix++;
                            long div = 1;
                            while (ix < len && char.IsNumber(ln[ix]))
                            {
                                b = b * 10 + (ln[ix++] - '0');
                                div *= 10;
                            }
                            d = a + ((double)b) / div;
                        }

                        while (ix < len && char.IsWhiteSpace(ln[ix]))
                        {
                            ix++;
                        }

                        int i = 0;
                        while (ix < len && char.IsNumber(ln[ix]))
                        {
                            i = i * 10 + (ln[ix++] - '0');
                        }

                        doubles.Add(d);
                        ints.Add(i);
                    }
                }
            }
        }

 

C++ Is Still A Thing!

C++ should be the fastest with its highly optimized library routines which have been carefully optimized and profiled for decades now. The code looks like plain C from 20 years ago, but it is only faster by a slight margin compared to the naïve Standard Parse C# version.

	char pLine[1024];
	std::vector<double> doubles;
	std::vector<int> ints;
	int lines = 0;

	while (fgets(pLine, sizeof(pLine), fp) != nullptr)
	{
		lines++;
		ParseLine(doubles, ints, pLine);
	}

void ParseLine(std::vector<double> &doubles, std::vector<int> &ints, char *pline)
{
	char *pDouble = strtok(pline, " \n");

	if (pDouble != nullptr)
	{
		double d = atof(pDouble);
		char *pInt = strtok(NULL, " \n");
		if (pInt != nullptr && pInt != pDouble)
		{
			int i = atoi(pInt);
			doubles.push_back(d);
			ints.push_back(i);
		}
	}
	
}

 

Standard Parse C#

That is how you would do it when writing idiomatic C# code.

        private static void ParseLines(string dataFile)
        {
            var doubles = new List<double>();
            var ints = new List<int>();

            using (var reader = new StreamReader(dataFile))
            {
                string line;
                char[] sep = new char[] { ' ' };
                while ((line = reader.ReadLine()) != null)
                {
                    var parts = line.Split(sep);
                    if (parts.Length == 2)
                    {
                        doubles.Add(double.Parse(parts[0]));
                        ints.Add(int.Parse(parts[1]));
                    }
                }
            }
        }

 

The Results

 

image

Newer CPU

One obvious way to become faster is to buy a newer CPU. That works and can give you a speedup of roughly a factor of two if you wait long enough between significant CPU generations. But the economic benefit of shrinking CPU dies has decreased a lot; with 7nm processes at the door you need EUV lithography with all of its challenges, so further huge improvements are not expected soon. And since one atom is about 0.1nm we do not have much room left to scale much further. If you want to increase the frequency a lot you also run into fundamental problems. The speed of light in conducting materials is ca. 200 000 km/s. During 1ns (= one cycle at 1 GHz) a signal can travel 20cm. If you increase the frequency to 10 GHz you can transmit a signal only 2cm. If your chip is bigger you cannot transmit a signal to its destination in the necessary time. We have a natural upper limit on how much faster current CPU designs can get if they rely on ack signals of some sort.
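Spelled out, the back of the envelope calculation from above is:

$$d = v \cdot t \approx 2\times10^{8}\,\tfrac{\mathrm{m}}{\mathrm{s}} \cdot 1\,\mathrm{ns} = 20\,\mathrm{cm}\ (1\,\mathrm{GHz}), \qquad 2\times10^{8}\,\tfrac{\mathrm{m}}{\mathrm{s}} \cdot 0.1\,\mathrm{ns} = 2\,\mathrm{cm}\ (10\,\mathrm{GHz})$$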

With the rise of AI and its huge power consumption we still can and will become much more energy efficient. Although Google is leading with dedicated Tensorflow processors, the power consumed to perform simple image recognition is at least 1000 times worse compared to its biological counterpart (that is only a guess and the number is likely much too low). The next big thing are definitely neuromorphic chips which can rewire themselves depending on the problem to solve. With such processors you could make Google search a commodity where not a single corporation owns all of your data. Instead it could become a public service run by many providers where the search graph is shared and we as customers can sell our data on our own. Whether that ever becomes true depends on who creates such highly energy efficient chips first.

Newer Windows

The plain ReadFile call is about 4 times faster. That performance increase is not only related to the better CPU; the soft page fault performance of Windows 10 Fall Creators Update also became better compared to Windows 7, which I was using at that time. The better soft fault performance is also part of Windows Server 2019 (finally). More details about soft fault performance can be found here.

32 vs 64 Bit

The biggest difference between x86 and x64 shows up in the C version. It turns out the x86 version is about 40% slower, mainly due to much higher costs in atof, which seems to have a pretty slow x86 implementation. In the screenshot below you can see that ucrtbase:common_atof needs 2.4s vs 1.3s in the x64 version, which is a pretty big perf hit. Although C/C++ can be very efficient, this is not always the case for the runtime libraries used. By using standard libraries you are not guaranteed to get world class performance on your current hardware. Especially extended instruction sets can only be used by binaries specifically crafted for each CPU. The standard approach in C/C++ is to create a library for each CPU architecture, like

  • libSimd.dll
  • libAVX.dll
  • libAVX256.dll
  • libAVX512.dll

That quickly becomes pretty complicated to test. On the other hand, in a JIT compiled world we still need to test on different CPUs, but we get the best possible assembly code for our CPU for free if the runtime is up to date, as is the case with RyuJIT of .NET Core.

image
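By the way, a quick way to check from managed code whether the JIT on the current machine accelerates the SIMD types is this small sketch (on .NET Framework it needs the System.Numerics.Vectors package; on .NET Core it is built in):

    // Hedged sketch: ask the runtime whether Vector<T> is mapped onto real SIMD registers.
    using System;
    using System.Numerics;

    class SimdCheck
    {
        static void Main()
        {
            Console.WriteLine($"SIMD accelerated: {Vector.IsHardwareAccelerated}, " +
                              $"Vector<float>.Count = {Vector<float>.Count}");
        }
    }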

The next observation is that the new Span based version is 80% slower on x86 (1.35s vs 0.75s on .NET Core 2.1). If you are after high throughput scenarios it definitely makes sense to switch over to 64 bit and use mainly index based structures to alleviate the 100% increase in pointer size for the most significant cases.

.NET Core 2.1 vs .NET 4.7.2

.NET 4.7.2 is, for streaming data into memory, between 2% (Unsafe Parse x86), 7% (Unsafe Parse x64) and 40% (Span Parse x64) slower than .NET Core. Especially the 40% speed drop for Span based APIs, due to the lack of JIT support in the .NET Framework runtime, is unfortunate. But since the .NET Framework was declared legacy and most innovation will happen at .NET Core for the years to come, it is unlikely that the .NET Framework JIT compiler will get these new capabilities. The main reasons were compat issues, but we can already switch over to the legacy JIT if we do not want to use the new RyuJIT compiler. It should be easy to make the new JIT compiler of .NET Core optional so that users of the .NET Framework can enable the new functionality once they have verified that their scenarios work.

 

Conclusions

Based on the numbers it makes sense to switch over to x64 and Span based APIs if you are running on .NET Core. But even if you are not using .NET Core, Span based APIs offer quite a lot. They make it possible to spare the copy operation from unmanaged locations into the managed heap. The result is nearly allocation free applications which scale great. Another learning is that if you can change your problem a bit, e.g. by changing the data format, you can get even faster. The MessagePack serializer on x64 .NET Core 2.1 is 25% faster compared to the Span text based approach. That is a bit unfair since there is practically no parsing involved, but it shows that you can achieve near optimal performance with efficient binary formats which contain mainly primitive types. Even our pretty generic UTF-8 JSON file approach performed 34% better than the x64 C/C++ version. In the end you can become very fast with any language if you profile your application and fix the obvious bottlenecks. To be able to do that you need to use some performance profiling tools for your platform. My favorite tool is ETW, as you might have guessed from my previous posts. Are you profiling your code before problems are reported? What is your favorite profiling tool?