Why Does My Synchronous WCF Call Hang?

This is a little-known limitation of WCF. Let's suppose you have a WCF interface which contains a mixture of Task-based and non-Task-based methods:

[ServiceContract(Namespace = "WCFDispatching")]
public interface IRemotedService
{
    [OperationContract]
    Task<bool> MakeAsyncCall(int id);

    [OperationContract]
    void SyncCall(int id);
}

What will happen when you call both methods?

    async Task Work(IRemotedService service)
    {
        await service.MakeAsyncCall(50);
        service.SyncCall(150);
    }

The sad truth is that the second call will hang indefinitely with a rather long call stack:

System.Threading.WaitHandle.WaitOne
System.Runtime.TimeoutHelper.WaitOne
System.ServiceModel.Dispatcher.DuplexChannelBinder.SyncDuplexRequest.WaitForReply
System.ServiceModel.Dispatcher.DuplexChannelBinder.Request
System.ServiceModel.Channels.ServiceChannel.Call
System.ServiceModel.Channels.ServiceChannelProxy.InvokeService
System.ServiceModel.Channels.ServiceChannelProxy.Invoke
System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke
WCFDispatching.Program.Work
[Resuming Async Method]
System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext
System.Threading.ExecutionContext.RunInternal
System.Threading.ExecutionContext.Run
System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run
System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0
System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke
System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0
System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke
System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction
System.Threading.Tasks.Task.FinishContinuations
System.Threading.Tasks.Task.FinishStageThree
System.Threading.Tasks.Task<bool>.TrySetResult
System.Threading.Tasks.TaskFactory<bool>.FromAsyncCoreLogic
System.Threading.Tasks.TaskFactory<bool>.FromAsyncImpl.AnonymousMethod__0
System.Runtime.AsyncResult.Complete
System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.FinishSend
System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.SendCallback
System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame
System.Runtime.AsyncResult.Complete
System.ServiceModel.Dispatcher.DuplexChannelBinder.AsyncDuplexRequest.Done
System.ServiceModel.Dispatcher.DuplexChannelBinder.AsyncDuplexRequest.GotReply
System.ServiceModel.Dispatcher.DuplexChannelBinder.HandleRequestAsReplyCore
System.ServiceModel.Dispatcher.DuplexChannelBinder.HandleRequestAsReply
System.ServiceModel.Dispatcher.ChannelHandler.HandleRequestAsReply
System.ServiceModel.Dispatcher.ChannelHandler.HandleRequest
System.ServiceModel.Dispatcher.ChannelHandler.AsyncMessagePump

System.ServiceModel.Dispatcher.ChannelHandler.OnAsyncReceiveComplete
System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame
System.Runtime.AsyncResult.Complete
System.ServiceModel.Channels.TransportDuplexSessionChannel.TryReceiveAsyncResult.OnReceive
System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame
System.Runtime.AsyncResult.Complete
System.ServiceModel.Channels.SynchronizedMessageSource.ReceiveAsyncResult.OnReceiveComplete
System.ServiceModel.Channels.SessionConnectionReader.OnAsyncReadComplete
System.ServiceModel.Channels.PipeConnection.OnAsyncReadComplete
System.ServiceModel.Channels.OverlappedContext.CompleteCallback
System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame
System.Threading._IOCompletionCallback.PerformIOCompletionCallback

The interesting thing is that the synchronous call completes on the remote endpoint, but the WCF client call hangs in this call stack. The problem is that WCF runs asynchronous method completions on the WCF channel dispatcher, which appears to be single threaded, much like a UI application with a message pump. When a blocking synchronous call is performed, WCF normally waits in a call stack like this for the read operation to complete:

System.Threading.WaitHandle.InternalWaitOne
System.Threading.WaitHandle.WaitOne
System.Runtime.TimeoutHelper.WaitOne
System.ServiceModel.Channels.OverlappedContext.WaitForSyncOperation
System.ServiceModel.Channels.OverlappedContext.WaitForSyncOperation
System.ServiceModel.Channels.PipeConnection.WaitForSyncRead
System.ServiceModel.Channels.PipeConnection.Read
System.ServiceModel.Channels.DelegatingConnection.Read
System.ServiceModel.Channels.SessionConnectionReader.Receive
System.ServiceModel.Channels.SynchronizedMessageSource.Receive
System.ServiceModel.Channels.TransportDuplexSessionChannel.Receive
System.ServiceModel.Channels.TransportDuplexSessionChannel.TryReceive
System.ServiceModel.Dispatcher.DuplexChannelBinder.Request
System.ServiceModel.Channels.ServiceChannel.Call
System.ServiceModel.Channels.ServiceChannelProxy.InvokeService
System.ServiceModel.Channels.ServiceChannelProxy.Invoke
System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke

But in our case we have a different wait call stack: we are not waiting for a read to complete, but in DuplexChannelBinder.SyncDuplexRequest.WaitForReply we are waiting for another thread to set an event that signals completion. This assumes that another thread is still receiving input from the remote connection, which is not the case. We can see this when we look at who sets the event:

image

To release our waiting thread another thread must call GotReply, which is never going to happen. To get things working again you must make all methods in your remoted interface either synchronous or asynchronous. A sync/async mixture of remoted methods will likely cause deadlocks like the one shown above.
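For illustration, here is a sketch of the all-async variant of the contract above. It mirrors the SyncCallAsync_ workaround used in the full sample below, so it adds nothing new, it just spells out the fix:

[ServiceContract(Namespace = "WCFDispatching")]
public interface IRemotedService
{
    [OperationContract]
    Task<bool> MakeAsyncCall(int id);

    // Task-returning instead of void: the client awaits the reply instead of
    // blocking inside the WCF channel dispatcher that also runs the async completions.
    [OperationContract]
    Task SyncCallAsync_(int id);
}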

Below is the full sample code to reproduce the issue, if you are interested.

using System;
using System.ServiceModel;
using System.Threading.Tasks;

namespace WCFDispatching
{
    [ServiceContract(Namespace = "WCFDispatching")]
public interface IRemotedService
{
    [OperationContract]
    Task<bool> MakeAsyncCall(int id);

    [OperationContract]
    void SyncCall(int id);

    [OperationContract]
    Task SyncCallAsync_(int id);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession, ConcurrencyMode = ConcurrencyMode.Multiple, IncludeExceptionDetailInFaults = true)]
public class RemotedService : IRemotedService
{
    public async Task<bool> MakeAsyncCall(int id)
    {
        await Task.Delay(10);
        Console.WriteLine($"Async call with id: {id} completed.");
        return true;
    }

    public async Task SyncCallAsync_(int id)
    {
        await Task.Delay(0);
        Console.WriteLine($"SyncCallAsync call with id {id} called.");
    }

    public void SyncCall(int id)
    {
        Console.WriteLine($"Sync call with id {id} called.");
    }
}

class Program
{
    const string PipeUri = "net.pipe://localhost/WCFDispatching";

    static void Main(string[] args)
    {
        new Program().Run(args);
    }

    bool bUseAsyncVersion = false;
    readonly string Help = "WCFDispatching.exe [-server] [-client [-async]]" + Environment.NewLine +
                "    -server      Create WCF Server" + Environment.NewLine +
                "    -client      Create WCF Client" + Environment.NewLine + 
                "    -async       Call async version of both API calls" + Environment.NewLine +
                "No options means client mode which calls async/sync WCF API which produces a deadlock.";


    private void Run(string[] args)
    {
        if( args.Length == 0)
        {
            Console.WriteLine(Help);
            return;
        }
        else
        {
            for (int i = 0; i < args.Length; i++)
            {
                string arg = args[i];
                switch(arg)
                {
                    case "-server":
                        StartServer();
                        Console.WriteLine("Server started");
                        Console.ReadLine();
                        break;
                    case "-client":
                        // this is the default
                        break;
                    case "-async":
                        bUseAsyncVersion = true;
                        break;
                    default:
                        Console.WriteLine(Help);
                        Console.WriteLine($"Command line argument {args[0]} is not valid.");
                        return;
                }
            }

            var service = CreateServiceClient<IRemotedService>(new Uri(PipeUri));
            Task waiter = Work(service);
            waiter.Wait();
            return;
        }

    }

    async Task Work(IRemotedService service)
    {
        await service.MakeAsyncCall(50);
        if (bUseAsyncVersion)  // this will work
        {
            await service.SyncCallAsync_(50);
        }
        else
        {
            service.SyncCall(150);  // this call will deadlock!
        }
    }

    internal static T CreateServiceClient<T>(Uri uri)
    {
        var binding = CreateDefaultNetNamedPipeBinding();
        var channelFactory = new ChannelFactory<T>(binding, uri.ToString());
        var serviceClient = channelFactory.CreateChannel();
        var channel = (IContextChannel)serviceClient;
        channel.OperationTimeout = TimeSpan.FromMinutes(10);

        return serviceClient;
    }

    internal static ServiceHost StartServer()
    {
        var host = new ServiceHost(typeof(RemotedService));
        host.AddServiceEndpoint(implementedContract: typeof(IRemotedService), binding: CreateDefaultNetNamedPipeBinding(), address: PipeUri);
        host.Open();

        return host;
    }

    internal static NetNamedPipeBinding CreateDefaultNetNamedPipeBinding()
    {
        //Default setting for NetNamedPipeBinding.MaxReceivedMessageSize = 65,536 bytes
        //Default settings for NetNamedPipeBinding.ReaderQuotas
        //MaxDepth = 32, MaxStringContentLength = 8192, MaxArrayLength = 16384, MaxBytesPerRead = 4096, MaxNameTableCharCount = 16384
        TimeSpan timeOut = TimeSpan.FromMinutes(1000);
        var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None)
        {
            ReceiveTimeout = timeOut,
            MaxReceivedMessageSize = Int32.MaxValue,
            ReaderQuotas =
            {
                MaxArrayLength = Int16.MaxValue,
                MaxStringContentLength = Int32.MaxValue,
                MaxBytesPerRead = Int32.MaxValue
            }
        };
        return binding;
    }
}
}

To try it out first start the server from a shell with

WCFDispatching.exe -server

and then start the client with the -client option to get the deadlock. To call the fixed version, pass -client -async to the client and the deadlock will not occur.


Why Skylake CPUs Are Sometimes 50% Slower – How Intel Has Broken Existing Code

I got a call that some performance regression tests had become slower on newer hardware. Not a big deal. Usually it is a bad configuration somewhere in Windows, or some BIOS settings were set to non-optimal values. But this time we were not able to find a setting that brought performance back to normal. Since the change was not small, 9s vs 19s (blue is old hardware, orange is new hardware), we needed to drill deeper:

image

Same OS, Same Hardware, Different CPU – 2 Times Slower

A perf drop from 9.1s to 19.6s is definitely significant. We did more checks to see whether the software version under test, Windows, or the BIOS settings were somehow different from the old baseline hardware. But no, everything was identical. The only difference was that the same tests were running on different CPUs. Below is a picture of the newest CPU

image

And here is the one used for comparison

image

The Xeon Gold runs on a different CPU architecture named Skylake, which is common to all CPUs produced by Intel since mid 2017. *As commenters have pointed out, the consumer Skylake CPUs were already released in 2015. The server Xeon CPUs with SkylakeX were released mid 2017. All later CPUs (Kaby Lake, …) share the same issue. If you are buying current hardware you will get a CPU with the Skylake CPU architecture. These are nice machines, but as the tests have shown, newer and slower is not the right direction. If all else fails, get a repro and use a real profiler ™ to drill deeper. When you record the same test on the old hardware and on the new hardware it should quickly lead somewhere:

image

Remember that the diff view in WPA *(Windows Performance Analyzer is a free profiling UI which is part of the Windows Performance Toolkit, which in turn is part of the Windows SDK) shows in the table the delta of Trace 2 (11s) – Trace 1 (19s). Hence a negative delta in the table indicates a CPU consumption increase in the slower test. When we look at the biggest CPU consumer differences we find AwareLock::Contention, JIT_MonEnterWorker_InlineGetThread_GetThread_PatchLabel and ThreadNative.SpinWait. Everything points towards CPU spinning when threads are competing for locks. But that is a red herring, because spinning is not the root cause of the slower performance. Increased lock contention means that something in our software became slower while holding a lock, which as a consequence results in more CPU spinning. I was checking locking times and other key metrics, like disk and the like, but I failed to find anything relevant which could explain the performance degradation. Although not logical, I turned back to the increased CPU consumption in various methods.

It would be interesting to find out where exactly the CPU was stuck. WPA has file and line columns, but these work only with private symbols, which we do not have because it is .NET Framework code. The next best thing is to get the address of the instruction inside the dll, which is called the Image RVA (Relative Virtual Address). When I load the same dll into the debugger and then do

u xxx.dll+ImageRVA

I should see the instruction which was burning the most CPU cycles, which turned out to be basically only one hot address.

image

Let's examine the hot code locations of the different methods with Windbg:

0:000> u clr.dll+0x19566B-10
clr!AwareLock::Contention+0x135:
00007ff8`0535565b f00f4cc6        lock cmovl eax,esi
00007ff8`0535565f 2bf0            sub     esi,eax
00007ff8`05355661 eb01            jmp     clr!AwareLock::Contention+0x13f (00007ff8`05355664)
00007ff8`05355663 cc              int     3
00007ff8`05355664 83e801          sub     eax,1
00007ff8`05355667 7405            je      clr!AwareLock::Contention+0x144 (00007ff8`0535566e)
00007ff8`05355669 f390            pause
00007ff8`0535566b ebf7            jmp     clr!AwareLock::Contention+0x13f (00007ff8`05355664)

We do this for the JIT method as well

0:000> u clr.dll+0x2801-10
clr!JIT_MonEnterWorker_InlineGetThread_GetThread_PatchLabel+0x124:
00007ff8`051c27f1 5e              pop     rsi
00007ff8`051c27f2 c3              ret
00007ff8`051c27f3 833d0679930001  cmp     dword ptr [clr!g_SystemInfo+0x20 (00007ff8`05afa100)],1
00007ff8`051c27fa 7e1b            jle     clr!JIT_MonEnterWorker_InlineGetThread_GetThread_PatchLabel+0x14a (00007ff8`051c2817)
00007ff8`051c27fc 418bc2          mov     eax,r10d
00007ff8`051c27ff f390            pause
00007ff8`051c2801 83e801          sub     eax,1
00007ff8`051c2804 75f9            jne     clr!JIT_MonEnterWorker_InlineGetThread_GetThread_PatchLabel+0x132 (00007ff8`051c27ff)

Now we have a pattern. One time the hot location is a jump instruction and the other time it is a subtraction. But both hot instructions are preceded by the same instruction: pause. Different methods execute the same CPU instruction, which is, for some reason, very time consuming. Let's measure the duration of the pause instruction to see if we are on the right track.
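The measurement tool itself is not shown in this post; below is a minimal sketch of one way to measure it, assuming that Thread.SpinWait(1) maps to roughly one pause instruction per call (which was the case before the spin-wait normalization fixes):

using System;
using System.Diagnostics;
using System.Threading;

class PauseTimer
{
    static void Main()
    {
        const int iterations = 1000 * 1000;
        Thread.SpinWait(1000);              // warm up so the loop is jitted before measuring

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            Thread.SpinWait(1);             // ~1 pause instruction per call on pre-fix runtimes
        }
        sw.Stop();

        // The number includes loop and call overhead, so treat it as an upper bound per pause.
        double nsPerCall = sw.Elapsed.TotalMilliseconds * 1e6 / iterations;
        Console.WriteLine($"{iterations:N0} calls in {sw.Elapsed.TotalMilliseconds:F2} ms, ~{nsPerCall:F1} ns per call");
    }
}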

If You Document A Problem It Becomes A Feature

CPU                                 Pause Duration in ns
Xeon E5 1620v3 3.5 GHz              4
Xeon(R) Gold 6126 CPU @ 2.60 GHz    43

Pause on the new Skylake CPUs is an order of magnitude slower. Sure, things can get faster and sometimes a bit slower. But over 10 times slower? That sounds more like a bug. A little internet search about the pause instruction leads to the Intel manuals, where the Skylake microarchitecture and the pause instruction are explicitly mentioned:

https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf

image

No, this is not a bug, it is a documented feature. There even exists a web page which contains the timings of pretty much all CPU instructions:

http://www.agner.org/optimize/instruction_tables.pdf

  • Sandy Bridge    11
  • Ivy Bridge      10
  • Haswell          9
  • Broadwell        9
  • SkylakeX       141

The numbers are CPU cycles. To calculate the actual time you need to divide the cycle counts by the CPU frequency (usually GHz) to get the time in ns.
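As a quick sanity check against the measured values above (my own back-of-the-envelope numbers using nominal clock rates, so actual turbo frequencies will shift them a bit):

141 cycles / 2.6 GHz ≈ 54 ns  (Xeon Gold 6126 at its 2.60 GHz base clock)
141 cycles / 3.3 GHz ≈ 43 ns  (at a turbo clock around 3.3 GHz, which would match the measured 43 ns)
 10 cycles / 3.5 GHz ≈  3 ns  (pre-Skylake, close to the measured 4 ns)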

That means that when I execute heavily multithreaded applications on .NET on the latest hardware, things can become much slower. Someone else had already noticed this in August 2017 and wrote an issue for it: https://github.com/dotnet/coreclr/issues/13388. The issue has been fixed with .NET Core 2.1, and .NET Framework 4.8 Preview also contains the fixes for it.

https://github.com/Microsoft/dotnet-framework-early-access/blob/master/release-notes/build-3621/dotnet-build-3621-changes.md#clr

Improved spin-waits in several synchronization primitives to perform better on Intel Skylake and more recent microarchitectures. [495945, mscorlib.dll, Bug]

But since .NET 4.8 is still one year away, I have requested a backport of the fixes to get .NET 4.7.2 back to speed on the latest hardware. Since many parts of .NET use spinlocks, you should look out for increased CPU consumption around Thread.SpinWait and other spinning methods.

image

E.g. Task.Result will spin internally, where I could also see for other tests a significant increase in CPU consumption and degraded performance.

How Bad Is It?

I have looked at the .NET Core code to see how long the CPU will keep spinning when the lock is not released before calling into WaitForSingleObject to pay for the "expensive" context switch. A context switch costs somewhere in the microsecond region and becomes much slower when many threads are waiting on the same kernel object.

.NET locks multiply the maximum spin duration by the number of cores, which has the fully contended case in mind where every core has a thread waiting for the same lock, and they try to spin long enough to give everyone a chance to work a bit before paying for the kernel call. Spinning inside .NET uses an exponential back-off algorithm where spinning starts with 50 pause calls in a loop, and for each iteration the number of spins is multiplied by 3 until the next spin count becomes greater than the maximum spin duration. I have calculated the total time a thread would spin on pre-Skylake CPUs and on current Skylake CPUs for various core counts:

image

Below is some simplified code showing how .NET locks perform spinning:

/// <summary>
/// This is how .NET is spinning during lock contention minus the Lock taking/SwitchToThread/Sleep calls
/// </summary>
/// <param name="nCores"></param>
void Spin(int nCores)
{
	const int dwRepetitions = 10;
	const int dwInitialDuration = 0x32;
	const int dwBackOffFactor = 3;
	
	int dwMaximumDuration = 20 * 1000 * nCores;

	for (int i = 0; i < dwRepetitions; i++)
	{
		int duration = dwInitialDuration;
		do
		{
			for (int k = 0; k < duration; k++)
			{
				Call_PAUSE(); // stands for the x86 pause instruction
			}
			duration *= dwBackOffFactor;
		}
		while (duration < dwMaximumDuration);
	}
}
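For reference, here is a small helper that roughly reconstructs where totals like 19ms and 246ms come from, derived from the constants in the snippet above (my own approximation, not the author's exact math):

static void EstimateSpinTime(int nCores)
{
    int maxDuration = 20 * 1000 * nCores;          // 480,000 for 24 cores
    long pausesPerRepetition = 0;
    for (int d = 0x32; d < maxDuration; d *= 3)    // 50 + 150 + 450 + ... + 328,050
    {
        pausesPerRepetition += d;                  // ≈ 492,050 pauses for 24 cores
    }
    long totalPauses = pausesPerRepetition * 10;   // 10 repetitions ≈ 4.9 million pauses

    Console.WriteLine($"pre-Skylake at ~4 ns/pause:  {totalPauses * 4e-9 * 1000:F0} ms");  // ≈ 20 ms
    Console.WriteLine($"SkylakeX    at ~50 ns/pause: {totalPauses * 50e-9 * 1000:F0} ms"); // ≈ 246 ms
    // The last spin round alone (328,050 pauses) is roughly 14 ms on SkylakeX, which is
    // the worst case extra delay discussed further below.
}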

The old spinning times were in the millisecond region (19ms for 24 cores), which is already quite a lot compared to the often-cited high cost of context switches, which are an order of magnitude cheaper. But with Skylake CPUs the total CPU spinning time for a contended lock has exploded, and we will spin up to 246ms on a 24 or 48 core machine, only because the latency of the pause instruction on the new Intel CPUs has increased by a factor of 14. Is this really the case? I have created a small tester to check full CPU spinning, and the calculated numbers nicely match my expectations. I have 48 threads waiting on a 24 core machine for a single lock where I call Monitor.PulseAll to let the race begin:

image

Only one thread will win the race, but 47 threads will spin until they give up. This is experimental evidence that we indeed have a real issue with CPU consumption, and that very long spin times are a real problem. Excessive spinning hurts scalability because CPU cycles are burned where other threads might need the CPU, although the use of the pause instruction frees up some of the shared CPU resources while "sleeping" for longer times. The reason for spinning is to acquire the lock quickly without going to the kernel. If that were the whole story, the increased CPU consumption might not look good in Task Manager, but it should not influence performance at all as long as there are cores left for other tasks. But the tests showed that even nearly single threaded operations, where one thread adds something to a worker queue while the worker thread waits for work and then processes the work item, are slowed down.

The reason for that can best be shown with a diagram. Spinning for a contended lock happens in rounds where the amount of spinning is tripled after each round. After each spin round the lock checks again whether the current thread can acquire it. While spinning, the lock tries to be fair and switches over to other threads from time to time to help the other thread(s) complete their work. That increases the chance that the lock has been released when we check again later. The problem is that only after a complete spin round has finished does the lock check whether it can be taken:

image

If, e.g., during spin round 5 the lock becomes available right after we started round 5, we wait for the complete spin round before we can acquire the lock. By calculating the spin duration of the last round we can estimate the worst case delay that can happen to our thread:

image

Those are many milliseconds we can wait until spinning has completed. Is that a real issue?

I have created a simple test application that implements a producer consumer queue where the worker thread works 10ms for each work item and the sender has a delay of 1-9 ms before sending in the next work item. That is sufficient to see the effect:

image

We see for sender thread delays of one and two ms a total duration of 2.2s, whereas for the other delays we are twice as fast with ca. 1.2s. This shows that excessive CPU spinning is not only a cosmetic issue which hurts only heavily multithreaded applications, but also simple producer consumer threading which involves only two threads. For the run above the ETW data speaks for itself that the increased CPU spinning really is the cause of the observed delay:

image

When we zoom into the slow section we find in red the 11ms of spinning, although the worker (light blue) has completed its work and released the lock a long time ago.

image

The fast, non-degenerate case looks much better, where only 1ms is spent spinning for the lock.

image

The test application I used is named SkylakeXPause and is located at https://1drv.ms/u/s!AhcFq7XO98yJgsMDiyTk6ZEt9pDXGA, which contains a zip file with the source code and the binaries for .NET Core and .NET 4.5. What I actually did to compare things was to install, on the Skylake machine, .NET 4.8 Preview which contains the fixes, and .NET Core 2.0 which still implements the old spinning behavior. The application targets .NET Standard 2.0 and .NET 4.5, which produces an exe and a dll. Now I can test the old and new spinning behavior side by side without the need to patch anything, which is very convenient.

readonly object _LockObject = new object();
int WorkItems;
int CompletedWorkItems;
Barrier SyncPoint;
	
void RunSlowTest()
{
	const int processingTimeinMs = 10;
	const int WorkItemsToSend = 100;
	Console.WriteLine($"Worker thread works {processingTimeinMs} ms for {WorkItemsToSend} times");

	// Test one sender one receiver thread with different timings when the sender wakes up again
	// to send the next work item

	// synchronize worker and sender. Ensure that worker starts first
	double[] sendDelayTimes = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };

	foreach (var sendDelay in sendDelayTimes)
	{
		SyncPoint = new Barrier(2);  // one sender one receiver

		var sw = Stopwatch.StartNew();
		Parallel.Invoke(() => Sender(workItems: WorkItemsToSend,          delayInMs: sendDelay),
						() => Worker(maxWorkItemsToWork: WorkItemsToSend, workItemProcessTimeInMs: processingTimeinMs));
		sw.Stop();
		Console.WriteLine($"Send Delay: {sendDelay:F1} ms Work completed in {sw.Elapsed.TotalSeconds:F3} s");
		Thread.Sleep(100);  // show some gap in ETW data so we can differentiate the test runs
	}
}

/// <summary>
/// Simulate a worker thread which consumes CPU which is triggered by the Sender thread
/// </summary>
void Worker(int maxWorkItemsToWork, double workItemProcessTimeInMs)
{
	SyncPoint.SignalAndWait();

	while (CompletedWorkItems != maxWorkItemsToWork)
	{
		lock (_LockObject)
		{
			if (WorkItems == 0)
			{
				Monitor.Wait(_LockObject); // wait for work
			}

			for (int i = 0; i < WorkItems; i++)
			{
				CompletedWorkItems++;
				SimulateWork(workItemProcessTimeInMs); // consume CPU under this lock
			}

			WorkItems = 0;
		}
	}
}

/// <summary>
/// Insert work for the Worker thread under a lock and wake up the worker thread n times
/// </summary>
void Sender(int workItems, double delayInMs)
{
	CompletedWorkItems = 0; // delete previous work
	SyncPoint.SignalAndWait();
	for (int i = 0; i < workItems; i++)
	{
		lock (_LockObject)
		{
			WorkItems++;
			Monitor.PulseAll(_LockObject);
		}
		SimulateWork(delayInMs);
	}
}

Conclusions

This is not a .NET issue. It affects all spinlock implementations which use the pause instruction. I have done a quick check of the Windows kernel of Server 2016, but no such issue is visible there. It looks like Intel was kind enough to give them a hint that some changes in the spinning strategy were needed.

The issue was reported to .NET Core in August 2017, and by September 2017 it was already fixed and pushed out with .NET Core 2.0.3 (https://github.com/dotnet/coreclr/issues/13388). Not only is the reaction speed of the .NET Core team amazing, but the issue has also been fixed on the Mono branch a few days ago, and discussions about even more spinning improvements are ongoing. Unfortunately the Desktop .NET Framework is not moving as fast, but at least with .NET Framework 4.8 Preview we have a proof of concept that the fixes work there as well. Now I am waiting for the backport to .NET 4.7.2 to be able to use .NET at full speed on the latest hardware. This was my first bug which was directly related to a performance change in a single CPU instruction. ETW remains the profiling tool of choice on Windows. If I had a wish, I would ask Microsoft to port the ETW infrastructure to Linux, because the current performance tooling on Linux still falls short. There were some interesting kernel capabilities added recently, but an analysis tool like WPA remains yet to be seen there.

If you are running .NET Core 2.0 or the desktop .NET Framework on CPUs produced since mid 2017, you should definitely check your application with a profiler to see whether you are running at reduced speed due to this issue, and upgrade to the newer .NET Core and hopefully soon the fixed .NET Desktop version. My test application can tell you if you could be having issues:

D:\SkylakeXPause\bin\Release\netcoreapp2.0>dotnet SkylakeXPause.dll -check
Did call pause 1,000,000 in 3.5990 ms, Processors: 8
No SkylakeX problem detected

or 

D:\SkylakeXPause\SkylakeXPause\bin\Release\net45>SkylakeXPause.exe -check
Did call pause 1,000,000 in 3.6195 ms, Processors: 8
No SkylakeX problem detected

The tool will report an issue only if you are running an unfixed .NET Framework on a Skylake CPU. I hope you found the issue as fascinating as I did. To really understand an issue you need to create a reproducer, which allows you to experiment and find all relevant influencing factors. The rest is just boring work, but now I understand the reasons and consequences of CPU spinning much better.

* Denotes changes to make things clearer and add new insights. This article has gained quite some traction at Hacker News (https://news.ycombinator.com/item?id=17336853) and Reddit (https://www.reddit.com/r/programming/comments/8ry9u6/why_skylake_cpus_are_sometimes_50_slower/). It is even mentioned on Wikipedia (https://en.wikipedia.org/wiki/Skylake_(microarchitecture)). Wow. Thanks for the interest.

Serialization Performance Update With .NET 4.7.2

With .NET Framework 4.7.2 out of the door, it was time to update my serialization performance test suite (https://github.com/Alois-xx/SerializerTests). Many serializers have been added since the article https://aloiskraus.wordpress.com/2017/04/23/the-definitive-serialization-performance-guide/ was written, which warrants a post of its own. The performance numbers were updated, but not all of the text.

First of all, the pesky BinaryFormatter O(n^2) issue is gone with .NET 4.7.2 if you add this to your App.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Use this switch to make BinaryFormatter fast with large object graphs starting with .NET 4.7.2 -->
      <AppContextSwitchOverrides value="Switch.System.Runtime.Serialization.UseNewMaxArraySize=true" />
  </runtime>
</configuration>

.NET Core does not need such a setting because it contained the fixed BinaryFormatter from the start (it was added with .NET Core 2.0). The de/serialization performance of .NET Core (2.0.6) is not faster compared to the .NET Framework. You might ask: How can that be? .NET Core is performance obsessed and now it is slower? The answer is that .NET Core itself is not slower, but some serializers targeting .NET Standard execute workarounds for early .NET Core versions. As always, you should measure in your actual target environment to prevent bad surprises.

Which Serializers Are Slower under .NET Core?

Most notably MsgPack.Cli, ServiceStack.Text, BinaryFormatter and Bois perform significantly worse on .NET Core. MsgPack.Cli is over two times slower on .NET Core! How does this look under a CPU profiler?

I have recorded two profiling sessions, one for the .NET Framework and the other for .NET Core. Trace #1 is .NET Core and Trace #2 is the full .NET Framework. The visible WPA tab is a comparative diff view where the graph shows .NET Core and the table contains the diff values. To read the values you need to know that the displayed values are calculated by subtracting Trace #2 from Trace #1 for each row. Negative values mean that .NET Core consumed more CPU relative to the .NET Framework tests. The Weight column shows the CPU time difference in ms, with , as thousand separator and . as decimal point. I know that is a lot of information, but once you get used to that level of detail you will never go back to simple timing based tests where you wonder why the timing always fluctuates. Here I have dissected the worst performing serializers under WPA:

image

What is the reason for that? If one factors out Reflection Get/SetValue and Activator.CreateInstance from the profiled data, we get a delta which is within the error margin of ca. 10%. The delta table below no longer shows large deserialization time differences. All the differences come from many calls to FieldInfo.Get/SetValue and Activator.CreateInstance. This is not the case if the same serializers (the same serializer, but a different dll and hence different code!) are running on the regular .NET Framework.

image

The key takeaway is that the mentioned serializers switch to good old slow reflection and Activator.CreateInstance calls if you use the .NET Standard version of these serializers. This also affects BinaryFormatter, which is 20% slower on .NET Core compared to the full .NET Framework.
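To get a feeling for why that matters, here is a small illustrative micro-benchmark. It is not taken from any of the serializers above, just a sketch of the general cost difference between FieldInfo.SetValue and a compiled setter delegate:

using System;
using System.Diagnostics;
using System.Linq.Expressions;
using System.Reflection;

class Book { public string Title; }

class ReflectionCost
{
    static void Main()
    {
        const int n = 1000 * 1000;
        FieldInfo field = typeof(Book).GetField(nameof(Book.Title));

        // Build a compiled setter once: (book, value) => book.Title = value
        var bookParam  = Expression.Parameter(typeof(Book));
        var valueParam = Expression.Parameter(typeof(string));
        var setter = Expression.Lambda<Action<Book, string>>(
            Expression.Assign(Expression.Field(bookParam, field), valueParam),
            bookParam, valueParam).Compile();

        var book = new Book();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) field.SetValue(book, "A");   // reflection path
        Console.WriteLine($"FieldInfo.SetValue: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < n; i++) setter(book, "A");           // compiled delegate path
        Console.WriteLine($"Compiled setter:    {sw.ElapsedMilliseconds} ms");
    }
}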

Designed To Be Profiled

These are nice graphs which can tell you a lot about an application. With stack tags you can compare the CPU consumption of an exe and a hosted dll which execute the same code on different runtimes, where a comparison by call stack drill down would be useless because even the Main methods are located in different dlls. By extracting the relevant information from your application logic into logical groups, you can compare runtime and resource consumption between a SerializerTests.exe and a dotnet SerializerTests.dll call with no problems. The stack tag file used for these views is part of my serializer test suite at https://github.com/Alois-xx/SerializerTests/blob/master/SerializerStack.stacktags if you are interested.

The format of the .stacktag files is pretty simple:

<?xml version="1.0" encoding="utf-8"?>
<!-- This is a Stacktag file for WPA to analyze the performance of serializers under ETW profiling 
     It allows easy comparison of .NET and .NET Core profiling data
-->
<Tag Name="">
<Tag Name="Deserialize">
    <Tag Name="BinaryFormatter">
        <Entrypoint Module="SerializerTests.*" Method="SerializerTests.Serializers.BinaryFormatter*::Deserialize*"/>
    </Tag>
    <!-- … more tags for the other serializers … -->
</Tag>
</Tag>

To be able to compare .NET and .NET Core, the module pattern is SerializerTests.* because on .NET it compiles to SerializerTests.exe and on .NET Core to SerializerTests.dll, which is executed by the dotnet.exe process. If you compare the CPU time of the profiling data with the actual test duration from the CSV file which is also created, you will find that the test duration is always longer. Even worse, it is pretty hard to zoom into the section of time where the test actually executes. The advantage of creating a test suite is that you can make it profiling friendly. The solution to the timing problem is to use an extra thread that starts waiting for an event when the test starts and becomes signaled when the test has stopped (see the sketch below). That way we get a Context Switch event for each test run, and we can also visualize the thread wait time for each and every test run.
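A minimal sketch of that marker-thread idea (my own illustration, not the exact code used in SerializerTests):

using System;
using System.Threading;

// A dedicated thread blocks on an event while a test runs. The resulting wait and
// wake-up produce Context Switch ETW events whose duration brackets the test run,
// so each test shows up as a distinct wait interval in WPA.
class EtwTestMarker : IDisposable
{
    readonly ManualResetEvent _testDone = new ManualResetEvent(false);
    readonly Thread _marker;

    public EtwTestMarker(string testName)
    {
        _marker = new Thread(() => _testDone.WaitOne()) { Name = "TestMarker_" + testName };
        _marker.Start();      // marker thread now waits => Context Switch event at test start
    }

    public void Dispose()
    {
        _testDone.Set();      // marker thread wakes up => Context Switch event at test end
        _marker.Join();
        _testDone.Dispose();
    }
}

// Usage:
// using (new EtwTestMarker("BinaryFormatter_Deserialize"))
// {
//     RunTest();
// }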

The graph below shows the CPU consumption for the deserialize tests of all tested formatters. The formatters are executed from fast to slow (in rough order). 

image

The magic happens in the CPU Usage (Precise) view: with our custom stacktags we can visualize the test durations as a bar chart with the nice alternating pattern. If one test has a strange runtime between test runs and we have profiling data, we can now drill into each and every test case and do a full root cause analysis. Looking at a specific test run is now as easy as zooming into the right test run and checking out what took so long:

image

And The Winners Are

image

The fastest de/serializers are MessagePack-CSharp (https://github.com/neuecc/MessagePack-CSharp) from Yoshifumi Kawai and GroBuf (https://github.com/skbkontur/GroBuf) from Andrew Kostousov, which beat protobuf-net by a factor of more than 2.5! I have no idea how these guys made it so fast, but this is the fastest C# code I have seen so far. MessagePack-CSharp even comes with code analyzers to make the annotation of your existing objects easy.

The problem with such fast serializers is that you cannot serialize object cycles (StackOverflowException with MessagePack-CSharp, GroBuf and Wire). Another problem is that they do not keep object identity (Wire has an opt-in flag, see SerializerOptions(preserveObjectReferences)). If your objects contain 100 references to the same 1 MB string, it will be serialized 100 times by value without preserving object identity, and you end up with 100 x 1 MB strings. As a general rule you need to pay attention not only to performance but also to the concrete feature set. If you cannot guarantee that your objects never contain object cycles, your application will crash hard without any further notice when you are using such performance optimized libraries (see the small example below). But if you design a reasonable data structure, then MessagePack-CSharp or GroBuf are hot options. Yoshifumi Kawai also created ZeroFormatter, which has the crazy property of having zero deserialization time. The reason why ZeroFormatter is not showing up in the winners section as well is that it sort of cheats the usual benchmarks. ZeroFormatter creates proxy objects on the fly which only contain an index into the byte array they were deserialized from. The actual deserialization cost shows up when you access the properties, which need to be public virtual for that reason. To not distort the measured values, I included in the deserialize test a touch phase that accesses each deserialized property once and measures that as part of the total deserialization cost. It turns out that the deserialize + touch costs are much higher compared to protobuf-net. Personally I do not like ZeroFormatter because it is intrusive to your object design, and you would need to design your data structures in a way that the least amount of data is accessed in your use case. But use cases can and will change. Then you need to redesign your object hierarchy every time you have a different access pattern, or you need to live with suboptimal performance.
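As a tiny illustration of what an object cycle means here (a hypothetical type, not from the test suite), a self-referencing object graph like this is enough to send a serializer that does not track references into infinite recursion:

public class Node
{
    public string Name;
    public Node Next;   // may point back to an earlier node
}

static Node CreateCycle()
{
    var a = new Node { Name = "a" };
    var b = new Node { Name = "b", Next = a };
    a.Next = b;         // cycle: a -> b -> a
    return a;           // a by-value serializer will walk a -> b -> a -> b -> ...
                        // until the stack is exhausted (StackOverflowException)
}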

Similar to ZeroFormatter is FlatBuffer (https://google.github.io/flatbuffers/), which comes with its own IDL and compiler to generate code from a schema. It basically writes structs via memcopy into a byte array where each object reference is an index into another array, which makes it a great candidate if you want to share large datasets between processes via shared memory while reading only a fraction of the data. Just as with ZeroFormatter, the data is deserialized when you actually access it. This is both a curse and a blessing. If you read some objects many times, you create many temporary objects which will hurt GC performance. On the other hand, if you need only a few items of a large array, this lazy deserialization approach is perfect. FlatBuffer does not work with existing objects and it cannot cope with dictionaries, which makes it rather inflexible. One needs to know that FlatBuffer comes from game programming, where most of the data is coordinates and textures. For that it can be a good choice. On the plus side it fully supports versioning despite being pretty low level.

Serializer / Can Preserve Object References / Observations:

  • MessagePackSharp: No
  • GroBuf: No
  • FlatBuffer: No. Data structures are created from the IDL compiler.
  • Wire: Only on paper, via new Serializer(new SerializerOptions(preserveObjectReferences: true)). The Wire unit tests contain cyclic references, but that seems to work only for cycles on the same object. Real objects with many identical references will still be serialized by value.
  • Jil: No. Cannot serialize dictionaries with DateTime as keys.
  • Protobuf_net: Yes, opt-in at declaration level, e.g. [ProtoContract] public class A { ... [ProtoMember(5, AsReference=true)] public C Foo {get;set;} }. See the StackOverflow question.
  • SimSerializer: Yes, by default.
  • ZeroFormatter: No. Serializes only virtual properties.
  • DataContract: Yes, via new DataContractSerializer(type, new DataContractSerializerSettings { PreserveObjectReferences = true }).
  • Bois: No
  • JSON.NET: Not always, via JsonSerializer.Create(new JsonSerializerSettings { PreserveReferencesHandling = PreserveReferencesHandling.All }). Still serializes by value when tried with more complex types like a dictionary.
  • ServiceStack: No. Closes the input stream (see the SO question).
  • XmlSerializer: No
  • MsgPack.Cli: No
  • BinaryFormatter: Yes
  • FastJson: No. Cannot round trip DateTime with 100ns resolution.


By default, all serializers do not track object references, which speeds up serialization significantly at the expense of bigger serialized data.

Which One To Choose?

That is a tricky question, because it strongly depends on whether you can change your existing object model radically or whether you have to keep backwards compatibility while switching to a different serializer. Based on experience I would name protobuf-net as the most feature complete and very fast serializer where you will almost certainly find no blocking issues. For lesser known serializers you should check the number of commits to the project and whether there is recent activity. If the library is no longer actively maintained because the author has shifted focus, you should not use it for mission critical applications. If you are bound to a specific data format like JSON, Jil is by far the fastest serializer on the planet. But be prepared for unpleasant surprises, such as not being able to use Dictionary<DateTime, xxx> with Jil because it throws a NotSupportedException at you.

Not all serializers fulfill the claims about reference and cycle tracking that they advertise at the API level. That is a sign that not all code paths are equally well tested, and if your scenario differs from the main usage you are likely to hit unexpected issues.

If you can change everything, I would go for the fastest one, which is MessagePackSharp, because the author has a great track record of creating other serializers and he is very active on his project. GroBuf, although equally fast, produces significantly larger serialized data, which can be an issue if you need to take into account not only serializer performance but also the data size sent over the wire. A three times larger binary payload can easily defeat any performance gain from a faster serializer if a slow network is in between. To be really sure that your data types work well with the target serializers, you can add your data objects to my test suite (https://github.com/Alois-xx/SerializerTests) and measure for yourself.

Plugging in a new data type to the test suite is as simple as referencing the assembly which defines the type you care about. Then change the tested data type from BookShelf to your custom type, supply an object factory delegate to create your test data (Data), and optionally add a touch delegate to touch all properties after deserialization, to account for serializers that deserialize lazily on access.

        private void CreateSerializersToTest()
        {
            SerializersToTest = new List<ISerializeDeserializeTester>
            {
                new MessagePackSharp<BookShelf>(Data, TouchBookShelf),
                …
            };
        }

A test run then produces output like this:

D:\SerializerTests\bin\Release\net471>SerializerTests -test combined
Serializer      Objects "Time to serialize in s"        "Time to deserialize in s"      "Size in bytes" FileVersion     Framework
MessagePackSharp<BookShelf>     1       0.000   0.000   11      1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     1       0.000   0.000   11      1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     10      0.000   0.000   93      1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     100     0.000   0.000   996     1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     500     0.000   0.000   6014    1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     1000    0.000   0.000   12515   1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     10000   0.001   0.001   138516  1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     50000   0.004   0.006   738516  1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     100000  0.010   0.013   1557449 1.7.3.4 .NET Framework 4.7.3062.0
MessagePackSharp<BookShelf>     200000  0.016   0.032   3357449 1.7.3.4 .NET Framework 4.7.3062.0

Then watch whether the numbers are worth the change and whether all data ends up in the serialized payload. The serialized data is written to a file

image

to make it easy to check if all data was really written to the output or if your class is missing some [MessagePackObject] or [Index] attributes to make the data show up in the serialized output. If you want to check out two different serializers you can let the test run only for the selected ones with

SerializerTests -Runs 1 -test combined -serializer protobuf,MessagePackSharp

to get your results fast. Now go and fix your serialization performance issues!

The Mysterious UI Hang Which Resolved Itself After 20s

Warning: This post includes ETW, Windbg, kernel and process memory dumps. If you don't want to take a deep dive into the Windows internals, you should stop reading now.

One strange issue was a UI hang. Normally these are easy to solve: something is doing CPU intensive work on the UI thread, the UI thread is stuck in a blocking call waiting for something to happen (e.g. reading a 2 GB file), or a deadlock has occurred. But this case was different. The UI was stuck, but sometimes it recovered after 20s. The UI looked like this while it was not responding:

image

With Windbg we can examine where the UI thread is stuck from a live process or a memory dump. For managed code we need to load sos.dll as usual.
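If sos.dll is not loaded automatically, something along these lines usually does the trick in Windbg for a desktop CLR process (shown only as a reminder; the exact module name depends on the runtime you are debugging):

0:000> .loadby sos clr
0:000> !ClrStack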

image

From the screenshot above we find that the managed stack is calling WaitMessage

0:000> !ClrStack
OS Thread Id: 0x2dbc (0)
Child SP       IP Call Site
0053ec9c 761a2a9c [InlinedCallFrame: 0053ec9c] System.Windows.Forms.UnsafeNativeMethods.WaitMessage()
0053ec98 58a4d1ea System.Windows.Forms.Application+ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr, Int32, Int32)
0053ed24 58a4cbee System.Windows.Forms.Application+ThreadContext.RunMessageLoopInner(Int32, System.Windows.Forms.ApplicationContext)
0053ed74 58a4ca60 System.Windows.Forms.Application+ThreadContext.RunMessageLoop(Int32, System.Windows.Forms.ApplicationContext)
0053eda0 58a35d59 System.Windows.Forms.Application.Run(System.Windows.Forms.Form)
UIHang.Program.Main() [D:\Source\FocusHang\UIHang\Program.cs @ 19]

which is a perfectly legal call stack and in no way an indication of a hung UI thread with the id 0x2dbc. The deadlock check command for unmanaged locks (!locks) yielded no results, and the managed counterpart (!SyncBlk) also showed nothing.

Dumping the other threads can be done in Windbg but when many threads are involved the Parallel Stacks window of Visual Studio is much better:

image

The other threads also look normal. By the way, what would an abnormal stack look like? If something has got stuck, I simply check out the threads with the longest stack traces, because these are usually the ones that are actually doing more than waiting for things to happen. But as you can see from the picture above, there are no long stacks involved.

The current dump shows nothing. What can we do? Get more dumps! These showed hangs happening in

  • user32.dll!PeekMessage
  • user32.dll!SetFocus
  • user32.dll!ShowWindow

but nowhere was a reason visible why they were hanging. The window manager of Windows lives inside the kernel in the win32k subsystem. If something is stuck at a deeper level, then it is happening inside the kernel, and user mode stacks are useless. Procdump (my favorite memory dump creation tool) can give you a peek inside the kernel by dumping not only the user mode part of the call stack but also the kernel stacks (this works on Windows 10 only, as far as I know). If you have looked carefully at the Windbg output, you will have noticed that the memory dump was taken with the -mk option (see the Comment: line in the Windbg window), which creates a second dump file besides the user mode dump:

D:\UIHang>procdump -mk -ma UIHang.exe

ProcDump v9.0 – Sysinternals process dump utility
Copyright (C) 2009-2017 Mark Russinovich and Andrew Richards
Sysinternals – http://www.sysinternals.com

[16:56:31] Dump 1 initiated: D:\UIHang\UIHang.exe_180218_165631.dmp
[16:56:31] Dump 1 writing: Estimated dump file size is 177 MB.
[16:56:31] Dump 1 complete: 177 MB written in 0.3 seconds
[16:56:31] Dump 1 kernel: D:\UIHang\UIHang.exe_180218_165631.Kernel.dmp
[16:56:32] Dump count reached.

When you open the xxxx.Kernel.dmp file you can navigate to the user mode thread 0x2dbc from our stuck UI thread to see where the call stack continues in the kernel:

image

Sometimes you can learn something new by looking at the kernel side. In this case the kernel waits for new window messages in NtUserWaitMessage, but it is still not clear why this call never wakes up. In that case it makes sense to examine the contents of the window message queue. Unfortunately that can only be done by MS support, because the whole windowing machinery is not exposed in Windbg or any published Windbg extension that I am aware of. When sending data to someone else we should gather as much evidence as possible. My current favorite data collection for such types of issues is

  • ETW Sample Profiling with 8kHz sample rate and Context Switch Tracing
  • Memory Dump of frozen process
  • Kernel Memory Dump

Full kernel memory dumps are a pain because they are huge. If you are on Windows 10 or Server 2016 there is the option to take a kernel memory dump of only the active memory (https://blogs.msdn.microsoft.com/clustering/2015/05/18/windows-server-2016-failover-cluster-troubleshooting-enhancements-active-dump/), which is great because it excludes the file system cache, which is usually many GB in size. To force the creation of a kernel dump which excludes the file system cache you can create a reg file with the contents below:

CrashOnCtrlScroll_ActiveMemory.reg

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters]
"CrashOnCtrlScroll"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters]
"CrashOnCtrlScroll"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
"CrashDumpEnabled"=dword:00000001
"FilterPages"=dword:00000001

and import the reg file. After that you need to reboot the machine. When you now press

Right Ctrl + Scroll Lock + Scroll Lock

you get a sad face whose bug check code is MANUALLY INITIATED CRASH, which is just what we want. You can also use the .reg file on Windows 7 machines, where you get a full memory dump because the FilterPages registry key is ignored on older Windows versions.

image

On my 16 GB machine I now get a small 2.2 GB dump file.

image

If you want to transfer large files you should compress the data as much as possible. 7z archives are in my experience about 20% smaller than regular .zip files, at the expense of ca. 5 times longer compression times. You can use multithreaded compression with the LZMA2 switch, which splits the data into blocks that can be compressed by multiple threads. If you fall back to LZMA, the speedup is much smaller. If you are doing this on a server machine where users start working again after you have taken the memory dump, you should perhaps stick to the .zip format to compress on a single core and stay nicely in the background.

7z a -m0=LZMA2 c:\temp\Kernel.7z MEMORY.DMP

With LZMA2 (or LZMA over many different files, which also parallelizes well across files) 7z will use all cores it can get. The compressed file is then a 577MB file which can be sent around much more easily. A quick look by Microsoft support revealed that the message queue of our main UI thread is stuck waiting for window messages on another thread with the thread id 1880. Let's check the dump file for that thread with e.g. Windbg:

0:011> ~~[1880]s
eax=00000000 ebx=00000002 ecx=00000000 edx=00000000 esi=00000000 edi=0000050c
eip=7769e7ac esp=0866f7a8 ebp=0866f818 iopl=0         nv up ei pl nz na po nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000202
ntdll!NtWaitForSingleObject+0xc:
7769e7ac c20c00          ret     0Ch
0:011> k
 # ChildEBP RetAddr  
00 0866f7a4 7642ebf9 ntdll!NtWaitForSingleObject+0xc
01 0866f818 70e0d5bd KERNELBASE!WaitForSingleObjectEx+0x99
02 0866f87c 70e0d80e clr!CLRSemaphore::Wait+0xc0
03 0866f8b8 70e0d8a8 clr!ThreadpoolMgr::UnfairSemaphore::Wait+0x132
04 0866f924 70d6edf1 clr!ThreadpoolMgr::WorkerThreadStart+0x389
05 0866fa44 76568654 clr!Thread::intermediateThreadProc+0x55
06 0866fa58 77694a77 kernel32!BaseThreadInitThunk+0x24
07 0866faa0 77694a47 ntdll!__RtlUserThreadStart+0x2f
08 0866fab0 00000000 ntdll!_RtlUserThreadStart+0x1b

Hm. That is a .NET Threadpool thread which waits for more work to come. There is no indication of what happened before on that thread. But since we have recorded ETW tracing, we know a lot of the history of that thread thanks to high frequency sample profiling data at 8kHz. The default sampling rate of 1kHz is not sufficient when you are searching for cheap method calls which can have far reaching consequences. In this case we are not looking for a performance issue, but we want to know which methods this thread executed before it was waiting for more work.
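For reference, with xperf the sample rate is controlled via the profile interval (in 100 ns units) before starting the trace; something like the lines below should give roughly 8kHz. The exact flags and interval used for this investigation are not shown in the post, so treat this only as an assumed example:

xperf -SetProfInt 1221 cached
xperf -on PROC_THREAD+LOADER+PROFILE+CSWITCH -stackwalk Profile+CSwitch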

Let's check out what thread 0x1880 = 6272 was doing in WPA:

image

It was executing a TPL task on that thread where the suspiciously named ctor of HangForm was called. What is even more important: the thread was destroyed exactly 20s after it had executed the ctor of the form, and Windows destroys the window of our hung form on thread exit (see the Selection Duration of 20.011s). After that the UI was responsive again. At least that is the observation. But that still does not explain why the main UI thread was stuck. Even if you create a window on another thread, you can have multiple UI threads within an application without issues as long as you do not mix them up. WinForms is very vigilant in this respect and will always throw an InvalidOperationException of the form

InvalidOperationException: "Cross-thread operation not valid: Control '<name>' accessed from a thread other than the thread it was created on."

if you try bad things like that.

The only way I know of to connect window message pumps from different threads is to call AttachThreadInput. When we search the profiling data for that method we find this:

 |    |    |    |    |- UIHang.exe!UIHang.HangForm::StartUIOnOtherThread 0x0
 |    |    |    |    |    |- UIHang.exe!UIHang.HangForm::.ctor 0x0
 |    |    |    |    |    |- System.Windows.Forms.ni.dll!System.Windows.Forms.Control.Show()
 |    |    |    |    |    |- UIHang.exe!dynamicClass::IL_STUB_PInvoke 0x0
 |    |    |    |    |    |    |- user32.dll!SetParentStub
 |    |    |    |    |    |    |    win32u.dll!NtUserSetParent
 |    |    |    |    |    |    |    ntdll.dll!LdrInitializeThunk
 |    |    |    |    |    |    |    ntdll.dll!LdrpInitialize
 |    |    |    |    |    |    |    ntdll.dll!_LdrpInitialize
 |    |    |    |    |    |    |    wow64.dll!Wow64LdrpInitialize
 |    |    |    |    |    |    |    wow64.dll!RunCpuSimulation
 |    |    |    |    |    |    |    wow64cpu.dll!Thunk0Arg
 |    |    |    |    |    |    |    wow64cpu.dll!CpupSyscallStub
 |    |    |    |    |    |    |    ntoskrnl.exe!KiSystemServiceCopyEnd
 |    |    |    |    |    |    |    win32kfull.sys!NtUserSetParent
 |    |    |    |    |    |    |    win32kfull.sys!xxxSetParentWorker
 |    |    |    |    |    |    |    |- win32kfull.sys!xxxShowWindowEx
 |    |    |    |    |    |    |    |- win32kfull.sys!zzzAttachThreadInput
The window was created on another thread, but that code did not attach the thread input queues directly. Instead it called user32.dll!SetParent, which in the kernel, in the win32k subsystem, attaches the window input queues by calling zzzAttachThreadInput. That is all happening on our non message pumping TPL task thread, which is the missing ingredient to finally understand why our main UI thread was blocked due to a programming error on a seemingly unrelated thread. Even with an 8kHz sampling rate we have only one stack trace from the zzzAttachThreadInput method, so one still needs a bit of luck to see the root cause this nicely with ETW data.
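Pieced together from the stack above, the problematic pattern presumably looks roughly like the sketch below (apart from HangForm and StartUIOnOtherThread, the names are my guesses, not the actual UIHang code):

using System;
using System.Runtime.InteropServices;
using System.Threading.Tasks;
using System.Windows.Forms;

static class HangRepro
{
    [DllImport("user32.dll")]
    static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

    // Called from the main UI thread; mainWindowHandle belongs to a window of the main thread.
    public static void StartUIOnOtherThread(IntPtr mainWindowHandle)
    {
        Task.Factory.StartNew(() =>
        {
            var form = new HangForm();        // window is created on a threadpool thread ...
            form.Show();                      // ... which never pumps messages
            // SetParent implicitly attaches the input queues of both threads
            // (win32kfull.sys!zzzAttachThreadInput), so the main UI thread now shares a
            // message queue with this non-pumping thread and stops responding.
            SetParent(form.Handle, mainWindowHandle);
        });
    }
}

class HangForm : Form { }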

More documentation about that Win32 behavior would be great. These details seem to be discussed on The Old New Thing blog (Sharing an input queue takes what used to be asynchronous and makes it synchronous, like focus changes). A few more hints are given on p. 41 of https://www.slideshare.net/wvdang/five-things-every-win32-developer-should-know. According to that, you will attach thread input queues implicitly if you

  • Set a parent window (user32.dll!SetParent)
  • Set an owner window (user32.dll!SetWindowLongPtr(win32window, GWLP_HWNDPARENT, formhandle))
  • Or install journal hooks (user32.dll!SetWindowsHookEx with a JournalRecordProc)

Win32K ETW Tracing?

You can also enable tracing for the Win32k subsystem to track window focus events by adding this ETW provider to your xperf command line:

Microsoft-Windows-Win32k:0x42e3000:0xff

But if the window message pump is stuck, the results of this ETW provider and the WPA Window In Focus chart can be misleading.

Conclusions

Window message queue issues are notoriously hard to debug, because most relevant data is only available during live debugging, while you can still query window states with e.g. Spy++. If you only have a memory dump you will have a hard time figuring out what went wrong. A kernel dump would give you all the information, but since there is no public information on how to examine the contents of a window message queue, this must be left as an exercise for Microsoft support. If someone knows how to get e.g. the thread affinity of an HWND from a user mode memory dump, please leave a note below. Once again memory dumps and ETW tracing have helped to find the actual root cause. The memory dump helps to find stuck threads and strange data points. ETW helps you to find out how you got into that state once you know from the dump file where you need to look further.

This time I have learned that .NET thread pool threads seem to be shut down about 20 s after their last real work item and that Windows destroys window handles once the creating thread terminates, which can unblock a previously stuck UI as a side effect. If you want to play with the UIHang application you can find it here: https://1drv.ms/f/s!AhcFq7XO98yJgrklCE9_p4RuHoG0Mg

Be Careful Where You Put GC.SuppressFinalize

I had an interesting issue to debug which turned out to be a race condition where the finalizer was called while the object was still in use. If you know how .NET works, this should ring some alarm bells, because that should never happen: the finalizer is expected to run only when no one holds a reference to the finalizable object anymore.

A simple reproducer is below. It creates 50K finalizable objects. Each object allocates 500 bytes of unmanaged memory which is released either by a Dispose call on a dedicated thread or by the finalizer thread, which cleans up the rest during application shutdown.

using System;
using System.Linq;
using System.Runtime.InteropServices;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        // create 50K events
        var events = Enumerable.Range(1, 50 * 1000)
                                .Select(x => new Event())
                                .ToList();

        ManualResetEvent startEvent = new ManualResetEvent(false);

        Task.Factory.StartNew(() =>
        {
            startEvent.WaitOne();  // wait for event
            foreach (var ev in events) // dispose events
            {
                ev.Dispose();
            }
        });

        startEvent.Set(); // start disposing events
        Thread.Sleep(1);  // wait a bit and then exit
    }
}

public class Event : IDisposable
{
    internal IntPtr hGlobal;  // allocate some unmanaged memory

    public Event()
    {
        hGlobal = Marshal.AllocHGlobal(500);
    }

    ~Event()  // finalizer 
    {
        Dispose();
    }

    public void Dispose()
    {
        if( hGlobal !=  IntPtr.Zero) // check if memory is gone
        {
            Marshal.FreeHGlobal(hGlobal); // free it
            GC.SuppressFinalize(this); // Prevent finalizer from running it again
            hGlobal = IntPtr.Zero;
        }
    }
}

Looks good to you? Let it run:

image

Oops, that should not happen. When trying to run the application under the VS debugger everything works on my machine™. No matter how hard I try, it never crashes under the debugger. But if I start it without debugging it crashes every time.

Debug The Problem

When the application crashes without the debugger on a machine where VS is installed you will get a nice dialog

image

where you can click Debug. Then I chose managed and unmanaged debugging

image

Because part of the issue has to do with the .NET runtime we need both managed and unmanaged debugging, so it is wise to enable Native and Managed debugging.

image

If you do not manually select both debugging engines, VS will default to unmanaged debugging only and we would miss our managed stack frames, which is not particularly helpful:

image

With the correct debugging engines we find that a heap corruption was reported while the finalizer was running:

image

While another thread is also disposing events

image

So what is the problem here? Could it be that the finalizer is disposing the same instance on which our TPL thread is still working? A concurrent double free sounds likely, but with Visual Studio alone we cannot prove it. If a finalizer were called while the object is still alive we would have found a pretty serious GC bug. On the other hand, if that were the case many people would have complained already.

Gather More Evidence

To analyze the crash with other tools it is good to save a memory dump of the crashing application. You can do this pretty easily with

D:\Source\FinalizerFun\bin\Release>procdump -ma -e -x . FinalizerFunNetFull.exe

ProcDump v9.0 – Sysinternals process dump utility
Copyright (C) 2009-2017 Mark Russinovich and Andrew Richards
Sysinternals – www.sysinternals.com

[21:13:50] Exception: 04242420
[21:13:52] Exception: 80000003.BREAKPOINT
[21:13:52] Exception: C0000374
[21:13:52] Unhandled: C0000374
[21:13:52] Dump 1 initiated: .\FinalizerFunNetFull.exe_180204_211352.dmp
[21:13:52] Dump 1 writing: Estimated dump file size is 83 MB.
[21:13:52] Dump 1 complete: 83 MB written in 0.1 seconds
[21:13:52] Dump count reached.

procdump is a command line tool to take memory dumps in many ways. This time we take a full memory dump (-ma) on an unhandled exception (-e) while executing a process (-x) and put the dump into the current directory (.), followed by the executable and its optional command line arguments. The most difficult part is that I always forget that the first parameter after -x is not the executable and its arguments but the dump folder. If you want to capture a dump on first chance exceptions, before the exception becomes unhandled, you normally use -e 1, but for reasons unknown to me that never triggered the creation of a dump file. If all else fails you can still take a memory dump while the “… has stopped working” dialog is shown with procdump for a given pid, like “procdump -ma pid”.
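
For reference, the two variations mentioned above look roughly like this (1234 stands for the pid of the already running process, taken e.g. from Task Manager):

procdump -ma -e 1 -x . FinalizerFunNetFull.exe
procdump -ma 1234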

You can open the memory dump with Visual Studio without problems by dragging and dropping the .dmp file from Explorer into VS

image

Press Debug with Mixed to see managed and unmanaged code. Many people shy away from memory dumps, but if you dare to open them the debugging experience is the same as for a live process stuck at a breakpoint. The only difference is that you cannot continue execution. VS will show your source code and the crashing thread just like it would during a live debugging session:

image

VS has had great memory dump support since around VS2012/2013. If you have an automated build system it is possible to get full source code debugging for your released application. The feature is called Source Server support. For TFS builds it is a simple config switch of your build. With git things are more involved: https://shonnlyga.wordpress.com/2016/05/28/source-server-with-git-repository/. If you have source indexed builds you definitely want to enable Source Server support in the debugger to get live and memory dump debugging without the need to download the source files manually. In Debug – Options

image

check all items below Enable source server support. Unfortunately VS 2017 has broken Source Server Support which is tracked here: https://developercommunity.visualstudio.com/content/problem/169309/debugger-cant-create-folder-structure-when-trying.html

It works in VS 2013, 2015 and 2017 15.6 (still beta). As a workaround you can copy srcsrv.dll from an earlier VS edition into the VS2017 one to get source server support back.

No, Not Windbg!

We have reached a dead end with Visual Studio. It is time to admit that the nice GUI based tools, although powerful, are not always the most helpful ones when you want to completely understand an issue. First we need to download Windbg, for which MS has put up a page at https://developer.microsoft.com/en-us/windows/hardware/download-windbg. This will point you to the Windows SDK page

image

from where you can download the Windows SDK installer. If the installer won't start you already have a newer version of the Win 10 SDK installed. In that case you can download the latest SDK installer from https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk.

When you start the installer you need to press next a few times to get to the list of features you want to install. Check Debugging Tools for Windows and press Install.

image

Now you will find the 32 bit version of Windbg in

“C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\windbg.exe”

and the 64 bit version at

“C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\windbg.exe”

Start the correct Windbg version (x86,x64) and load the crash dump file.

image

Since it is a managed process we need to load the managed debugging extension named sos.dll. This is done with the Windbg command

.loadby sos clr

If you get a failure of the form

0:005> .loadby sos clr
The call to LoadLibrary(C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos) failed, Win32 error 0n193
    “%1 is not a valid Win32 application.
Please check your debugger configuration and/or network access.

then you loaded the wrong Windbg: you opened a 32 bit dump with the 64 bit version of Windbg. Things get easier with the upcoming new Windbg, currently in beta, where there is only one frontend which loads the right debugging engine for you.
The cryptic command tells the debugger to load sos.dll from the same directory where the .NET runtime dll clr.dll is located. If you wish you can also fully qualify the name like

.load C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos.dll

For the 64 bit framework the command is

.load C:\Windows\Microsoft.NET\Framework64\v4.0.30319\sos.dll

If you use quotes for the path then you need to adhere to the C style escape rules where you need \\ to get a \. If you analyze a memory dump on another machine with a different .NET Framework version installed, some of the SOS commands might not work or sos.dll refuses to load. In that case you can check out my OneDrive folder https://1drv.ms/f/s!AhcFq7XO98yJgoMwuPd7LNioVKAp_A which contains a pretty up to date collection of nearly all .NET Framework sos dlls. You need to extend the symbol path to the downloaded sos dlls (.sympath+ c:\mscordackwksDownloadDir) and then load it via the full path. Things will become easier in the future now that Windbg seems to be able to load the right sos.dll automatically from the symbol server.

We have a managed debugging extension loaded. Now what? First we test if the extension works by executing the !Threads command

0:000> !Threads
c0000005 Exception in C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos.Threads debugger extension.
      PC: 0b13b8e3  VA: 00000000  R/W: 0  Parameter: ed04c8b4
0:000> !Threads
ThreadCount:      4
UnstartedThread:  0
BackgroundThread: 4
PendingThread:    0
DeadThread:       0
Hosted Runtime:   no
                                                                         Lock  
       ID OSID ThreadOBJ    State GC Mode     GC Alloc Context  Domain   Count Apt Exception
   0    1 2594 02959160   2022220 Preemptive  046E6CF8:00000000 02952d00 0     MTA 
   5    2 1a08 02966f30     2b220 Preemptive  046F3CDC:00000000 02952d00 0     MTA (Finalizer) System.BadImageFormatException 046ebff4
   9    3 4300 06d3f690   3021220 Preemptive  046E829C:00000000 02952d00 0     MTA (Threadpool Worker) 
  11    4 2cec 06d41e78   1029220 Preemptive  046EA1E4:00000000 02952d00 0     MTA (Threadpool Worker) 

For some reason the first time I execute the command I get an exception, but it works the second time. This has been happening to me for years on many different machines. I have no idea what the bug is, but it should be fixed someday. We know that we have 4 threads and that one thread did throw a BadImageFormatException. Let's examine that thread. The first column contains the thread numbers Windbg assigns to make switching between threads easier. The command to switch to thread 5, where our exception lives, is

~5s

Then we can execute the sos command to dump the managed thread stack with

0:005> !ClrStack
OS Thread Id: 0x1a08 (5)
Child SP       IP Call Site
0676f888 7748ed3c [HelperMethodFrame: 0676f888] System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32, IntPtr)
0676f8fc 70d0065e System.Runtime.InteropServices.Marshal.FreeHGlobal(IntPtr) [f:\dd\ndp\clr\src\BCL\system\runtime\interopservices\marshal.cs @ 1211]
0676f908 0291116a Event.Dispose() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 51]
0676f914 029111a9 Event.Finalize() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 43]
0676fb10 714e63f2 [DebuggerU2MCatchHandlerFrame: 0676fb10] 

We know that thread number 5 is the finalizer thread and we see that it is indeed calling into Event.Finalize where our exception happens. So far we have not gotten more information than from the much easier to use Visual Studio debugger. Now let's check on which event object the finalizer was called. For that we can use a heuristic command named !dso which is short for Dump Stack Objects.

0:005> !dso
OS Thread Id: 0x1a08 (5)
ESP/REG  Object   Name
0676F5A8 046ebff4 System.BadImageFormatException
0676F698 046ebff4 System.BadImageFormatException
0676F6AC 046ebff4 System.BadImageFormatException
0676F6D0 046ebff4 System.BadImageFormatException
0676F6FC 046ebff4 System.BadImageFormatException
0676F710 046ebff4 System.BadImageFormatException
0676F718 046ebff4 System.BadImageFormatException
0676F71C 046ebff4 System.BadImageFormatException
0676F7BC 046ebff4 System.BadImageFormatException
0676F7FC 046ebff4 System.BadImageFormatException
0676F8FC 046507c0 Event
0676F958 046507c0 Event
0676F98C 046507c0 Event
0676F998 046507c0 Event
0676F9A8 046507c0 Event
0676F9B0 046507c0 Event
0676F9C0 046507c0 Event

The command is rather dumb and dumps the same object reference several times, once for every location where it was found as a pointer on the thread stack. There is actually a much better extension for that called netext (https://github.com/rodneyviana/netext/tree/master/Binaries). To “install” the extension you can copy it into the Windbg default extension folder, which allows you to load the dll without a directory qualifier; on my machine these folders are

  • C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\winext
  • C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\winext

Now we can load it

0:000> .load netext
netext version 2.1.19.5000 Feb  6 2018
License and usage can be seen here: !whelp license
Check Latest version: !wupdate
For help, type !whelp (or in WinDBG run: ‘.browse !whelp’)
Questions and Feedback:
https://github.com/rodneyviana/netext/issues
Copyright (c) 2014-2015 Rodney Viana (
http://blogs.msdn.com/b/rodneyviana)
Type: !windex -tree or ~*e!wstack to get started

0:005> !wstack

Listing objects from: 0676b000 to 06770000 from thread: 5 [1a08]

046ebff4 701d13c4   0  0         92 System.BadImageFormatException
046507c0 028b6260   0  0         12 Event

2 unique object(s) found in 104 bytes

to get much less cluttered output. This extension is pure gold because it allows you to write LINQ style debugger queries to e.g. dump all object instances which derive from a common base class. It also has extended support for WCF connections, sockets and ASP.NET specific things.

From the dump we know that the event 046507c0 caused an exception in the unmanaged heap. Was someone else working with this object? Visual Studio is of no help here, but we can use the !GCRoot command to find out who else references this object:

0:005> !GCRoot 046507c0
Thread 1a08:
    0676f908 0291116a Event.Dispose() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 51]
        esi: 
            ->  046507c0 Event

Thread 4300:
    08edf790 0291116a Event.Dispose() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 51]
        esi: 
            ->  046507c0 Event

    08edf79c 02911108 Program+<>c__DisplayClass0_0.<Main>b__1() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 23]
        ebp+2c: 08edf7ac
            ->  046324b4 System.Collections.Generic.List`1[[Event, FinalizerFuncNetFull]]
            ->  05655530 Event[]
            ->  046507c0 Event

    08edf79c 02911108 Program+<>c__DisplayClass0_0.<Main>b__1() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 23]
        ebp+28: 08edf7b0
            ->  046507c0 Event

Found 4 unique roots (run '!GCRoot -all' to see all roots).

The finalizer thread 1a08 was expected, but what is thread 4300 doing with our object? Let's switch to that thread. We can use either the Windbg thread number or, with the even more cryptic command below, the OS thread id

0:005> ~~[4300]s
eax=00000000 ebx=00000001 ecx=00000000 edx=00000000 esi=00000001 edi=00000001
eip=7748ed3c esp=08edf2b8 ebp=08edf448 iopl=0         nv up ei pl nz na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000206
ntdll!NtWaitForMultipleObjects+0xc:
7748ed3c c21400          ret     14h
0:009> !ClrStack
OS Thread Id: 0x4300 (9)
Child SP       IP Call Site
08edf754 7748ed3c [InlinedCallFrame: 08edf754] 
08edf750 7013bb80 DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr)
08edf754 7010d8b9 [InlinedCallFrame: 08edf754] Microsoft.Win32.Win32Native.LocalFree(IntPtr)
08edf784 7010d8b9 System.Runtime.InteropServices.Marshal.FreeHGlobal(IntPtr) [f:\dd\ndp\clr\src\BCL\system\runtime\interopservices\marshal.cs @ 1212]
08edf790 0291116a Event.Dispose() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 51]
08edf79c 02911108 Program+c__DisplayClass0_0.b__1() [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 23]
08edf7e0 70097328 System.Threading.Tasks.Task.InnerInvoke() [f:\dd\ndp\clr\src\BCL\system\threading\Tasks\Task.cs @ 2884]
08edf7ec 70096ed0 System.Threading.Tasks.Task.Execute() [f:\dd\ndp\clr\src\BCL\system\threading\Tasks\Task.cs @ 2498]
08edf810 700972fa System.Threading.Tasks.Task.ExecutionContextCallback(System.Object) [f:\dd\ndp\clr\src\BCL\system\threading\Tasks\Task.cs @ 2861]
08edf814 7010bcd5 System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean) [f:\dd\ndp\clr\src\BCL\system\threading\executioncontext.cs @ 954]
08edf880 7010bbe6 System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean) [f:\dd\ndp\clr\src\BCL\system\threading\executioncontext.cs @ 902]
08edf894 70097178 System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef) [f:\dd\ndp\clr\src\BCL\system\threading\Tasks\Task.cs @ 2827]
08edf8f8 7009704d System.Threading.Tasks.Task.ExecuteEntry(Boolean) [f:\dd\ndp\clr\src\BCL\system\threading\Tasks\Task.cs @ 2767]
08edf908 70096fcc System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem() [f:\dd\ndp\clr\src\BCL\system\threading\Tasks\Task.cs @ 2704]
08edf90c 700e87f2 System.Threading.ThreadPoolWorkQueue.Dispatch() [f:\dd\ndp\clr\src\BCL\system\threading\threadpool.cs @ 820]
08edf95c 700e865a System.Threading._ThreadPoolWaitCallback.PerformWaitCallback() [f:\dd\ndp\clr\src\BCL\system\threading\threadpool.cs @ 1161]
08edfb80 7143eb16 [DebuggerU2MCatchHandlerFrame: 08edfb80] 

Ahh, that is our TPL thread which is also freeing the object. The call stack shows that we caught it in the act: it was still inside Marshal.FreeHGlobal while the finalizer finalized the same object right away! That is pretty serious, since that must never happen. To see the full picture we need a mixed mode stack with none of the stack frames hidden like Visual Studio does. For mixed mode stacks another Windbg extension is best suited: sosex (http://www.stevestechspot.com/)

0:009> .load sosex
This dump has no SOSEX heap index.
The heap index makes searching for references and roots much faster.
To create a heap index, run !bhi
0:009> !mk
Thread 9:
        SP       IP
00:U 08edf2b8 7748ed3c ntdll!NtWaitForMultipleObjects+0xc
01:U 08edf2bc 753f1293 KERNELBASE!WaitForMultipleObjectsEx+0x103
02:U 08edf450 714dff96 clr!WaitForMultipleObjectsEx_SO_TOLERANT+0x3c
03:U 08edf4a0 714dfcd8 clr!Thread::DoAppropriateWaitWorker+0x237
04:U 08edf52c 714dfdc9 clr!Thread::DoAppropriateWait+0x64
05:U 08edf598 714dff3c clr!CLREventBase::WaitEx+0x128
06:U 08edf5e4 71560152 clr!CLREventBase::Wait+0x1a
07:U 08edf5fc 714fe9dc clr!WaitForEndOfShutdown_OneIteration+0x81
08:U 08edf670 714fea29 clr!WaitForEndOfShutdown+0x1b
09:U 08edf67c 714fcd76 clr!Thread::RareDisablePreemptiveGC+0x52f
0a:U 08edf6c8 714e8374 clr!JIT_RareDisableHelper+0x24
0b:M 08edf74c 7013bb95 DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr)
0c:M 08edf750 7013bb80 DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr)
0d:M 08edf784 7010d8b9 System.Runtime.InteropServices.Marshal.FreeHGlobal(IntPtr)(+0xe IL,+0x19 Native) [f:\dd\ndp\clr\src\BCL\system\runtime\interopservices\marshal.cs @ 1212,17]
0e:M 08edf790 0291116a Event.Dispose()(+0x1d IL,+0x12 Native) [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 51,13]
0f:M 08edf79c 02911108 Program+<>c__DisplayClass0_0.<Main>b__1()(+0x21 IL,+0x70 Native) [D:\Source\vc17\FinalizerFuncNetFull\FinalizerFuncNetFull\Program.cs @ 23,17]

There we see that the thread called into unmanaged code to free the heap memory, but the CLR will not let it return to managed code anymore because we are shutting down the process. There are some gotchas related to managed application shutdown which I wrote about a long time (12 years now) ago here: https://www.codeproject.com/Articles/16164/Managed-Application-Shutdown. Most of it is still valid. The key takeaway is that when managed application shutdown is initiated the .NET runtime ensures that

  • a managed thread which has called into unmanaged code is blocked on its way back and never returns to managed code (clr!WaitForEndOfShutdown)
  • all managed threads are suspended except for the finalizer thread

0:009> ~
#  0  Id: aa0.2594 Suspend: 1 Teb: 002d2000 Unfrozen
   1  Id: aa0.18e0 Suspend: 1 Teb: 002d5000 Unfrozen
   2  Id: aa0.3ac4 Suspend: 1 Teb: 002d8000 Unfrozen
   3  Id: aa0.30c0 Suspend: 1 Teb: 002db000 Unfrozen
   4  Id: aa0.1d34 Suspend: 1 Teb: 002de000 Unfrozen
  5  Id: aa0.1a08 Suspend: 0 Teb: 002e1000 Unfrozen
   6  Id: aa0.2954 Suspend: 1 Teb: 002e4000 Unfrozen
   7  Id: aa0.3cf4 Suspend: 1 Teb: 002e7000 Unfrozen
   8  Id: aa0.3d2c Suspend: 1 Teb: 002ea000 Unfrozen
.  9  Id: aa0.4300 Suspend: 1 Teb: 002ed000 Unfrozen
  10  Id: aa0.4224 Suspend: 1 Teb: 002f0000 Unfrozen
  11  Id: aa0.2cec Suspend: 1 Teb: 002f3000 Unfrozen

  • All finalizable objects are treated as ready for finalization, regardless of whether they are still reachable
  • Only the finalizer thread is allowed to run, which finalizes all objects that are now considered garbage

The problem with that approach is an inherent race condition: if a not yet completed Dispose call is still executing the unmanaged cleanup code, the finalizer will run the same unmanaged cleanup a second time.
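
To make the race window concrete, here is a sketch of the classic Dispose pattern as it is commonly written (this is not taken from any specific BCL class), with comments marking where the shutdown race can strike:

using System;
using System.Runtime.InteropServices;

public class ClassicEvent : IDisposable
{
    IntPtr hGlobal = Marshal.AllocHGlobal(500);  // unmanaged resource

    ~ClassicEvent()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        // If shutdown starts while this thread is still inside Dispose(true),
        // the thread never gets here and the finalizer above frees the memory
        // a second time - exactly the race described above.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (hGlobal != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(hGlobal); // shutdown can freeze this thread right after this unmanaged call
            hGlobal = IntPtr.Zero;        // ... so this line may never execute
        }
    }
}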

Is This A Problem?

Well, let's check who in the .NET Framework calls GC.SuppressFinalize

image

There are quite a few classes in the Base Class Library which implement their finalizers this way. GC.SuppressFinalize is always called last, which is a time bomb waiting to crash on you at the worst possible time, e.g. killing your UI while you are closing everything. Let's try an experiment and change our code to create brushes instead of events:

// create 50K events
var events = Enumerable.Range(1, 50 * 1000)
                        .Select(x => new SolidBrush(Color.AliceBlue))
                        .ToList();

When I let it run I get a nice AccessViolationException which some of us have certainly seen sporadically and were left wondering why that exception happened to them:

image

To be fair, not all classes listed above are susceptible to that race condition. Some classes already check whether a shutdown is in progress and in that case do nothing:

~OverlappedCache()
{
    if (!NclUtilities.HasShutdownStarted)
    {
        this.InternalFree();
    }
}

The Fix

There are several ways to get around that. The easiest is to move the GC.SuppressFinalize call in front of the actual cleanup inside Dispose, which prevents the finalizer from running during shutdown if a Dispose call is already executing. If an exception escapes from the Dispose call, the cleanup will not be retried by the finalizer, which still sounds like a good deal for most resources.

public void Dispose()
{
    if( hGlobal !=  IntPtr.Zero) // check if memory is gone
    {
        GC.SuppressFinalize(this); // Prevent finalizer from running it again
        Marshal.FreeHGlobal(hGlobal); // free it            
        hGlobal = IntPtr.Zero;
    }
}

Another way is to check if a shutdown or an AppDomain unload is happening right now:

    if (!Environment.HasShutdownStarted && !AppDomain.CurrentDomain.IsFinalizingForUnload())
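
Applied to the Event class from above, the guarded finalizer could look like this (a sketch; putting the check into the finalizer is just one possible place):

~Event()
{
    // During process shutdown or AppDomain unload a concurrent Dispose call
    // may still be stuck inside Marshal.FreeHGlobal, so skip the cleanup here
    // and let the OS reclaim the memory at process exit.
    if (!Environment.HasShutdownStarted && !AppDomain.CurrentDomain.IsFinalizingForUnload())
    {
        Dispose();
    }
}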

.NET Core on the other hand does not suffer from that issue because it does not run finalizers during process shutdown, which prevents that race condition entirely. Now go and check your finalizers to make sure your application shuts down correctly.

Update 1

As requested by Steve I present a fixed safe version:

In 2018 you should not write a finalizer at all. The basic Dispose(bool bDisposing) pattern is from a time when we had no SafeHandles. Today I would write my Event class entirely without a finalizer; the unmanaged resources are wrapped in self contained, finalizable SafeHandles. A typical wrapper which owns the memory pointer would look like the one below:

sealed class SafeNativeMemoryHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public SafeNativeMemoryHandle(int size):base(true)
    {
        SetHandle(Marshal.AllocHGlobal(size));
    }

    protected override bool ReleaseHandle()
    {
        if (this.handle != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(this.handle);
            this.handle = IntPtr.Zero;
            return true;
        }
        return false;
    }
}

With that infrastructure in place we can reduce the event class to a much simpler version which will never leak any memory although it contains no finalizer at all:

/// <summary>
/// SafeEvent class needs no finalizer because unmanaged resources
/// are managed by the SafeNativeMemoryHandle which is the only class which needs a finalizer.
/// </summary>
public class SafeEvent : IDisposable
{
    internal SafeNativeMemoryHandle hGlobal;  // allocate some unmanaged memory

    public SafeEvent()
    {
        hGlobal = new SafeNativeMemoryHandle(500);
    }

    public void Dispose()
    {
        hGlobal.Dispose();
        hGlobal = null;
    }
}

You can also create event hierarchies by making the Dispose method virtual without fear of leaking any handles from derived classes. Each class which contains unmanaged resources should wrap them in its own self cleaning members, and you are done as long as there are no dependencies between them.
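
A minimal sketch of what such a hierarchy could look like (the class names and the extra buffer are made up for illustration):

public class SafeEventBase : IDisposable
{
    SafeNativeMemoryHandle hGlobal = new SafeNativeMemoryHandle(500);

    public virtual void Dispose()
    {
        hGlobal.Dispose();  // the SafeHandle's own finalizer remains as a safety net
    }
}

public class DerivedSafeEvent : SafeEventBase
{
    SafeNativeMemoryHandle extraBuffer = new SafeNativeMemoryHandle(1024); // its own unmanaged resource

    public override void Dispose()
    {
        extraBuffer.Dispose();  // each class releases only what it owns
        base.Dispose();
    }
}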

Crashing VS2017 15.5.2

I was just playing with the new readonly struct features of C# 7.2. To check if things got faster I first declared a new struct

    readonly struct FastPoint
    {
        public int X { get; set; }
        public int Y { get; set; }
    }

But Visual Studio will then complain

Program.cs(12,20,12,21): error CS8341: Auto-implemented instance properties in readonly structs must be readonly.
Program.cs(13,20,13,21): error CS8341: Auto-implemented instance properties in readonly structs must be readonly.

Ok. No problem, let's make the setter private. But the error stays. Perhaps I need to add some modifier to the getter. Let's try readonly

    public int X { readonly get; private set; }

This results in

error CS0106: The modifier ‘readonly’ is not valid for this item

Ok. Now I am desperate. Let's try ref readonly. Hah, something happens:

image

But not for the better. VS eats up all the memory and if you try to compile it will transfer the leak into PerfWatson2.exe as well.

image

Ok, that was not it. The final solution was to remove the setter completely. Interestingly you can still set the property from the constructor although it has no declared setter.

    readonly struct FastPoint
    {
        public int X { get; }
        public int Y { get; }

        public FastPoint(int x, int y)
        {
            X = x;
            Y = y;
        }
    }

This turns out to be a C# 6 feature (get-only auto-properties) I was not aware of until now. Problem solved. But wait, what caused the ever increasing memory consumption of the compiler?

From the call stacks we can deduce quite a bit

image

Roslyn is parsing a property declaration and finds ref, which is a valid token. Some memory is allocated for the token, but later it is treated as an invalid token. That in itself would not be too bad, but the parser seems to rewind and then tries to parse the same wrong property declaration again, which results in unbounded memory consumption. I have reported the issue here

https://developercommunity.visualstudio.com/content/problem/168027/typing-ref-readonly-at-the-wrong-places-will-crash.html

which will hopefully be fixed. The error message is ok in hindsight, but it did confuse me the first time. If you want to play with the newest C# features you need to open the Build properties tab, press Advanced and then select e.g. “C# latest minor version” to always use the latest C# version.

image

Let's hope you are not hitting new memory leaks as fast as I did.

The Case Of NGen.exe Needing 50 GB Of Memory

This is an old bug which seems to have been in the .NET Framework for a long time, but since it is highly sporadic it was not found until now. I have got reports that on some machines NGen.exe used up all of the computer's memory, which led to this pattern in Task Manager:

clip_image002

The biggest process on that machine was always Ngen.exe and everything was very slow. This tells me that NGen did not recover from its high memory consumption from time to time, but kept allocating like crazy until the machine had no physical memory left. At that point the OS pages memory out to the hard disk, NGen continues to allocate until physical memory is exhausted again, and the cycle repeats. This continues until Ngen.exe finally hits the commit limit, which is the sum of physical memory and page file size, and gets an Out Of Memory error. Only then does the process terminate.
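
If you want to check the commit limit of a machine programmatically, a small sketch using the documented GlobalMemoryStatusEx API could look like the following (just an illustration, Task Manager shows the same numbers):

using System;
using System.Runtime.InteropServices;

static class CommitLimit
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORYSTATUSEX
    {
        public uint dwLength;
        public uint dwMemoryLoad;
        public ulong ullTotalPhys;          // physical memory
        public ulong ullAvailPhys;
        public ulong ullTotalPageFile;      // commit limit (roughly physical memory + page file)
        public ulong ullAvailPageFile;
        public ulong ullTotalVirtual;
        public ulong ullAvailVirtual;
        public ulong ullAvailExtendedVirtual;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GlobalMemoryStatusEx(ref MEMORYSTATUSEX lpBuffer);

    static void Main()
    {
        var status = new MEMORYSTATUSEX { dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX)) };
        if (GlobalMemoryStatusEx(ref status))
        {
            Console.WriteLine("Physical: {0} MB, Commit limit: {1} MB",
                status.ullTotalPhys / (1024 * 1024), status.ullTotalPageFile / (1024 * 1024));
        }
    }
}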

When NGen went crazy like this, MS support suggested deleting the registry key Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\NGenService\Roots and its descendants. That fixed the issue, but it remained mysterious why this was happening and what exactly had broken. When you delete the registry key NGen rebuilds its NGen root dll cache automatically. Inside the registry keys no obvious garbage data was visible, so the issue stayed mysterious. But finally I got my hands on a machine where the issue was still present, which allowed me to collect more evidence.

What Do We Know?

  • ngen install somedll.dll or ngen createpdb somedll.ni.dll causes NGen.exe to consume many GB of memory
  • NGen breaks due to corrupted registry keys
  • After deleting the registry key below \HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\NGenService\Roots NGen will build up the registry structure by itself which “fixes” the issue

What data should we get? Part of that decision is based on experience and the other part is more psychological. If you hand over a bug report to someone else you should anticipate that he/she is not familiar with your favorite debugging tool (e.g. Windbg). Filing a bug report with random findings is easy. Filing a bug report which enables the support personnel to get down to the real root cause is much harder. When I have a nicely reproducible bug which I can repeat as often as I want, I tend to gather all the data I can get. When file/registry issues are involved I would collect some or all of the things below.

Full Scale Data Capturing

  • Capture a procmon trace which will show all accessed registry keys and files
    • That is easy to do and provides a general understanding which registry keys are accessed
  • Dump the affected files/registry keys
    • E.g. export the registry hive \HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\NGenService from regedit so someone can take a look at the accessed data
    • That is much easier than crawling through a memory dump to reconstruct the accessed registry/file contents
  • Capture several memory dumps while the problem builds up
    • procdump is a great tool to take several dumps in a row once a specific condition has been reached
  • Capture ETW Traces to get the full picture
    • It potentially shows the evolution of the problem. But whether that data is useful depends highly on the skill set of the support engineer
  • Capture a Time Travel Trace
    • This gives much detail but it can be time consuming to analyze a multi GB trace. If you or the support engineer do not know exactly where to look you will only produce a multi GB random data file which is not used at all

Getting the right data which somebody else can work with is tricky, since you do not know which tools the other person is most comfortable with. Normally you will start with the easiest data capturing method, and based on the analysis of that data you decide what else is missing to get the full picture. That usually involves capturing more data with a different tool. But if you capture everything from a procmon trace up to a time travel trace the chances are good that you can reduce the time until the investigation leads somewhere from weeks down to minutes. Let's start with the easy data gathering approach first:

Getting A Procmon Trace

Procmon is a SysInternals Utility which can record all process starts along with all file and registry accesses. When you start it you can add a filter for the process of interest. In our case it is ngen.exe.

image

Since Procmon records all system events the memory needed by this tool can become quite large. To prevent that it is most of the time better to discard all uninteresting events from memory by checking File – Drop Filtered Events

image

That is important if you need to wait for hours for an incident to happen. You should always get the latest version of procmon since from time to time memory leaks or other issues are fixed which could hinder a successful data collection. The gathered data can then be saved as a PML file which can be loaded into the tool again on a different machine. When you save the data choose a file name which describes the problem. Naming is hard, but be precise about what the trace actually contains. In a few weeks even you will not know what that file was for.

image

Under the hood Procmon uses ETW to gather the data. What does that mean? For every file/registry access and process start/dll load you get a full call stack if you click on the event properties, which can already tell you as much as a memory dump:

image

In our NGen case we find that NGen deserializes the native image roots from a registry list. That is a very powerful capability, but you need to set the symbol server and the path to dbghelp.dll from a Windbg installation (usually x64) to get valid call stacks. There is one caveat: Procmon cannot decode stack traces of managed code, which limits this otherwise great tool mainly to unmanaged stack trace analysis.

Dump The Registry Keys

Who said that data collection is difficult? When we look at the NGen registry keys which were accessed we find a large list of all NGenned dlls “cached” in the registry. This is done for performance reasons. The .NET Framework has always had a soft spot for the registry: NGen uses it and so does the GAC. Reading that list from the registry is much faster than traversing over 4000 directories just for the 64 bit NGenned dlls.

image

Since the corruption is data dependent we can simply export the whole NGenService tree into a text file which can hopefully help to diagnose the data corruption.

image

The resulting text file was over 200 MB in size. It is unlikely that you will find the root cause by looking at a 200 MB text file line by line. We need more clues about where to look.

Can We Already Solve?

Pattern identification is a very important skill you need to develop if you want to analyze an issue. One of the most powerful analysis methods, if not the most powerful, is differential analysis. Usually you have a good case and a bad case which you can compare to see where the behavior starts to diverge. But it also works the other way around to find common patterns. Both the presence and the absence of a deviation can be a useful hint. It is a good idea to capture the data not only once but several times to be able to find stable patterns in the data.

By looking at the procmon registry trace we can first filter only for the querying of registry contents of the Roots node

image

There we find that the last accessed registry key is always the same one: a binary value named ImageList. But wait. The call stack of that event is not particularly enlightening, but it is a strong hint that either this was the last key it read and one of the previously read keys contained invalid data, or that this key itself is the corrupted one. Let's check the contents of the ImageList value:

image

Hm. Not sure if that is the problem. Let's get more data.

Capture Memory Dumps

There are many ways to capture memory dumps, but the most flexible tool besides DebugDiag is procdump. It is a simple command line tool which can trigger the creation of a memory dump in very sophisticated ways. In our case it is straightforward: we want to start NGen and then take 3 dumps with 1 s in between, because the memory leak grows very fast.

C:\Windows\assembly\NativeImages_v4.0.30319_64\System\0c9bec7e4e969db233900a4588c91656>procdump -s 1 -n 3 -ma -x c:\temp ngen.exe createpdb system.ni.dll c:\temp

ProcDump v9.0 – Sysinternals process dump utility
Copyright (C) 2009-2017 Mark Russinovich and Andrew Richards
Sysinternals – http://www.sysinternals.com

Process:               ngen.exe (14168)
CPU threshold:         n/a
Performance counter:   n/a
Commit threshold:      n/a
Threshold seconds:     1
Hung window check:     Disabled
Log debug strings:     Disabled
Exception monitor:     Disabled
Exception filter:      [Includes]
*
[Excludes]
Terminate monitor:     Disabled
Cloning type:          Disabled
Concurrent limit:      n/a
Avoid outage:          n/a
Number of dumps:       3
Dump folder:           c:\temp\
Dump filename/mask:    PROCESSNAME_YYMMDD_HHMMSS
Queue to WER:          Disabled
Kill after dump:       Disabled

Press Ctrl-C to end monitoring without terminating the process.

Microsoft (R) CLR Native Image Generator – Version 4.7.2556.0
Copyright (c) Microsoft Corporation.  All rights reserved.
[23:16:42] Timed:
[23:16:42] Dump 1 initiated: c:\temp\ngen.exe_171212_231642.dmp
[23:16:42] Dump 1 writing: Estimated dump file size is 1418 MB.
[23:16:44] Dump 1 complete: 1419 MB written in 2.4 seconds
[23:16:46] Timed:
[23:16:46] Dump 2 initiated: c:\temp\ngen.exe_171212_231646.dmp
[23:16:47] Dump 2 writing: Estimated dump file size is 4144 MB.
[23:17:42] Dump 2 complete: 4145 MB written in 55.8 seconds
[23:17:44] Timed:
[23:17:44] Dump 3 initiated: c:\temp\ngen.exe_171212_231744.dmp

The command line parts are -s 1 to wait one second between each dump, -n 3 to take three dumps before it exits, -ma to take a full memory dump, and -x, which expects the dump folder as its first argument; all remaining arguments are the executable and its command line arguments. If you look at the command line parameters you will find a lot more. The output is a little frightening at first, but there is a secret switch (procdump -? -e) which prints a lot of useful examples of how procdump is meant to be used. Actually this switch is not secret, but nearly no one reads the large command line help until the end, which is the reason I spell it out explicitly.

We can load the dump file into the new Windbg, which will automatically give us a nice call stack window where NGen was just allocating memory:

image

That information should be sufficient for any support engineer to drill down to the root cause. To make sense of the call stack you need local variables, which are not part of the public symbols of MS. For us outsiders that is as far as we can analyze the problem. Really? Let's have a look at the method names. NGen deserializes a root array of native image roots from the registry. While it is deserializing a specific root object it deserializes something with a method named BinaryDeSerializeLogicalImageList. That sounds a lot like the binary registry value ImageList from our registry dump. If only we knew which registry key it was deserializing at that moment. This involves a bit of poking in the dark, but I would expect that the stack between DeSerialize and BinaryDeSerializeLogicalImageList hopefully contains the registry key name somewhere.

With the k command we get the call stack and the current stack pointers

0:000> k
 # Child-SP          RetAddr           Call Site
00 000000c9`5f2fdcc0 00007ffe`327a8912 ntdll!RtlpLowFragHeapAllocFromContext+0x2a
01 000000c9`5f2fdda0 00007ffe`05eebde6 ntdll!RtlpAllocateHeapInternal+0xf2
02 000000c9`5f2fde60 00007ffe`05eec700 mscorsvc!operator new+0x30
03 000000c9`5f2fde90 00007ffe`05eed445 mscorsvc!ArrayOfPointers::CreateAndAppendNode+0x2c
04 000000c9`5f2fded0 00007ffe`05eed7f1 mscorsvc!Configuration::BinaryDeSerializeLogicalImageList+0xcd
05 000000c9`5f2fe060 00007ffe`05eeffb0 mscorsvc!Configuration::DeSerialize+0x206
06 000000c9`5f2fe300 00007ffe`05ee81b2 mscorsvc!Root::DeSerialize+0x379
07 000000c9`5f2fe630 00007ffe`05eecd98 mscorsvc!RootList::DeSerializeRoot+0x9c
08 000000c9`5f2fe690 00007ffe`05f0b69c mscorsvc!RootList::GetRootArray+0x1a6
09 000000c9`5f2fe960 00007ffe`05f0bb79 mscorsvc!CCorSvcMgr::GetLogicalImageForRootedNI+0xd4
0a 000000c9`5f2fec60 00007ff6`aef17dd7 mscorsvc!CCorSvcMgr::CreatePdb2+0x229
0b 000000c9`5f2ff1c0 00007ff6`aef11f32 ngen!NGenParser::ProcessNewCommandLineOptionsHelper+0x99d
0c 000000c9`5f2ff5d0 00007ff6`aef11d54 ngen!IsNewCommandLine+0x196
0d 000000c9`5f2ff730 00007ff6`aef1276a ngen!trymain+0x19c
0e 000000c9`5f2ffd90 00007ff6`aef126f8 ngen!wmain+0x4e
0f 000000c9`5f2ffe20 00007ffe`30221fe4 ngen!BaseHolder,&Delete,2>,0,&CompareDefault,2>::~BaseHolder,&Delete,2>,0,&CompareDefault,2>+0x2a6
10 000000c9`5f2ffe50 00007ffe`327eef91 kernel32!BaseThreadInitThunk+0x14
11 000000c9`5f2ffe80 00000000`00000000 ntdll!RtlUserThreadStart+0x21

The brute force method is to dump the stack from start to end with

0:000> db c9`5f2fde90  c9`5f2ff1c0

000000c9`5f2fe710  18 e7 2f 5f c9 00 00 00-43 00 3a 00 2f 00 41 00  ../_....C.:./.A.
000000c9`5f2fe720  6e 00 79 00 4e 00 61 00-6d 00 65 00 57 00 69 00  n.y.N.a.m.e.W.i.
000000c9`5f2fe730  6c 00 6c 00 44 00 6f 00-2e 00 64 00 6c 00 6c 00  l.l.D.o...d.l.l.
000000c9`5f2fe740  00 00 74 00 75 00 62 00-73 00 2e 00 49 00 6e 00  ..t.u.b.s...I.n.
000000c9`5f2fe750  74 00 65 00 72 00 6f 00-70 00 2c 00 20 00 56 00  t.e.r.o.p.,. .V.
000000c9`5f2fe760  65 00 72 00 73 00 69 00-6f 00 6e 00 3d 00 31 00  e.r.s.i.o.n.=.1.
000000c9`5f2fe770  30 00 2e 00 30 00 2e 00-30 00 2e 00 30 00 2c 00  0...0...0...0.,.
000000c9`5f2fe780  20 00 43 00 75 00 6c 00-74 00 75 00 72 00 65 00   .C.u.l.t.u.r.e.
000000c9`5f2fe790  3d 00 4e 00 65 00 75 00-74 00 72 00 61 00 6c 00  =.N.e.u.t.r.a.l.
000000c9`5f2fe7a0  2c 00 20 00 50 00 75 00-62 00 6c 00 69 00 63 00  ,. .P.u.b.l.i.c.
000000c9`5f2fe7b0  4b 00 65 00 79 00 54 00-6f 00 6b 00 65 00 6e 00  K.e.y.T.o.k.e.n.
000000c9`5f2fe7c0  3d 00 33 00 31 00 62 00-66 00 33 00 38 00 35 00  =.3.1.b.f.3.8.5.
000000c9`5f2fe7d0  36 00 61 00 64 00 33 00-36 00 34 00 65 00 33 00  6.a.d.3.6.4.e.3.
000000c9`5f2fe7e0  35 00 2c 00 20 00 70 00-72 00 6f 00 63 00 65 00  5.,. .p.r.o.c.e.
000000c9`5f2fe7f0  73 00 73 00 6f 00 72 00-41 00 72 00 63 00 68 00  s.s.o.r.A.r.c.h.
000000c9`5f2fe800  69 00 74 00 65 00 63 00-74 00 75 00 72 00 65 00  i.t.e.c.t.u.r.e.
000000c9`5f2fe810  3d 00 61 00 6d 00 64 00-36 00 34 00 00 00 00 00  =.a.m.d.6.4.....

where we find the registry key which is currently being worked on:

0:000> du 000000c9`5f2fe718
000000c9`5f2fe718  "C:/AnyNameWillDo.dll"

It looks like the ImageList of this dll is corrupted, which caused NGen to go into an infinite loop. A deeper look at the surrounding registry keys from the registry export revealed that another value of the previous dll was also corrupted. This is really strange, and I have no idea how NGen could manage to corrupt two unrelated registry values, RuntimeVersion (string) and ImageList (binary).

Capture ETW Traces

Based on our previous investigations we should get data about memory allocation, CPU consumption and accessed registry keys, which should give us a good understanding of how the problem evolves over time. To capture ETW data you normally need to download and install the Windows Performance Toolkit which is part of the Windows SDK. But since Windows 10 the command line only tool wpr.exe is part of Windows itself. That can be important if you are working on a machine which is locked down with e.g. Device Guard, where you cannot install new software easily and cannot execute binaries which are not Authenticode signed, which rules out many home grown data collection tools. Normally I use ETWController (http://etwcontroler.codeplex.com/) which lets me capture mouse and keyboard interactions along with screenshots, which has proven invaluable many times. But on a locked down machine one needs to use the tools which are already there.

C:\WINDOWS\system32>wpr -start CPU -start Registry -start VirtualAllocation -start GeneralProfile

… Ngen …. 

C:\WINDOWS\system32>wpr -stop c:\temp\NgenGoneCrazy.etl

After loading the ETL file into WPA and working out the important metrics like CPU, allocation and registry accesses we get this picture:

image

We find that practically all CPU is spent allocating memory while the method BinaryDeSerializeLogicalImageList was executing. The VirtualAlloc graph shows a frightening allocation rate of 1.4 GB/s, which is the most massive memory leak I have seen in a long time. The last graph shows that the huge allocation rate starts once the ImageList of the dll C:/AnyNameWillDo.dll was read. After that no more registry keys were read, which is a strong indicator that this registry value is the one knocking NGen out.

After realizing that, it was easy to come up with a minimal registry file which will bring NGen down:

NgenCorrupt.reg

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\NGenService\Roots\C:/AnyNameWillDo.dll]
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\NGenService\Roots\C:/AnyNameWillDo.dll\0]
"ImageList"=hex:00,69,00,6f

When you import that reg file and execute

ngen.exe createpdb system.ni.dll c:\temp

in the directory where the native image of System.ni.dll is located, NGen will explode. But beware: at least on Windows 10 my machine froze and never recovered once all physical memory was allocated. It seems that older Windows editions (e.g. Server 2008 R2) deal with such rapidly allocating applications more gracefully and at least let you terminate the application once it has grabbed all physical memory.

Conclusions

After having drilled down that far it is pretty clear where NGen breaks, although it remains a mystery how the registry keys were corrupted. The case is currently being investigated at Microsoft, which will hopefully result in a more robust NGen that ignores and deletes bogus registry entries, causing the missing NGen root entries to be recreated some time later. The data sent to MS consists of memory dumps, an ETL trace, a procmon trace and the reg file to corrupt the registry on a test machine.

image

Troubleshooting is not magic, although many people assume magic is happening here. With some experience it is pretty straightforward to capture the relevant data. Analyzing the captured data is indeed a complex undertaking and requires a lot of experience. The goal of this blog post is to help other people dealing with failures to understand what data is needed and why. Capturing the data is much easier and faster than analyzing it. If you capture the right data you will make the work of the people trying to help you a lot easier.

I always tell people that it makes no sense to assign one member of a team as troubleshooter and send him to a Windbg/ETW training. First of all, if someone is assigned to a task he dislikes he will never be good at it. Second, it takes a lot of practice and experience to be able to drill down this deep. If you try to analyze such an issue only once every few months you will not have the necessary skills to use the tools correctly. If you want to bring a member of your team to a level where he/she can troubleshoot hard issues, he/she must be willing to look at a memory dump nearly every day. If no one raises his hand for this task you can spare the money for the Windbg/ETW training. But it makes sense to bring all team members to a level where everyone understands what data is needed, so that a few specialized people can look into the issues efficiently, provided with enough of the right data to successfully nail the root cause. As a positive side effect more people will get used to these tools and some will want to drill deeper. Those are the ones you should send to a Windbg/ETW training.

That’s all for today. Remember: Great tools are useless. Tools become great when people use them.