Why Is Batman Arkham Knight So Slow?

The latest Batman PC game earned a bad reputation right after its first release for being buggy and very slow. Due to some bad outsourcing and project management decisions the PC version did not meet the required quality gates. This quality and performance disaster forced Warner Bros to suspend PC sales for three months until a much improved version was released that met customer expectations with regard to performance. The upside was that the unpatched PC version was sold at 50% off, so I could get a very good game for a very low price. I bought the unpatched version, threw away the unpatched DVDs, and downloaded the “real” v1.0 from Steam with the accompanying product code.

Aware of the game’s history of bad performance, I kept an eye on it, but I could not find any rough spots anymore. The only thing that bugged me was that when I switched away from the game to do something else and switched back after ca. 30 minutes, the game graphics would hang for several seconds. That sounded like a good reason to check performance with my favorite performance recording tool, ETWController. Since it also records screenshots, I could easily track down in the profiling data what was happening while the game was hanging.

image

Since I also get an HTML report, I can see exactly where in the timeline I resumed the game, although I do not get screenshots of the in-game graphics.

image

After stopping the recording I can directly open the ETL file and navigate to the point of slowness in the game by selecting the time region after Screenshot 5.

image

When a game hangs, the culprit is usually CPU, disk, memory or the network. So let’s check what is going on.

Looking at CPU consumption, we see that the game has several seconds of near-zero CPU activity between 10-13s, which correlates with the observed hang.

image

While checking disk IO during the low CPU activity phase I find a lot of page file related IO. When the game resumes it needs to access a lot of memory that was previously written out to the page file, which results in a growth of the game’s working set.

image

In the graph above I see that I have two page files (on drives C and D), where I wait 3.6s for 55 MB from drive D and 500ms for 103 MB from drive C. That makes sense since my C drive is an SSD

image

whereas my D drive is a regular hard drive

image

That explains why reading from the page file on the D drive is 7 times slower, although only about half of the data is read from it! Random read access is the worst-case scenario for a regular hard disk, whereas my SSD copes with that read pattern easily.

I do not even remember enabling a page file on D, but I think I did it some years ago, when SSDs still had write durability issues, just to be on the safe side. As it turns out it was a bad decision, and as an easy fix I disabled the page file on drive D. The sudden hangs when switching back to the game were gone after that change.
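If you are not sure which of your drives currently host a page file, you can check that quickly e.g. via WMI. Below is only a minimal sketch (class name made up; it assumes a reference to System.Management.dll, and the same information is also visible in the Virtual Memory dialog of the system settings):

using System;
using System.Management;   // add a reference to System.Management.dll

class PageFileCheck
{
    static void Main()
    {
        // Win32_PageFileUsage lists the page files that are currently in use, one per drive
        var searcher = new ManagementObjectSearcher("SELECT * FROM Win32_PageFileUsage");
        foreach (ManagementObject pageFile in searcher.Get())
        {
            Console.WriteLine("{0}: Allocated {1} MB, Current {2} MB, Peak {3} MB",
                pageFile["Name"], pageFile["AllocatedBaseSize"],
                pageFile["CurrentUsage"], pageFile["PeakUsage"]);
        }
    }
}

On my machine this would have printed both C:\pagefile.sys and D:\pagefile.sys, which is exactly the configuration that caused the hangs.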

Whose Fault Is It?

That was the easy part. But you can go much deeper to see where the latencies are really coming from. Hard faults which involve disk IO are served in the kernel by the method ntoskrnl.exe!MiIssueHardFault, which waits for the pending disk IO of the page file. If we look at which processes waited for page fault IOs and visualize the total CPU and wait times, we get this timeline:

image

  1. We can see that the hard faults nicely correlate with the disk activity in the graph below.
  2. We find that the Arkham Knight game itself waits only 90ms for hard page faults (see the Waits(us)_Sum column).
  3. But the System process waits over 10s (summed across all threads) for page-in activity.

If we look at who is causing this page-in activity, we find that DirectX in the Windows kernel causes quite a bit of IO due to page faults.

ntoskrnl.exe!MiIssueHardFault

ntoskrnl.exe!MmAccessFault

|- ntoskrnl.exe!KiPageFault

|- ntoskrnl.exe!MiProbeLeafFrame

|    ntoskrnl.exe!MmProbeAndLockPages

|    |- dxgmms2.sys!VIDMM_MDL_RANGE::Lock

|    |    dxgmms2.sys!VIDMM_RECYCLE_HEAP_PHYSICAL_VIEW::LockRange

|    |    dxgmms2.sys!VIDMM_RECYCLE_RANGE::Lock

|    |    dxgmms2.sys!VIDMM_RECYCLE_MULTIRANGE::Lock

|    |    dxgmms2.sys!VIDMM_RECYCLE_HEAP_MGR::ProbeAndLockAllocation

|    |    dxgmms2.sys!VIDMM_GLOBAL::ProbeAndLockAllocation

|    |    |- dxgmms2.sys!VIDMM_SYSMEM_SEGMENT::LockAllocationRange

|    |    |    |- dxgmms2.sys!VIDMM_MEMORY_SEGMENT::CommitResource

|    |    |    |    dxgmms2.sys!VIDMM_GLOBAL::PageInOneAllocation

|    |    |    |    dxgmms2.sys!VIDMM_GLOBAL::ProcessDeferredCommand

|    |    |    |    dxgmms2.sys!VIDMM_WORKER_THREAD::Run

|    |    |    |    ntoskrnl.exe!PspSystemThreadStartup

|    |    |    |    ntoskrnl.exe!KiStartSystemThread

It is not even the game directly: DirectX needs to page in its buffers from previously paged-out memory, which takes several seconds from my slow spinning hard disk. It is possible to track this further down into the wait calls of the game, but this should be proof enough to blame disk IO of DirectX buffers for the observed sluggish gaming experience.

Conclusions

It was not Batman’s fault that he was slow, but my misconfigured secondary page file on a spinning hard disk. Removing the page file from the spinning disk solved my performance problem.

Another observation is that Windows seems to write data out to the first responding device that hosts a page file. That is nice for getting as much data as possible into the page file in an efficient manner, but the real test comes when the data is read back. I think the current page-out algorithm in Windows is inefficient when it can choose between several devices to write its page data to. It should balance the written data with respect to the random access service time of each device. In effect, SSDs with their very fast response times should be favored over slow hard disks. It would still be possible to have several page files on devices with vastly different response times, but Windows should prefer to write most of the data onto the device with the fastest random access time to keep hard page fault pause times as short as possible.

This once again shows that assumptions about bad performance based on old evidence (“the game is slow …”) lead to wrong conclusions about current performance issues.

Performance problems are much like quantum mechanics: The problem does not exist until you have measured it. Some people come up with smart tests to verify this or that assumption. But it is usually easier to use a profiling tool to get down to the root cause with a more structured approach.

When Known .NET Bugs Bite You

The most interesting bugs do not occur during regression testing but when real people use the software. Things become nasty if a problem shows up only after several hours of usage, when sometimes the whole application freezes for minutes. If you have a bigger machine with e.g. 40 cores, most of them busy, and dozens of processes, things can become challenging to track down when something goes wrong. Attaching a debugger does not help, since it is not clear which process is causing the problem, and even if you could, you would cause issues for users already working on that machine. Using plain ETW tracing is not so easy either, because the amount of generated data can exceed 10 GB/min. The default Windows Performance Recorder profiles are not an option because they simply generate too much data. If you have ever tried to open a 6+ GB ETL file in WPA, it can take 30 minutes until it opens. If you drag some columns around you have to wait several minutes until the UI settles again.

Getting ETW Data The Right Way

Taking data with xperf offers more fine-grained control at the command line, but it also has its challenges. If you have e.g. a 2 GB user and a 2 GB kernel session which trace into a circular in-memory buffer, you can dump the contents with

xperf -flush "NT Kernel Logger" -f kernel.etl 

xperf -flush "UserSession" -f user.etl

xperf -merge user.etl kernel.etl merged.etl 

The result looks strange:

image

For some reason flushing the kernel session takes a very long time (up to 15 minutes) on a busy server. In effect you will have e.g. the first 20s of kernel data, a 6 minute gap, and then the user mode session data, which is from long after the problem occurred. To work around that you can either flush the user session first or use a single xperf command to flush both sessions at the same time:

xperf -flush "UserSession" -f user.etl -flush "NT Kernel Logger" -f kernel.etl

If you want to profile a machine where you do not know where the issues are coming from, you should start with some “long-term” tracing. If you skip context switch events and basically fall back to sampling profiler events, you can already get quite far. The default sampling rate is 1 kHz (a thousand samples/second), which is good for many cases but too much for long sessions. When you expect a lot of CPU activity you can try to take a sample only every 10ms, which will get you much farther. To limit the amount of traced data you have to be very careful about which events you collect call stacks for. Usually the stack traces are about 10 times larger than the actual event data. If you want to create a dedicated trace configuration to spot a very specific issue, you need to know which ETW providers cost you how much, especially in terms of ETL file size. WPA has a very nice overview under System Configuration – Trace Statistics which shows you exactly that:

image

You can also paste the data into Excel to get a more detailed understanding of what is costing you how much.

image

In essence, you should enable stack walking only for the events you really care about if you want to keep the profiling file size small. Stack walks cost a lot of storage (up to 10 times more than the actual event payload). A great thing is that with tracelog you can configure ETW to collect stacks only for specific events of an ETW provider. This works since Windows 8.1 and is sometimes necessary.

To enable stack walking for e.g. only the CLR Exception event, you can use this tracelog call for your user mode session named DotNetSession. The number e13c0d23-ccbc-4e12-931b-d9cc2eee27e4 is the GUID of the provider .NET Common Language Runtime, because tracelog does not accept pretty ETW provider names. Only GUIDs prefixed with a # make it happy.

tracelog -enableex DotNetSession -guid #e13c0d23-ccbc-4e12-931b-d9cc2eee27e4 -flags 0x8094 -stackwalkFilter -in 1 80 

If you want to disable the highest volume events of Microsoft-Windows-Dwm-Core, you can configure a list of events which should not be traced at all:

tracelog -enableex DWMSession -guid #9e9bba3c-2e38-40cb-99f4-9e8281425164 -flags 0x1 -level 4 -EventIdFilter -out 23 2 9 6 244 8 204 7 14 13 110 19 137 21 111 57 58 66 67 68 112 113 192 193

That line keeps the DWM events which are relevant to the WPA DWM Frame Details graph. Tracelog is part of the Windows SDK, but originally it was only part of the Driver SDK. That explains at least partially why it is the only tool that allows you to declare ETW event and stack walk filters. If you filter something in or out, the first number you pass is the count of event ids that follow.

tracelog … -stackwalkFilter -in 1 80

means: enable stack walks for one event with event id 80, and

tracelog … -EventIdFilter -out 23 2 9 …

will filter out 23 events from all enabled events; the event ids directly follow the count. You can also execute such tracelog statements on operating systems older than Windows 8.1; in that case the event id filter clauses are simply ignored. In my opinion xperf should support this out of the box, and I should not have to use another tool to fully configure lightweight ETW trace sessions.

Analyzing ETW Data

After fiddling with the ETW providers and setting up filters for the most common ones, I was able to get a snapshot at the time the server was slow. The UI was not reacting for minutes, and I found this interesting call stack consuming one core for nearly three minutes!

image

When I drilled deeper I saw that Gen1 collections were consuming enormous amounts of CPU in the method WKS::allocate_in_older_generation.

image

When you have enabled the .NET ETW provider with the keyword 0x1, which tracks GC events, and you use my GC Activity Region file, you can easily see that some Gen1 collections take up to 7s. If such a condition occurs in several different processes at the same time, it is no wonder that the server looks largely unresponsive although all other metrics like CPU, memory, disk IO and the network are ok.

Gen1GCTimes
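If you prefer to post-process the ETL file programmatically instead of using WPA regions, something like the following sketch with the TraceEvent library can print the Gen1 pause times per process. This is an assumption on my side that the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package is available and that its CLR parser exposes the GC start/stop events as shown, so treat it as a starting point rather than a finished tool:

using System;
using System.Collections.Generic;
using Microsoft.Diagnostics.Tracing;   // TraceEvent NuGet package

class Gen1PauseDump
{
    static void Main(string[] args)
    {
        // args[0]: ETL file recorded with the .NET runtime provider (GC keyword 0x1)
        var gen1Start = new Dictionary<int, double>();
        using (var source = new ETWTraceEventSource(args[0]))
        {
            source.Clr.GCStart += data =>
            {
                if (data.Depth == 1)                  // Gen1 collections only
                    gen1Start[data.ProcessID] = data.TimeStampRelativeMSec;
            };
            source.Clr.GCStop += data =>
            {
                double start;
                if (data.Depth == 1 && gen1Start.TryGetValue(data.ProcessID, out start))
                    Console.WriteLine("Process {0}: Gen1 GC #{1} took {2:F1} ms",
                        data.ProcessID, data.Count, data.TimeStampRelativeMSec - start);
            };
            source.Process();                         // replay all events of the file
        }
    }
}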

When I find an interesting call stack it is time to see what the internet can tell me about allocate_in_older_generation. This directly points me to Stack Overflow, where someone else already had this issue: http://stackoverflow.com/questions/32048094/net-4-6-and-gc-freeze, which brings me to https://blogs.msdn.microsoft.com/maoni/2015/08/12/gen2-free-list-changes-in-clr-4-6-gc/, which mentions a .NET 4.6 GC bug. There the .NET GC developer nicely explains what is going on:

… The symptom for this bug is that you are seeing GC taking a lot longer than 4.5 – if you collect ETW events this will show that gen1 GCs are taking a lot longer. …

Based on the data above, and given that the server was running .NET 4.6.96.0 (a .NET 4.6 release), this explains why an unlucky allocation pattern in processes that had already allocated more than 2 GB of memory could sometimes make the Gen1 GC pause times jump from several milliseconds up to seconds. Getting 1000 times slower at random times certainly qualifies as “a lot longer”. It would even be justified to use more drastic words for how much longer this sometimes takes. But ok, the bug is known and fixed. If you install .NET 4.6.1 or the upcoming .NET 4.6.2 you will never see this issue again. It was quite tough to get the right data at the right time, but I am getting better at these things by looking at the complete system at once. I am not sure if you could really have found this issue with normal tools when it manifests in random processes at random times. If you only concentrate on one part of the system at a time, you will most likely fail to identify such systemic issues. And if you had updated your server after some months of regular maintenance work, the issue would have vanished, leaving you with no real explanation why the long-standing issue suddenly disappeared. Or you would at least have identified a .NET Framework update in retrospect as the fix you were searching for so long.
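Since the fix is tied to the installed .NET Framework version, it helps to be able to check quickly which runtime a machine actually has. A minimal sketch that reads the documented Release value from the registry (class name made up; the threshold values below are the ones Microsoft documents for 4.6/4.6.1, so double check them for your scenario):

using System;
using Microsoft.Win32;

class NetVersionCheck
{
    static void Main()
    {
        // The Release DWORD encodes the installed .NET Framework 4.5+ version
        const string subKey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (RegistryKey baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32))
        using (RegistryKey ndpKey = baseKey.OpenSubKey(subKey))
        {
            object value = ndpKey == null ? null : ndpKey.GetValue("Release");
            int release = value == null ? 0 : (int)value;
            // 393295/393297 = 4.6, 394254/394271 = 4.6.1 (documented release values)
            bool hasGen1Fix = release >= 394254;
            Console.WriteLine("Release: {0}, contains the Gen1 GC fix (>= 4.6.1): {1}",
                release, hasGen1Fix);
        }
    }
}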

The Non Contracting Code Contracts

What do you do when you get an impossible exception of the form

System.InvalidCastException: Object of type 'MyApp.Type[]' cannot be converted 
                to type 'System.Collections.Generic.IEnumerable`1[MyApp.Type]'.
System.RuntimeType.TryChangeType(…)
System.RuntimeType.CheckValue(…)
System.Reflection.MethodBase.CheckArguments(…)
System.Reflection.RuntimeMethodInfo.InvokeArgumentsCheck(…)
...

where basically an array of type X cannot be cast to IEnumerable<X>. Any LINQ-affine programmer does that 20 times a day. What the heck is going on here? It looks like the CLR type system got corrupted by some bad guy. It could be anything from spurious memory corruption due to cosmic rays to some code on this or another thread corrupting memory somehow. Random bit flips are not so uncommon if you know that cosmic muons hit the earth’s surface at a rate of ca. 60 000 muons/(h*m^2). It is only a matter of time until some bit in your memory randomly gets another value.
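For reference, here is a minimal sketch (type and method names are made up) of the conversion that should never fail; both the direct assignment and the reflection invoke path from the stack trace above succeed on a healthy runtime:

using System;
using System.Collections.Generic;
using System.Reflection;

class MyType { }

class CastDemo
{
    // A method with an IEnumerable<MyType> parameter, like the one behind the failing Invoke call
    public static void Consume(IEnumerable<MyType> items) { }

    static void Main()
    {
        MyType[] array = new MyType[3];

        // Arrays implement IEnumerable<T> of their element type, so this always compiles and runs
        IEnumerable<MyType> direct = array;

        // The same conversion is checked inside MethodBase.CheckArguments during Invoke
        MethodInfo consume = typeof(CastDemo).GetMethod("Consume");
        consume.Invoke(null, new object[] { array });   // normally no InvalidCastException here
    }
}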

This issue surfaced on some machines quite reproducibly, so we can put the exotic theories about possible sources of memory corruption back on the bookshelf. First of all I needed to get a memory dump of the process in question. This is much easier if you know how to “attach” procdump to a process and let it create a dump on any first chance exception with a specific type or message. That comes in handy if you want to create a dump not when a process crashes, but when an internally caught exception occurs and you want to inspect the application state at that point in time.

The Sysinternals tool procdump can do that from the command line without any extra installation. The -e 1 option instructs it to dump on first chance exceptions, and -f does a substring filter on the exception message shown in the console by procdump.

C:\>procdump  -ma -e 1 -f "InvalidCast" Contractor.exe

ProcDump v7.1 - Writes process dump files
Copyright (C) 2009-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
With contributions from Andrew Richards

Process:               Contractor.exe (12656)
CPU threshold:         n/a
Performance counter:   n/a
Commit threshold:      n/a
Threshold seconds:     10
Hung window check:     Disabled
Log debug strings:     Disabled
Exception monitor:     First Chance+Unhandled
Exception filter:      *InvalidCast*
Terminate monitor:     Disabled
Cloning type:          Disabled
Concurrent limit:      n/a
Avoid outage:          n/a
Number of dumps:       1
Dump folder:           C:\
Dump filename/mask:    PROCESSNAME_YYMMDD_HHMMSS


Press Ctrl-C to end monitoring without terminating the process.

CLR Version: v4.0.30319

[19:48:12] Exception: E0434F4D.System.InvalidCastException ("Object of type 'MyApp.Type[]' cannot be converted to type 'System.Collections.Generic.IEnumerable`1[MyApp.Type]'.")
[19:48:12] Dump 1 initiated: C:\Contractor.exe_160718_194812.dmp
[19:48:12] Dump 1 writing: Estimated dump file size is 50 MB.
[19:48:12] Dump 1 complete: 50 MB written in 0.1 seconds
[19:48:13] The process has exited.
[19:48:13] Dump count reached.

If you have a more complex process creation setup, e.g. you create many instances of the same exe, then you cannot use the simple executable name filter. In that case procdump accepts only the process id of an already running process, which you need to determine from somewhere else. DebugDiag would be my tool of choice then: it allows you to configure rules for any process executable, regardless of whether it is already running or will be started later. There you can also configure a dump trigger for a specific exception with a specific message.
To do this, start the DebugDiag 2 Collection tool and select Crash.
image
 
After selecting a process you can configure e.g. a full dump for each InvalidCastException
 
image
 
When you have activated the Crash rule, you get a nice report of how many dumps have already been collected since the rule became active.
image
 
Then you will find the dump files, along with the DebugDiag logs, in the configured directory.
image

Once I had the dumps I looked into them, but I could not make much sense of the InvalidCastException. Without the exact CLR source code it is very hard to make sense of the highly optimized data structures in the CLR. At that point I reached out to Microsoft to look into the issue. Since the dump did not provide enough clues either, I packed a virtual machine together and transferred it directly to MS support via their upload tool. The support engineer was not able to get the VM running, which seems to happen quite frequently if you upload 50+ GB files over the internet. The solution was to send me a USB hard drive by mail, upload the VM onto the disk and send it back to MS. That worked out, and I heard nothing from MS support for some time.

It seems that my issue was routed through various hands until it reached one of the CoreCLR devs, who have certainly debugged many similar issues. When I was finally suspecting I would never hear about the issue again, I got a mail back saying that they had found it.

The problem is that the CLR type system gets confused by assembly references which have invalid Flags set. The assembly reference flags in the IL metadata must be zero, but in my case they were set to 0x70, which is normally a C# compiler internal tracking value to mark reference assemblies. In the final code emitting phase the C# compiler zeroes out the assembly reference flags and all is fine. But for some reason one of my assemblies still had 0x70 set for all of its assembly references.

If you wonder what reference assemblies are: these are the assemblies the .NET Framework consists of, in their various versions. If you compile against a specific .NET Framework version like 4.6.2, you “link” against the assemblies located in these directories to ensure that you can only call APIs of the currently configured .NET target framework.

image

I was told that an issue was reported with the .NET 4.5.2 C# compiler which could trigger that behavior. Whether the newer Roslyn-based C# 6 compiler still suffers from this issue is not known to me. That sounded like an arcane and spurious compiler bug. For some reason MS asked me a few days later whether we modify the generated assemblies. It turns out some of our build targets use Code Contracts to enforce runtime checks, which requires assembly rewriting. After querying all assembly references I found that ALL Code Contracts rewritten assemblies have invalid assembly reference flags with 0x70 set. This is still true with the newest Code Contracts version, which is currently 1.9.10714.2.

If you try to check the assembly references via reflection, you will find that it always tells you that the Flags of the referenced assemblies are zero. This is even true if you load them with Assembly.ReflectionOnlyLoadFrom. Even Ildasm will not show you the Flags of the assembly references. Windbg? Nope.

I had to fall back to my old pet project ApiChange and patch the -ShowReferences command to print, besides the referenced assembly, also the Flags field of the assembly reference.


C:\>apichange -sr D:\bin\SubContractor.dll
SubContractor.dll ->
 System, Version=4.0.0.0,   Culture=neutral,
                            PublicKeyToken=b77a5c561934e089 0x70
 mscorlib, Version=4.0.0.0, Culture=neutral, 
                            PublicKeyToken=b77a5c561934e089 0x70

There I could finally see that my assembly had invalid assembly references. How can I fix it? That is easy. Turn off Perform Runtime Contract Checking!

image

Now you get correct assembly references as the ECMA spec requires them.


C:\>apichange -sr D:\bin\SubContractor.dll
SubContractor.dll ->
 mscorlib, Version=4.0.0.0, Culture=neutral, 
            PublicKeyToken=b77a5c561934e089 0x0
 System, Version=4.0.0.0,   Culture=neutral, 
            PublicKeyToken=b77a5c561934e089 0x0
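If you do not want to patch a tool yourself, the raw AssemblyRef table can also be read with the System.Reflection.Metadata NuGet package, which bypasses the runtime and therefore shows the real Flags values. A small sketch (class name made up, assuming the package is available in your environment):

using System;
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

class DumpAssemblyRefFlags
{
    static void Main(string[] args)
    {
        // args[0] is the path to the assembly to inspect, e.g. SubContractor.dll
        using (FileStream stream = File.OpenRead(args[0]))
        using (var peReader = new PEReader(stream))
        {
            MetadataReader reader = peReader.GetMetadataReader();
            foreach (AssemblyReferenceHandle handle in reader.AssemblyReferences)
            {
                AssemblyReference reference = reader.GetAssemblyReference(handle);
                // Flags must be 0 for a valid reference; 0x70 marks the broken rewrite
                Console.WriteLine("{0}, Version={1} Flags=0x{2:x}",
                    reader.GetString(reference.Name), reference.Version, (int)reference.Flags);
            }
        }
    }
}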

The source of the spurious InvalidCastExceptions is found, but why does it happen so infrequently? I tried to create a simple reproducer, but I was not able to force the cast error. It seems to depend on the machine, memory layout, number of loaded types, IL layout, moon phase, … to trigger it. If something goes wrong, it seems that in an assembly with correct assembly references a cast is attempted from

mscorlib         ;  IEnumerable<T>  to 
mscorlib,Ref=0x70;  IEnumerable<T>  

where the type identity of generic types and their arguments gets mixed up, and the runtime then correctly tells me that seemingly identical types are not identical, because they differ in their full type expansion by their assembly reference.

I was never a big fan of Code Contracts because of the way they are implemented. Rewriting existing assemblies can and will cause unforeseen issues in production code. If I can, I want to keep only my C# and JIT compiler bugs and not add Code Contracts bugs on top of them. Do not get me wrong: I really like the idea of contract checking. But such checks should be visible in the source code, or at least be compiled directly by a Code Contracts plugin for the now much more open Roslyn compiler.

As it is now, Code Contracts considerably increases compilation time, and if you enable static code checks you can easily get builds which take several hours, which is way too hefty a price for some additional checks.

If you now want to remove Code Contracts, you need to remove all Contract.Requires<Exn>(…) checks from your source code, because these assert in non-rewritten Release builds as well, which is certainly not what you want. As it stands now, Code Contracts violates the contract of generating valid assembly references. The assembly rewriter of Code Contracts generates assemblies with invalid metadata which can, under specific circumstances, trigger impossible runtime errors in the CLR.
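A minimal sketch of such a migration (class, method and parameter names are made up): the generic Contract.Requires<TException> overload only works when the assembly rewriter runs, so it has to be replaced by a plain argument check.

using System;
using System.Collections.Generic;
using System.Diagnostics.Contracts;

class OrderProcessor
{
    // Before: requires the Code Contracts assembly rewriter, otherwise the check asserts at runtime
    public void ProcessWithContracts(List<string> orders)
    {
        Contract.Requires<ArgumentNullException>(orders != null);
        // ... process orders
    }

    // After: a plain argument check that works without any assembly rewriting
    public void Process(List<string> orders)
    {
        if (orders == null)
        {
            throw new ArgumentNullException("orders");
        }
        // ... process orders
    }
}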

New Beta of Windows Performance Toolkit

With the release of the first Windows Anniversary SDK beta, a new version of the Windows Performance Toolkit was shipped as well. If you want to look for the changes, here are the most significant ones. I am not sure if the version number of the beta will change, but it seems to target 10.0.14366.1000.

image

Stack Tags for Context Switch Events

Now you can assign tags to wait call stacks, which is a big help if you want to tag the reasons why your application is waiting for something. Here is a sample snippet of common sources of waits:

<Tag Name="Waits">
    <Tag Name="Thread Sleep">
         <Entrypoint Module="ntdll.dll" Method="NtDelayExecution*"/>
    </Tag>
    <Tag Name="IO Completion Port Wait">
         <Entrypoint Module="ntoskrnl.exe" Method="IoRemoveIoCompletion*"/>
    </Tag>
    <Tag Name="CLR Wait" Priority="-1">
         <Entrypoint Module="clr.dll" Method="Thread::DoAppropriateWait*"/>
    </Tag>
    <Tag Name=".NET Thread Join">
         <Entrypoint Module="clr.dll" Method="Thread::JoinEx*"/>
    </Tag>
    <Tag Name="Socket">
         <Tag Name="Socket Receive Wait">
               <Entrypoint Module="mswsock.dll" Method="WSPRecv*"/>
         </Tag>
         <Tag Name="Socket Send Wait">
               <Entrypoint Module="mswsock.dll" Method="WSPSend*"/>
         </Tag>
         <Tag Name="Socket Select Wait">
               <Entrypoint Module="ws2_32.dll" Method="select*"/>
         </Tag>
    </Tag>
    <Tag Name="Garbage Collector Wait">
         <Entrypoint Module="clr.dll" Method="WKS::GCHeap::WaitUntilGCComplete*"/>
    </Tag>
</Tag>

That feature is actually already part of the Windows 10 Update 1 WPA (10.0.10586.15, th2_release.151119-1817), but I have only come across it now. It makes hang analysis a whole lot easier. A really new feature, though, are flame graphs, which Bruce Dawson has wanted built into WPA for a long time. I also wrote a small tool to generate them some time ago (Visualize Your Callstacks Via Flame Graphs), but it never got much traction.

image

If you want to see e.g. why your threads are blocking, the context switch view now gives you a nice overview of the blocking reasons and which thread unblocked your threads most often. You can stack two wait views on top of each other so you can drill down to the relevant time where something is hanging, and you get to conclusions much faster now.

image

Another nice feature is that if you select a region in the graph while holding down the Ctrl key, you can select and highlight multiple regions at a time, which is useful if you frequently need to zoom into different regions and move between them. The current flame graphs cannot really flame deep call stacks, because the resulting flames become so small that you have no chance to select them. Zooming with Ctrl and the mouse wheel only zooms into the time region, which for a flame graph is perhaps not what I want. I would like a zoom where I can make parts of the graph readable again.

Symbol loading has become significantly faster, and it also seems to crash less often, which is quite surprising for a beta.

My Presets Window for managing presets

Another new feature is the My Presets window, which is useful if you work with custom WPA profiles. From there you can select a graph from any of the loaded profiles and simply add it to your current view.

image

 

image

 

Reference Set Tracing

WPR now also supports Reference Set analysis, along with a new graph in WPA. This basically traces every page access and release in the system, which allows you to track exactly how your working set evolved over time, and why.

image

Since page in/out operations happen very frequently, this is a high volume provider. With xperf you can enable it with the keyword REFSET. The xperf documentation only tells you

REFSET         : Support footprint analysis

but until now no publicly available tool could decode the events in a meaningful manner. I still do not fully understand how everything plays together there, but it looks powerful for deeply understanding when something is paged in or out.

image

TCP/IP Graph

For a long time WPA knew nothing about the network. That is now changing a bit.

image

ACK delays are nice, but as far as I understand TCP, the receiving side is free to delay the ACK until the server application has the data processed. If you see high ACK delay times you cannot point directly at the network; you still need to investigate whether something was also happening on the server. For network issues I still like Wireshark much more, because it understands not only raw TCP but also the higher-level protocols. For a network delay analysis, TCP retransmit events are the most important ones. The next thing to look at are the packet round trip times (RTT), where Wireshark does an incredibly good job. I have seen some ETW events in the kernel which have an SRTT (smoothed RTT) field, but I do not know how significant that actually is.

 

Load files from Zip/CAB

image

You no longer need to extract the ETL files; WPA can open them directly, which is great if you deal with compressed files a lot.

The Bad

  • The new WPT no longer works on Windows 7 so beware.
  • For WPR/UI I have mixed feelings, because it is not really configurable and records too much. The recorded amount of data easily exceeds one GB on a not-so-busy machine if I enable CPU, Disk, File and Reference Set tracing. The beta version also records Storeport traces for the Disk profile, which are only useful if you suspect bugs in your hard disk firmware or special hard disk drivers. If I enable context switch tracing with call stacks, I usually do not need the call stacks for every file/disk operation, since I will find them anyway in the context switch traces at that time. If VirtualAlloc tracing is enabled, the stack traces for the free calls are recorded as well, which are seldom necessary because double frees are most often not the issue. Memory leaks are much more common, and for those the allocation call stacks are the only relevant ones.
  • The Windows 8.1 improvements to the ETW infrastructure, which support filtering specific events or recording stack traces only for specific events, have made it into neither WPR nor xperf. I would really like to see more feature parity between what the kernel supports and what xperf allows me to configure, to reduce the sometimes very high ETW event rates down to something manageable.
  • Currently xperf can start only two kernel trace sessions (NT Kernel Logger and Circular Kernel Context Logger), but not a generic kernel tracing session, of which one can have up to 8 since Windows 8. Besides this, I am not sure if it is even possible to set the profiling frequency for a specific kernel session.

Conclusions

The new version has many improvements which can help a lot to gain new insights into your system in ways that were not possible before. Flame graphs look nice, and I hope the final version makes it possible to zoom in somehow.

The WPA viewer is really a great tool and has gained a lot of features, which shows that more and more people are using it.

Show GDI Handles by Type in Windbg

What are GDI handles? Every GDI (Graphics Device Interface) object has a handle which is associated with one specific process. If you create UI resources (not windows) like

  • Fonts
  • Bitmaps
  • Device Contexts (DC)
  • Pens
  • Regions

then you are using GDI resources. There is a hard cap of 10,000 GDI objects per process. If you have a leak in window-related code, chances are high that your process will not crash due to an out-of-memory condition but because of a generic GDI or GDI+ error. If this happens after a many-hour stress test, you will have a hard time figuring out what went wrong if all you have got is a memory dump with an exception like this:

System.Runtime.InteropServices.ExternalException was unhandled by user code
Message="A generic error occurred in GDI+."
Source="System.Drawing"
ErrorCode=-2147467259
StackTrace:
   at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters    encoderParams)
   at System.Drawing.Image.Save(Stream stream, ImageFormat format)

To make things more complex, .NET tries hard to reuse existing objects and most often avoids GDI and GDI+ entirely. You can create more than 10,000 Bitmaps in a Windows Forms or WPF application without problems. Not every object on your managed heap is backed by a GDI handle! But some of them are. This can be confusing at times.

GDI handle leaks are not easy to diagnose, since there are not many tools out there that can display more than the total count, like Task Manager with its GDI Objects column.

image

Process Explorer can display pretty much anything, but if you search for GDI objects it lets you down. Process Hacker (an open source Process Explorer clone with more features), on the other hand, can display them via the context menu.

image

The best tool I have found so far is GDIView from NirSoft. But at least on Windows 8.1 it sometimes seems to display many fewer handles than Task Manager shows, so do not trust the reported numbers entirely. It displays in the All GDI column the same value as Task Manager, which should be pretty reliable. If the other columns add up to the reported total, you should be fine. Interestingly there is no performance counter for GDI objects, which is quite odd since it is still an essential resource, although MS seems to be trying hard to retire GDI.

 

image
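By the way, if all you need from code is the same total count that Task Manager shows for a live process, the Win32 API GetGuiResources can give you that (it does not help with dumps either). A minimal sketch with a made-up class name:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class GdiHandleCount
{
    // GetGuiResources returns the number of GDI (uiFlags = 0) or USER (uiFlags = 1)
    // objects of a process - the same total that Task Manager displays
    [DllImport("user32.dll")]
    static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags);

    const uint GR_GDIOBJECTS = 0;

    static void Main(string[] args)
    {
        // args[0] is the process id of the process to inspect
        using (Process process = Process.GetProcessById(int.Parse(args[0])))
        {
            Console.WriteLine("GDI objects: {0}", GetGuiResources(process.Handle, GR_GDIOBJECTS));
        }
    }
}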

 

The problem with all of the above tools is that they do not work with memory dumps. Windbg had an extension for this, but it has not been part of the Windbg package for about 10 years, and it only worked up to Windows 2000, not even on Windows XP.

So what can you do? First we need a GDI handle leak. Bitmap.GetHbitmap creates a GDI-backed bitmap and returns its handle. The following code therefore creates a leak of 1000 bitmap GDI objects:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            Bitmap bmp = new Bitmap(100, 100);
            for (int i = 0; i < 1000; i++)
            {
                // GetHbitmap creates a new HBITMAP on every call which would have to be released
                // with gdi32!DeleteObject - we never do that, so each call leaks one GDI handle
                bmp.GetHbitmap();
            }
        }

Ok, when I put that into my WPF application, I get a nice GDI leak when I press the magic button.

image

Now we need to search the internet for helpful resources on how to dump GDI objects. After some searching I pretty much always land on security-related kernel hacking sites, which are an interesting read on their own. These guys really know their stuff, and I guess they also have an older copy of the Windows source code lying around somewhere. I found https://blog.coresecurity.com/2015/09/28/abusing-gdi-for-ring0-exploit-primitives/ which talks about a fascinating way to abuse bitmaps for kernel exploits that still seems to work today. More importantly, it also explains that the kernel manages GDI handles for each process in a specific kernel table which is mapped read-only back into the process address space.

That is important to know, since it gives us a chance to look at GDI kernel structures in user mode process dumps. So there is hope to retrieve information about GDI handles in 2016 as well. The next good thing is that the definition of the GDICell structure still seems to be valid on Windows 8 (and on my Windows 10 home installation).

typedef struct {
  PVOID64 pKernelAddress; // 0x00
  USHORT wProcessId;      // 0x08
  USHORT wCount;          // 0x0a
  USHORT wUpper;          // 0x0c
  USHORT wType;           // 0x0e
  PVOID64 pUserAddress;   // 0x10
} GDICell;                // sizeof = 0x18

The post also tells us where we need to search for the GDI handle table. It is part of the Process Environment Block (PEB), which Windbg has been able to dump for a long time. But !peb will not display the GDI handle table; instead we need to dump the structure with the ntdll symbols directly.

.process prints the PEB pointer value, which we can then feed into dt:

0:014> .process
Implicit process is now 00000000`00923000
0:014> dt ntdll!_PEB GdiSharedHandleTable 00923000
   +0x0f8 GdiSharedHandleTable : 0x00000000`013e0000 Void

Now we know the start of the GDI handle table. How big is it? The !address command can tell us that:

0:014> !address 13e0000

Usage:                  Other
Base Address:           00000000`013e0000
End Address:            00000000`01561000
Region Size:            00000000`00181000 (   1.504 MB)
State:                  00001000          MEM_COMMIT
Protect:                00000002          PAGE_READONLY
Type:                   00040000          MEM_MAPPED
Allocation Base:        00000000`013e0000
Allocation Protect:     00000002          PAGE_READONLY
Additional info:        GDI Shared Handle Table


Content source: 1 (target), length: 1000

It seems we are not the first ones to do this, since under Additional info the memory region has a really descriptive name. That looks promising. With the | command we get the current process id in hex, and we are ready for a first attempt to search the GDI handle table for GDICell structures which should contain our current PID.

0:014> |
.  0    id: 35b8    attach    name: D:\GdiLeaker\bin\Debug\GdiLeaker.exe
0:014> s -w 13e0000 1561000 35b8
00000000`013f02f8  35b8 0000 1c05 4005 0000 0000 0000 0000  .5.....@........
00000000`013f3580  35b8 0000 de05 4005 0000 0000 0000 0000  .5.....@........
00000000`013f7948  35b8 0000 8005 4005 0000 0000 0000 0000  .5.....@........
00000000`013f7fd8  35b8 0000 c80a 400a 3370 00d4 0000 0000  .5.....@p3......
00000000`013facf0  35b8 0000 5e05 4005 0000 0000 0000 0000  .5...^.@........
00000000`013fb9c8  35b8 0000 0a01 4401 0000 00c8 0000 0000  .5.....D........
00000000`013fdac8  35b8 0000 9305 4005 0000 0000 0000 0000  .5.....@........
00000000`013fe428  35b8 0000 4905 4005 0000 0000 0000 0000  .5...I.@........
....

The number of printed lines looks good. Since the kernel structure also has a wType member which should describe the GDI handle type, we dig deeper until we find http://stackoverflow.com/questions/13905661/how-to-get-list-of-gdi-handles which tells us that 05 is the bitmap GDI handle type. Since the type should follow 6 bytes after the pid, we can check the printed memory, and sure enough there is mostly a 5 at that location. The rest of the word seems to be used for other information.

Let’s create a Windbg script out of that and automate the complete process. Now attach to our GDI leaker and dump the GDI handle table before and after we leak 1000 bitmaps.

Important: If you are running on a 64 bit OS you need to attach the 64 bit Windbg even if you debug a 32 bit application!

0:013> $$>a<"D:\GdiDump\DumpGdi.txt"
GDI Handle Table 00000000013e0000 0000000001561000
GDI Handle Count      14
    DeviceContexts: 4
    Regions:        2
    Bitmaps:        2
    Palettes:       0
    Fonts:          3
    Brushes:        3
    Pens:           0
    Uncategorized:  0
0:013> g
0:014> $$>a<"D:\GdiDump\DumpGdi.txt"
GDI Handle Table 00000000013e0000 0000000001561000
GDI Handle Count      1021
    DeviceContexts: 8
    Regions:        3
    Bitmaps:        1003
    Palettes:       0
    Fonts:          3
    Brushes:        4
    Pens:           0
    Uncategorized:  0

Bingo! That looks pretty good. I have tried my script on various processes and the results are promising. It also works with full memory dumps, which closes a long-missing gap: often enough no Task Manager screenshot with the GDI handle count was taken, so this information is lost if the failing process was terminated in the meantime. In some processes ca. 10-20% of the handle information seems to be missing. Since GDIView shows pretty much the same numbers, I hope this script will not be too far off for your crashed processes. Give it a try, or post additions if you know more about the internals of GDI.

 

Here is the code of DumpGDI.txt

$$ Run as: $$>a<DumpGdi.txt
$$ Written by Alois Kraus 2016
$$ uses pseudo registers r0-5 and r8-r14

r @$t1=0
r @$t8=0
r @$t9=0
r @$t10=0
r @$t11=0
r @$t12=0
r @$t13=0
r @$t14=0
$$ Increment count is 1 byte until we find a matching field with the current pid
r @$t4=1

r @$t0=$peb
$$ Get address of GDI handle table into t5
.foreach /pS 3 /ps 1 ( @$GdiSharedHandleTable { dt ntdll!_PEB GdiSharedHandleTable @$t0 } ) { r @$t5 = @$GdiSharedHandleTable }

$$ On first call !address produces more output. Do a warmup
.foreach /pS 50 ( @$myStartAddress {!address  @$t5} ) {  }


$$ Get start address of file mapping into t2
.foreach /pS 4 /ps 40 ( @$myStartAddress {!address  @$t5} ) { r @$t2 = @$myStartAddress }
$$ Get end address of file mapping into t3
.foreach /pS 7 /ps 40 ( @$myEndAddress {!address  @$t5} ) { r @$t3 = @$myEndAddress }
.printf "GDI Handle Table %p %p", @$t2, @$t3

.for(; @$t2 < @$t3; r @$t2 = @$t2 + @$t4) 
{
  $$ since we walk bytewise through potentially invalid memory we need first to check if it points to valid memory
  .if($vvalid(@$t2,4) == 1 ) 
  {
     $$ Check if pid matches
     .if (wo(@$t2) == @$tpid ) 
     { 
        $$ increase handle count stored in $t1 and increase step size by 0x18 because we know the cell structure GDICell has a size of 0x18 bytes.
        r @$t1 = @$t1+1
        r @$t4 = 0x18
        $$ Access wType of GDICELL and increment per GDI handle type
        .if (by(@$t2+6) == 0x1 )  { r @$t8 =  @$t8+1  }
        .if (by(@$t2+6) == 0x4 )  { r @$t9 =  @$t9+1  }
        .if (by(@$t2+6) == 0x5 )  { r @$t10 = @$t10+1 }
        .if (by(@$t2+6) == 0x8 )  { r @$t11 = @$t11+1 }
        .if (by(@$t2+6) == 0xa )  { r @$t12 = @$t12+1 }
        .if (by(@$t2+6) == 0x10 ) { r @$t13 = @$t13+1 }
        .if (by(@$t2+6) == 0x30 ) { r @$t14 = @$t14+1 }
     } 
  } 
}

.printf "\nGDI Handle Count      %d", @$t1
.printf "\n\tDeviceContexts: %d", @$t8
.printf "\n\tRegions:        %d", @$t9
.printf "\n\tBitmaps:        %d", @$t10
.printf "\n\tPalettes:       %d", @$t11
.printf "\n\tFonts:          %d", @$t12
.printf "\n\tBrushes:        %d", @$t13
.printf "\n\tPens:           %d", @$t14
.printf "\n\tUncategorized:  %d\n", @$t1-(@$t14+@$t13+@$t12+@$t11+@$t10+@$t9+@$t8)

When you edit the script with Notepad++, try Perl as the language setting; the syntax highlighter does a pretty good job on it. Windbg scripts do not look nice, but they work and make dump analysis a lot easier. If you want to use Windbg efficiently, I highly recommend reading more about .foreach, .printf, and pseudo registers.

If you e.g. search for a memory leak in a managed process, you usually do !DumpHeap -stat -live to see all still-reachable objects. If you omit -live you also see all temporary objects which are not yet collected, which can be interesting too if there is a large difference between the two. Once you have identified a type, you click on the blue link in Windbg, which will then print all object instances of that type.

Let’s try that with FactoryRecord instances, where we get the list of object addresses if we click in Windbg on the blue link of the !DumpHeap -stat command.

Statistics:
      MT    Count    TotalSize Class Name
6db871a0      114         6384 System.Configuration.FactoryRecord

If we dump one object instance, we see that there is an interesting name inside it which could help to diagnose the context in which it was allocated:

0:014> !DumpObj /d 02c98ca8
Name:        System.Configuration.FactoryRecord
MethodTable: 6db871a0
EEClass:     6db532f8
Size:        56(0x38) bytes
File:        C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Configuration\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Configuration.dll
Fields:
      MT    Field   Offset                 Type VT     Attr    Value Name
7257e918  4000249        4        System.String  0 instance 02c98c50 _configKey
7257e918  400024a        8        System.String  0 instance 02c984b0 _group
7257e918  400024b        c        System.String  0 instance 02c98ad4 _name
6db86f9c  400024c       2c ...SimpleBitVector32  1 instance 02c98cd4 _flags
7257e918  400024d       10        System.String  0 instance 02c98b08 _factoryTypeName
6dc0c698  400024e       20         System.Int32  1 instance      200 _allowDefinition
6dc0c6d0  400024f       24         System.Int32  1 instance      100 _allowExeDefinition
6db87168  4000250       30 ...errideModeSetting  1 instance 02c98cd8 _overrideModeDefault
7257e918  4000251       14        System.String  0 instance 02c86bd8 _filename
725807a0  4000252       28         System.Int32  1 instance      123 _lineNumber
7257ecb8  4000253       18        System.Object  0 instance 00000000 _factory
00000000  4000254       1c                       0 instance 00000000 _errors

Let’s say we are after the group name. If we dump this object,

0:014> !DumpObj /d 02c984b0
Name:        System.String
MethodTable: 7257e918
EEClass:     721bf344
Size:        50(0x32) bytes
File:        C:\WINDOWS\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      system.web/caching
Fields:
      MT    Field   Offset                 Type VT     Attr    Value Name
725807a0  4000243        4         System.Int32  1 instance       18 m_stringLength
7257f35c  4000244        8          System.Char  1 instance       73 m_firstChar
7257e918  4000248       40        System.String  0   shared   static Empty

then we see that this FactoryRecord has something to do with system.web/caching. Now we want to dump the group names of all FactoryRecord instances to get an overview.

.foreach( @$my { !DumpHeap -short -mt 6db871a0  } ) { r @$t0 = @$my; .printf "\n%mu", poi(@$t0+8)+8 }
...
system.transactions
system.transactions

system.web
system.web
system.web
system.web
system.web
system.web
system.web
system.web
system.web
system.web

...

The basic idea is to dump the object addresses of all FactoryRecord instances. Since in .NET the Method Table (MT) pointer is the synonym for a type, we use the MT pointer as the argument to !DumpHeap -mt 6db871a0 to print only FactoryRecord instances. The additional -short limits the output to object addresses only, which makes it a perfect fit for .foreach.

For some reason you need to assign the loop variable @$my of the .foreach loop to a pseudo register @$t0 to make calculations possible. Oh, and do not forget the spaces around @$t0 = @$my. The @ before the variable $xx is not strictly necessary, but it speeds up the debugger evaluation engine a bit.

To get the pointer to the _group string, which is stored 8 bytes after the FactoryRecord pointer, we read it via poi(@$t0+8). Now we have the pointer to the string object. I know that .NET strings are null-terminated UTF-16 strings, so we can use .printf “\n%mu” to print the string contents. On x64 the offset to the char array of a .NET string is 0xc, and on x86 it is 8, which I have used above. These little shortcuts can make a dump analysis dramatically faster.

Although I like Visual Studio a lot, for more advanced things you need to use the power of Windbg. Give it a try when you analyze your next dump.