PDD – Profiler Driven Development

There are many extreme software development strategies out there. PDD can mean Panic Driven or Performance Driven Development. The first one should be avoided; the second is an MSR paper dealing with performance modelling techniques. Everyone writes tests for their software because they have proven to be useful, but why are so few people profiling their own code? Is it too difficult?

I have created a C# console project named FileWriter (https://github.com/Alois-xx/FileWriter) which writes 5000 100 KB files (= 500 MB) to a folder with different strategies. How do you decide which strategy is the most efficient and/or fastest one? The strategies are:

  • Single Threaded
  • Multi Threaded with TPL Tasks
  • Multi Threaded with Parallel.For
    • 2, 3, 4, 5 Threads

The serial version is straightforward. It creates n files and writes a string to each file until the target file size is reached. The code is located at https://github.com/Alois-xx/FileWriter/blob/master/DataGenerator.cs, but the exact details are not important.

Let's measure how long it takes:

As expected the serial version is slowest and the parallel versions are faster, with 3 threads being the fastest solution based on real world measurements. OK, problem solved. Move on and implement the next feature. That was easy.

But, as you can imagine, there is always a but: are the numbers we have measured reasonable? Let's do some math. We write 500 MB with the parallel version in 11 s. That gives a write speed of 45 MB/s. My mental performance model is that we should be able to max out a SATA SSD, which can write 430 MB/s.

This is off by nearly a factor of 10 from what the SSD is able to deliver. Oops. Where did we lose performance? We can try the Visual Studio CPU Profiler:

That shows that creating and writing to the file are the most expensive parts. But why? Drilling deeper will help to some extent, but we do not see where the code waited for disk writes, because the CPU Profiler only shows CPU consumption. We need another tool. Luckily there is the free Concurrency Visualizer extension on the Visual Studio Marketplace, which can also show thread blocking times.

After installation you can start your project from the Analyze menu with your current debugging settings.

We try to keep it simple and start FileWriter with the single threaded strategy, which is easiest to analyze:

FileWriter -generate c:\temp serial

In the Utilization tab we get a nice summary of the CPU consumption of the machine. In our case other processes were consuming a lot of CPU while we were running. White is free CPU, dark gray is System, and light gray is all other processes. Green is our profiled process FileWriter.exe.

The first observation is that once we start writing files, other processes become very busy, which looks correlated. We will come back to that later.

The Threads view shows where the Main Thread is blocked due to IO/Waiting/…

You can click on the timeline of any thread to get the corresponding stack trace in the lower view. In this case we are delayed, e.g. by 2,309 s, by a call to WriteFile. For some reason the .NET 6.0 symbols could not be resolved, but it might also just be a problem with my symbol lookup (there should be a full stack trace visible). Personally I find it difficult to work with this view when other processes are consuming the majority of CPU. But since Concurrency Visualizer uses ETW, nothing stops us from opening the Concurrency Visualizer generated ETW files from the folder C:\Users\\Documents\Visual Studio 2022\ConcurrencyVisualizer\ with WPA.

We see that FileWriter (green) starts off well, but then MsMpEng.exe (red = Defender Antivirus) kicks in and probably delays our file IO significantly. In the middle we see System (purple = Windows OS) doing strange things (antivirus again?).

Do you remember that our SSD can write 430 MB/s? The Disk Usage view in WPA shows disk writes. Our process FileWriter writes 500 MB of data, but only 4 MB are written in 53 ms (Disk Service Time). Why are we not writing to the SSD? The answer: the operating system collects a bunch of writes and flushes the data to disk later, when there is time or when the amount of written data exceeds some internal threshold. Based on actual data we need to revise our mental performance model:

We write to the OS write cache instead of the disk. We should be able to write GB/s! The actual disk writes are performed by the OS asynchronously.
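If you want to check what the SSD can actually sustain, you can bypass the OS write cache with unbuffered, write through IO. Below is a minimal native sketch (not part of FileWriter; error handling omitted) which writes 500 MB with FILE_FLAG_NO_BUFFERING, so the measured rate reflects the disk and not the cache:

#include <Windows.h>
#include <stdio.h>

int main()
{
    const DWORD BufBytes = 1024 * 1024; // 1 MB, must be a multiple of the disk sector size
    // VirtualAlloc returns page aligned memory which satisfies the buffer alignment
    // requirements of FILE_FLAG_NO_BUFFERING
    void *pBuffer = ::VirtualAlloc(nullptr, BufBytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    HANDLE hFile = ::CreateFileW(L"C:\\temp\\unbuffered.dat", GENERIC_WRITE, 0, nullptr,
                                 CREATE_ALWAYS, FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH, nullptr);
    ULONGLONG start = ::GetTickCount64();
    for (int i = 0; i < 500; i++) // 500 x 1 MB = 500 MB written past the OS write cache
    {
        DWORD written = 0;
        ::WriteFile(hFile, pBuffer, BufBytes, &written, nullptr);
    }
    ULONGLONG ms = ::GetTickCount64() - start;
    printf("500 MB unbuffered in %llu ms = %.0f MB/s\n", ms, ms > 0 ? 500.0 * 1000.0 / ms : 0.0);
    ::CloseHandle(hFile);
    return 0;
}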

Still we are at 45 MB/s. If the disk is not the problem, then we need to look where the CPU is spent. Let's turn to the gaps where the green line of FileWriter, writing on one thread to disk, vanishes. There seems to be some roadblock caused by the System process (purple), which is the only active process during that time of meditation:

When we unfold the Checkpoint Volume stacktag, which is part of my Overview Profile of ETWController, we find that whenever NTFS is optimizing the free bitmap of the NTFS volume, it traverses a huge linked list in memory, which blocks all file IO operations for the complete disk. Do you see the bluish parts in the graph where WPA shows the locations of the NTFS Checkpoint Volume samples in the trace? Automatic highlighting of the current table selection in the graph is one of the best features of WPA.

The root cause is that we delete the old files of the temp folder in FileWriter from a previous run, which produces a lot of stale NTFS entries; when many new files are created, this at some point causes the OS to wreak havoc on file create/delete performance. I have seen dysfunctional machines with a high file churn (many files created/deleted per day) where all file create/delete operations took seconds instead of sub-milliseconds. See a related post about that issue here: https://superuser.com/questions/1630206/fragmented-ntfs-hdd-slow-file-deletion-600ms-per-file. I did hit this issue several times when I cleaned up ETW folders, which contain for each ETL file a large list of NGENPDB folders with the .ni.pdb files. At some point Explorer is stuck for many minutes deleting the last files. The next day, and a defrag later (the NTFS tables still need defragmentation, even on SSDs), things are fast again. With that data we need to revisit our mental performance model again:

Creating/deleting large amounts of files might become very slow depending on internal NTFS states, which are in-memory structures and manifest as high CPU consumption in the kernel inside the NtfsCheckpointVolume method.

That view is already good, but we can do better if we record ETW data on our own and analyze it at scale.

How about a simple batch script that

  • Starts ETW Profiling
  • Executes your application
  • Retrieves the test execution time as return value from your application
  • Stops ETW Profiling
  • Writes an ETW file whose name contains the measured test duration and test case name
  • Extracts ETW data for further analysis

That is what the RunPerformanceTests.cmd script of the FileWriter project does. The tested application is FileWriter.exe, which writes files with different strategies to a folder. When it exits, it uses the duration of the test case in ms as the process exit code. For more information refer to the ProfileTest.cmd command line help.
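Inside ProfileTest.cmd the measured duration could be picked up like in this simplified sketch (illustrative variable names, not the literal script contents):

FileWriter.exe -generate "%OutDir%" serial
set DurationInMs=%ERRORLEVEL%
wpr -stop "%EtlDir%\CreateSerial_%DurationInMs%ms.etl"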

FileWriter and Measurement Details

Read on if you are eager for details about how the scripts generate ETW measurement data, or skip to the next headline.

RunPerformanceTests.cmd
call ProfileTest.cmd CreateSerial       FileWriter.exe -generate "%OutDir%" serial
call ProfileTest.cmd CreateParallel     FileWriter.exe -generate "%OutDir%" parallel
call ProfileTest.cmd CreateParallel2    FileWriter.exe -generate "%OutDir%" parallel -threads 2
...
call ProfileTest.cmd CreateTaskParallel FileWriter.exe -generate "%OutDir%" taskparallel

The most “complex” part of ProfileTest.cmd is where several ETW profiles of the supplied MultiProfile.wprp are started. The actual start command in the ProfileTest.cmd script is

wpr -start MultiProfile.wprp!File -start MultiProfile.wprp!CSwitch -start MultiProfile.wprp!PMCLLC

But with shell scripting and the correct escape characters it becomes this:

"!WPRLocation!" -start "!ScriptLocation!MultiProfile.wprp"^^!File -start "!ScriptLocation!MultiProfile.wprp"^^!CSwitch -start "!ScriptLocation!MultiProfile.wprp"^^!PMCLLC

This line starts WPR with File IO tracing, Context Switch tracing to see thread wait/ready times and some CPU counters for Last Level Cache misses. Disk and CPU sampling is already part of these profiles.

The PMCLLC profile can cause issues if you are using VMs like Hyper-V. To profile low level CPU features you need to uninstall all Windows Hyper-V features to get Last Level Cache CPU tracing running.

You can check if you have access to your CPU counters via

C:\>wpr -pmcsources
Id  Name                        
--------------------------------
  0 Timer                       
  2 TotalIssues                 
  6 BranchInstructions          
 10 CacheMisses                 
 11 BranchMispredictions        
 19 TotalCycles                 
 25 UnhaltedCoreCycles          
 26 InstructionRetired          
 27 UnhaltedReferenceCycles     
 28 LLCReference                
 29 LLCMisses                   
 30 BranchInstructionRetired    
 31 BranchMispredictsRetired    
 32 LbrInserts                  
 33 InstructionsRetiredFixed    
 34 UnhaltedCoreCyclesFixed     
 35 UnhaltedReferenceCyclesFixed
 36 TimerFixed                  

If you get a list with just one entry named Timer then you will not get CPU counters, because you have installed one or several Hyper-V features. In theory this issue should not exist because it was fixed with Windows 10 RS5 in 2019 (https://docs.microsoft.com/en-us/archive/blogs/vancem/perfview-hard-core-cpu-investigations-using-cpu-counters-on-windows-10), but I cannot get these CPU counters when Hyper-V is active on Windows 10 21H2.

The easiest fix is to remove "!ScriptLocation!MultiProfile.wprp"^^!PMCLLC from the ProfileTest.cmd script, or to modify MultiProfile.wprp and set in all HardwareCounter nodes

<HardwareCounter Id="HardwareCounters_EventCounters" Base="" Strict="true">

the value Strict="false" to prevent WPR from complaining when it could not enable the hardware counters. The MultiProfile.wprp profile uses nearly all features of WPR which are possible according to the not really documented xsd schema, to streamline ETW recording to your needs. The result of this “reverse” documenting is an annotated recording profile that can serve as a base for your own recording profiles. Which profiles are inside it? You can query it from the command line with WPR:

D:\Source\FileWriter\bin\Release\net6.0>wpr -profiles MultiProfile.wprp

Microsoft Windows Performance Recorder Version 10.0.19041 (CoreSystem)
Copyright (c) 2019 Microsoft Corporation. All rights reserved.

   Default                   
               (CPU Samples/Disk/.NET Exceptions/Focus)
   CSwitch    +(CPU Samples/Disk/.NET Exceptions/Focus/Context Switch)
   MiniFilter +(CPU Samples/Disk/.NET Exceptions/Focus/MiniFilter)
   File       +(CPU Samples/Disk/.NET Exceptions/Focus/File IO)
   Network    +(CPU Samples/Disk/.NET Exceptions/Focus/Network)
   Sockets    +(CPU Samples/Disk/.NET Exceptions/Focus/Sockets)
   VirtualAlloc (Long Term)
   UserGDILeaks (Long Term)
   PMCSample  
     PMC Sampling for PMC Rollover + Default
   PMCBranch  
     PMC Cycles per Instruction and Branch data - Counting
   PMCLLC                      
     PMC Cycles per Instruction and LLC data - Counting
   LBR                         
     LBR - Last Branch Record Sampling

Or you can load the profile into WPRUI, which is part of the Windows SDK, where the profiles will show up under the Custom Measurements node:

All profiles can be mixed (except Long Term) to record common things for unmanaged and .NET code with the least amount of data. A good choice is the Default profile, which enables CPU sampling, some .NET events, Disk and Window Focus events. These settings are based on years of troubleshooting and should be helpful recording settings in your case too. The supplied MS profiles record virtually everything, which often is a problem: the trace sizes become huge (many GB) or, if memory recording is used, the history is one minute or even less on a busy system which has problems.

Generated Data

RunPerformanceTests.cmd saves the ETW data to the C:\Temp folder

After data generation it calls ExtractETW.cmd, which is one call to ETWAnalyzer that loads several files in parallel and creates JSON files from the ETW files:

ETWAnalyzer -extract all -fd c:\temp\*Create*.etl -symserver MS -nooverwrite

This will generate JSON files in C:\Temp\Extract which can be queried with ETWAnalyzer without the need to load every ETL file into WPA.

As you can see, the 500+ MB ETL files are reduced to ca. 10 MB, a size reduction by a factor of 50, while keeping the most interesting aspects. There are multiple JSON files per input file to enable fast loading. If you query with ETWAnalyzer e.g. just CPU data, the other *Derived* files are not even loaded, which keeps your queries fast.

Working With ETWAnalyzer and WPA on Measured Data

To get an overview of whether all executed tests suffer from a known issue, you can query the data from the command line. To get the top CPU consumers in the kernel, not by method name but by stacktag, you can use this query:

EtwAnalyzer -dump CPU -ProcessName System -StackTags * -TopNMethods 2

I am sure you ask yourself: what is a stacktag? A stacktag is a descriptive name for key methods in the stack trace of an ETW event. WPA and TraceProcessing can load a user configured XML file which assigns a descriptive (stacktag) name to a stack trace, or Other if no matching stacktag rule could be found. The following XML fragment defines e.g. the above NTFS Checkpoint Volume stacktag:

  <Tag Name="NTFS Checkpoint Volume">
    <Entrypoint Module="Ntfs.sys" Method="NtfsCheckpointVolume*"/>
  </Tag>

See default.stacktags of ETWAnalyzer for how it can be used to flag common issues like OS problems, AV scanner activity, encryption overhead, … with descriptive stacktags. If more than one stacktag matches, the first matching stacktag (deepest in the stack trace) wins. You can override this behavior by setting a priority value for a given tag.
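If e.g. the NTFS Checkpoint Volume tag should always win over other matching tags, you can give it a higher priority. A sketch (assuming the stacktag schema's Priority attribute, which is used to resolve multiple matches):

  <Tag Name="NTFS Checkpoint Volume" Priority="10">
    <Entrypoint Module="Ntfs.sys" Method="NtfsCheckpointVolume*"/>
  </Tag>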

Based on the 7 runs we find that the CheckpointVolume issue is part of every test, blocking file writes for up to 6+ seconds. Other test runs do not suffer from this, which makes comparison of the measured numbers questionable. Again we need to update our mental performance model:

Tests which are slower due to CheckpointVolume overhead should be excluded to make measurements comparable where this internal NTFS state delay is not occurring.

The exact number would be the sum of CPU + Wait; because this operation is single threaded, we get exact numbers here. ETWAnalyzer can sum method/stacktag CPU/Wait times with the -ShowTotal flag. If we pass Method, the input lines are kept; if we use Process, only process and file totals are shown.

EtwAnalyzer -dump CPU -ProcessName System -StackTags *checkpoint* -ShowTotal method

Now we know that file IO is delayed by up to 7461 ms, without the need to export the data to Excel. ETWAnalyzer allows you to export all data you see at the console to a CSV file. When you add -csv xxx.csv you can aggregate/filter the data further as you need it. This is supported by all -Dump commands!

We can also add methods to the stacktags by adding -methods *checkpoint*, to see where the stacktag got its data from:

EtwAnalyzer -dump CPU -ProcessName System -StackTags *checkpoint* -methods *checkpoint*

The method NtfsCheckpointVolume has 873 ms, but the stacktag has only 857 ms. This is because other stacktags can “steal” CPU sampling/wait data which would otherwise be attributed to it. This is a curse and a feature rolled into one. It is great, because if you add up all stacktags you get the total CPU/Wait time of a given process across all threads. It is bad, because expensive things might look less expensive in one stacktag because other stacktags have “stolen” some of its CPU/Wait time.

That is all nice, but what is our FileWriter process doing? Let's concentrate on the single threaded use case by filtering the files to *serial* with -FileDir, or -fd, which is its shorthand notation. You need a * at the beginning, because the filter clause matches against the full path name. This also allows you to query a complete directory tree with -recursive, where -fd *Serial*;!*WithAV* -recursive defines a file query starting from the current directory for all Serial files, excluding folders or files which contain the name WithAV. To test which files match you can use -dump Testrun -fd xxxx -printfiles.

EtwAnalyzer -dump CPU -ProcessName FileWriter -stacktags * -Filedir *serial*

The test takes 12 s, but it spends 2,2 s alone on Defender activity. The other stacktags do not give away a clear problem into which we could drill deeper. One interesting thing is that we spend 750 ms in thread sleeps. A quick check in WPA shows

that these are coming from the .NET runtime, which recompiles busy methods later on the fly. Stacktags are great to flag past issues and categorize activity, but they also tend to decompose expensive operations into several smaller ones. E.g. if you write to a file you might see WriteFile and Defender tags splitting the overall costs of a call to WriteFile. But what stands out is that on my 8 core machine a single threaded FileWriter fully utilizes the machine with Defender activity. We need to update our mental performance model again:

Writing to many small files on a single thread will be slowed down by the Virus Scanner a lot. It also generates a high load on the system which slows down everything else.

Since we wrote the application, we can go down to specific methods to see where the time is spent. To get an overview, we use the top level view in WPA of our CreateFile method:

In WPA we find that

  • CreateFiles
    • CPU (CPU Usage) 10653 ms
    • Wait Sum 672 ms
    • Ready Time Sum 1354 ms
      • This is the time a thread had to wait for a CPU to become free when all CPUs were used

which adds up to 12,679 s, which pretty closely (16 ms off) matches the measured 12,695 s. It is always good to verify what you have measured. As you have seen, we already had to change our mental performance model quite often. The top CPU consumers in the CreateFiles method are

  • System.IO.StreamWriter::WriteLine
  • System.IO.Strategies.OSFileStreamStrategy::.ctor
  • System.IO.StreamWriter::Dispose
  • System.Runtime.CompilerServices.DefaultInterpolatedStringHandler::ToStringAndClear
  • System.Buffers.TlsOverPerCoreLockedStacksArrayPool`1[System.Char]::Rent
  • System.Runtime.CompilerServices.DefaultInterpolatedStringHandler::AppendFormatted
  • System.IO.Strategies.OSFileStreamStrategy::.ctor
  • System.IO.StreamWriter::WriteLine

What is surprising is that ca. 2 s of our 12 s are spent creating interpolated strings. The innocent line

 string line = $"echo This is line {lineCount++}";

costs ca. 16% of the overall performance. Not huge, but still significant. Since we are trying to find out which file writing strategy is fastest, this is OK, because the overhead is the same for all test cases.

We can get the same view as in WPA for our key methods with the following ETWAnalyzer query

EtwAnalyzer -dump CPU -ProcessName FileWriter -fd *serial* -methods *StreamWriter.WriteLine*;*OSFileStreamStrategy*ctor*;*StreamWriter.WriteLine*;*DefaultInterpolatedStringHandler*AppendFormatted*;*TlsOverPerCoreLockedStacksArrayPool*Rent;*DefaultInterpolatedStringHandler.ToStringAndClear;*StreamWriter.Dispose;*filewriter* -fld

You might have noticed that the method names in the list have :: between class and method. ETWAnalyzer always uses ., which is less typing and in line with old .NET Framework conventions, which used :: for JITed code and . for precompiled code.

Additionally I have smuggled -fld into the query. It is the shorthand for -FirstLastDuration, which shows the difference between the last and first time the method shows up in CPU sampling or context switch data. Because we know that our test only measures DataGenerator.CreateFile calls, we see in the Last-First column 12,695 s, which matches our measured duration down to the ms! This option can be of great value if you want to measure the wall clock time of parallel asynchronous activity. At least for .NET code, the state machine classes generated by the C# compiler contain central methods which are invoked during init and finalization of asynchronous activities, which makes it easy to “measure” the total duration even if you have no additional trace points at hand. You can add -fld s s or -fld local local or -fld utc utc to also show the first and last time in

  • Trace Time (seconds)
  • Local Time (customer time in the time zone the machine was running)
  • UTC Time

See the command line help for further options. ETWAnalyzer has an advanced notion of time which can be formatted the way you need it.

The WPA times for Wait/Ready are the same in ETWAnalyzer, but CPU is not. The reason is that to judge method CPU consumption exactly, you need to look in WPA at CPU Usage (Sampled), which shows per method CPU data and is more accurate. ETWAnalyzer merges this automatically for you.

CPU Usage (Precise) Trap

The CPU Usage (Precise) view in WPA shows context switch data. Whenever your application calls a blocking OS API like Read/WriteFile/WaitForSingleObject, …, your thread is switched off the CPU, which is called a context switch. You will therefore see in this graph only (simplification!) methods which called a blocking OS method. If you have e.g. a busy for loop in method int Calculate() and the next called method is Console.WriteLine, like this,

int result = Calculate();
Console.WriteLine(result);

to print the result, then you will see in CPU Usage (Precise) all CPU attributed to Console.WriteLine, because that method caused the next blocking OS call. All other non blocking methods called before Console.WriteLine are invisible in this view. To get per method data you need to use the CPU data of CPU sampling, which gathers its data in a different way and measures CPU consumption at method level much better. Is context switch data wrong? No. The per thread CPU consumption is exact, because this is a trace of the OS scheduler when it moves threads on/off a CPU. But you need to be careful not to interpret the method level CPU data as the true CPU consumption of that method.

CPU Usage (Sampled) Trap

Let's compare both CPU views, (Sampled) vs (Precise):

The StreamWriter.Dispose method consumes 887 ms in the Sampled view, while in the Context Switch view we get 1253 ms. In this case the difference is 41%! With regards to CPU consumption you should use the Sampled view for methods to get the most meaningful value.

Every WPA table can be configured to add a Count column. Many people fall into the trap of interpreting the CPU Sampling Count column as the number of method calls, while it actually just counts the number of sampling events. CPU sampling works by stopping all running threads 1000 (default) times per second, taking a full stack trace and writing that data to ETW buffers. If method A() shows up 1000 times, it gets 1000 * 1 ms of CPU attributed. CPU sampling has statistical bias and can ignore e.g. method C() if it executes just before, or after, the CPU sampling event takes a stack trace.

ETWAnalyzer Wait/Ready Times

The method list shown by ETWAnalyzer -Dump CPU -methods xxx is a summation across all threads in a process. This serves two purposes

  • Readability
    • If each thread would be printed the result would become unreadable
  • File Size
    • The resulting extracted Json file would become huge

This works well for single threaded workloads, but what about 100 threads waiting for something? WPA sums all threads together if you remove the thread grouping. ETWAnalyzer has chosen a different route: it sums only the non overlapping Wait/Ready times. This means that if more than one thread is waiting, the wait is counted only once.

The Wait/Ready times for multi threaded methods are the sum of all non overlapping Wait/Ready times of all threads. If e.g. a method was blocked on many threads, you will see as maximum wait time your ETW recording time, and not some arbitrarily high number which no one can reason about. It is still complicated to judge parallel applications, because with heavy thread oversubscription you will see both Wait and Ready times reach your ETW recording time. This is a clear indication that you have too many threads and should use fewer of them.
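To illustrate what non overlapping summation means, here is a small sketch (an illustration, not ETWAnalyzer's actual implementation) which merges the per thread wait intervals and sums the length of their union:

#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Sum the union of (start, end) intervals: overlapping waits are counted only once
double SumNonOverlapping(std::vector<std::pair<double, double>> intervals)
{
    std::sort(intervals.begin(), intervals.end()); // sort by start time
    double total = 0, curStart = 0, curEnd = -1;
    for (const auto &iv : intervals)
    {
        if (iv.first > curEnd) // gap: close the current merged interval
        {
            if (curEnd >= 0) total += curEnd - curStart;
            curStart = iv.first;
            curEnd = iv.second;
        }
        else // overlap: extend the current merged interval
        {
            curEnd = std::max(curEnd, iv.second);
        }
    }
    if (curEnd >= 0) total += curEnd - curStart;
    return total;
}

int main()
{
    // Two threads waiting 0-4 s and 2-6 s count as 6 s of wait time, not 8 s
    printf("%.1f s\n", SumNonOverlapping({ {0.0, 4.0}, {2.0, 6.0} }));
    return 0;
}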

Make Measurements Better

We have found at least two fatal flaws in our test:

  • The Anti Virus scanner is consuming all CPU, even in the single threaded test
    • Add an exclude rule for the folder via PowerShell
      • Add-MpPreference -ExclusionPath "C:\temp\Test"
    • To remove it use
      • Remove-MpPreference -ExclusionPath "C:\temp\Test"
  • Some test runs suffer from large delays due to CheckpointVolume calls, which introduce seconds of latency not caused by our application
    • Change the script to delete the temp files before the test is started, to give the OS a chance to get into a well defined file system state

When we do this and we repeat the tests a few times we get for the single threaded case:

Nearly a factor of 3 faster without changing a single line of code is pretty good. The very high numbers come from overloading the system when something else is running in the background. The 12 s numbers are with enabled Defender, while the < 10 s numbers are with a Defender exclusion rule but various NtfsCheckpoint delays. When we align everything correctly, we end up with 7 s for the single threaded case, which is a write rate of 71 MB/s. The best value is now 3,1 s for 4 threads with 161 MB/s.

What else can we find? When looking at variations, we find another influencing factor during file creation:

We have a single threaded test run with 9556 ms and a faster one with 7066 ms. Besides the CheckpointVolume issue, we lose another 1,3 s due to reading MFT and paged out data. The faster test case (Trace #2) reads just 0,3 ms from the SSD, while in the slower run (Trace #1) we read 1,3 s from disk. The amount of data is small (8 MB), but for some reason the SSD was handing out the data slowly. Yes, SSDs are fast, but sometimes, when they are busy with internal reorganization, they can become slow. SSD performance is a topic for another blog post.

We need to update our mental performance model

File system cache state differences can lead to wait times due to hard faults. Check the method MiIssueHardFault to see whether the tests have comparable numbers.

Etwanalyzer -dump CPU -pn Filewriter -methods MiIssueHardFault

Those are another 2 s of difference which we can attribute to file system cache state and paged out data.

Measuring CPU Efficiency

We know that 4 threads are fastest after removing random system noise from our measured data. But is it also the most efficient solution? We can dump the CPU Performance Monitoring Counters (PMC) to the console with

ETWAnalyzer -dump PMC -pn FileWriter

The most fundamental metric is CPI, which is Cycles/Instructions. If you execute the same use case again you should execute the same number of instructions. When you go for multithreading you need to take locks and coordinate threads, which costs extra instructions. The most energy efficient solution is therefore the single threaded solution, although it is slower. To make the metric concrete: at a CPI of 1,2 one billion instructions need 1,2 billion cycles, while at a CPI of 0,69 they need only 690 million; the executed work is identical, but the CPU stalls far less.

To get an overview you can export the data from the current directory, together with the initial measurements we started with, into a CSV file. The following query will dump the Performance Monitoring Counters (PMC) to a CSV file. You can use multiple -fd queries to select the data you need:

ETWAnalyzer -dump PMC -pn FileWriter -fd ..\Take_Test1 -fd .  -csv PMC_Summary.csv

The CSV file contains the columns Test Time in ms and CPI, to make it easy to correlate the measured numbers with something else, CPI in this case. This pattern is followed by all -Dump commands of ETWAnalyzer. From this data we can plot test timing vs CPI (Cycles per Instruction) in a pivot table:

The initial test where we had

  • Defender
  • CheckpointVolume

issues was not only much slower, but also much worse in terms of CPU efficiency: a CPI of 1,2 vs 0,69 (smaller is better). This proves that even if Defender were highly efficient on all other cores and did not block our code at all, we would still do much worse (73%), because even when we do exactly the same work, CPU caches and other internal CPU resources are slowing us down. With this data you can prove why you are slower even when everything else is the same.

Conclusions

If you have read this far, I am impressed. This is a dense article with a lot of information. I hope I have convinced you that profiling your system with ETW, and not only your code, for strategic test cases is an important asset in automated regression testing.

Credits: NASA, ESA, CSA, and STScI (https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-delivers-deepest-infrared-image-of-universe-yet)

If you look only at your code, you will miss the universe of things that are moving around in the operating system and other processes. You will end up with fluctuating tests which no one can explain. We do the same thing 10 times and we still get outliers! The key is to understand not only your own code, but also the system in which it executes. Both need attention and coordination to achieve the best performance.

To enable that, I have written ETWAnalyzer to help you speed up your own ETW analysis. If you go for ETW, you are talking about a lot of data which needs a fair amount of automation to become manageable. Loading hundreds of tests into WPA is not feasible, but with ETWAnalyzer mass data extraction and analysis has never been easier. The list of supported queries is growing. If you miss something, file an issue so we can take a look at what the community needs.

To come back to the original question: is profiling too difficult? If you use only UI based tools, then the overhead of doing this frequently is definitely too high. But the scripting approach shown by FileWriter, recording data and extracting it with ETWAnalyzer to make it queryable, gives you a lot of opportunities. ETWAnalyzer comes with an object model for the extracted JSON files. You can query for known issues directly and flag past issues within seconds, without the need to load the profiling data into a graphical viewer to get an overview.

How Buffered IO Can Ruin Performance

Paging can cause bad interactive performance. This happens quite often, but very little content exists about how to diagnose and fix paging issues. It is time to change that (a bit). I present here a deep dive into how paging really works for some workloads and how it caused a severe interactive performance issue.

The Observation

It was reported that a software version performed significantly worse than its predecessors when the system was under heavy load. Measurements proved that hard page faults were the issue.

A more detailed analysis showed that the big page out rate of nearly 500 MB/s was caused by explicit working set trims. After removing these explicit EmptyWorkingSet calls, performance went back to normal. The question remained why EmptyWorkingSet suddenly was a problem, because these calls had been there for a long time.

How Paging Really Works

To make such measurements on Windows 10 I will give away some secrets. Some of you might already have found the MMAgent PowerShell cmdlets. If you look e.g. at the Windows Server 2012 memory management improvements, you will find some interesting switches which are also present in Windows 10 Anniversary. You can configure the Windows memory management via PowerShell:

PS C:\WINDOWS\system32> Get-MMAgent

ApplicationLaunchPrefetching : True
ApplicationPreLaunch         : True
MaxOperationAPIFiles         : 256
MemoryCompression            : False
OperationAPI                 : True
PageCombining                : True
PSComputerName               :

On Windows 10 Anniversary the switches MemoryCompression and PageCombining are enabled by default. If you blame memory compression for bad performance, you can switch off MemoryCompression by calling

Disable-MMAgent -mc

To enable it again you can call

Enable-MMAgent -mc

The MemoryCompression switch seems not to be documented so far. If MemoryCompression is already enabled and you want to turn it off, you need to reboot. If you have disabled memory compression successfully, you should see a Compressed value of zero in Task Manager all the time. If you enable memory compression, the setting takes immediate effect (no reboot necessary).

Now let's perform a little experiment: what happens when we trim the working set of a two GB process with my CppEater while memory compression is disabled?

C:\>CppEater.exe 2000

Below is the screenshot of the moment when CppEater flushed its working set. This left some memory over in the Modified list, because the memory manager did not see the need to completely flush all of the 2 GB out into the page file.

[image]

When the memory was flushed you see an IO spike on disk D where my page file resides.

[image]

So far, so expected. Below is a diagram which shows the order of operations happening there.

[image]

In ETW traces this looks like this:

  • When the memory is flushed we have two GB of data in the modified list.
  • Later the OS flushes two GB of data into the page file.

[image]

From that picture it is clear that when we touch the paged out memory again, we will need to read two GB of data from the page file. Now let's perform the experiment and see what happens:

[image]

Our active memory usage rises again by two GB, but strangely there is zero disk activity when we access the paged out memory. From our mental model we should see two GB of disk IO on the D drive. Let's have a look at ETW traces while page file writing is happening.

[image]

While we write two GB of data to the page file, the Standby list increases by the same amount during that time. That means that although the data is persistently written out to the page file, we still have all of the written data in the file system cache in the form of the Standby list. Since the contents of the page file are still in the file system cache (Standby list), we see no hard page faults when we try to access our paged out data again.

That is a good working hypothesis which we can test now. We only need to read a large amount of other file data via buffered IO, without closing the file handles in between, which will flush the existing file system cache contents. Then we should see our CppEater process hitting the hard disk to read its page file contents.

Below is a small application that reads the Windows installer cache, which is ca. 30 GB in size and therefore more than enough to flush any existing file system cache contents:

using System;
using System.Collections.Generic;
using System.IO;

class CacheFlusher
{
    static int Main(string[] args)
    {
        // Keep all file handles open so the read data stays in the file system cache
        var streams = new List<FileStream>();
        byte[] buffer = new byte[256 * 1024 * 1024];
        foreach (var f in Directory.EnumerateFiles(@"C:\windows\installer", "*.*"))
        {
            try
            {
                var file = new FileStream(f, FileMode.Open, FileAccess.Read);
                streams.Add(file);
                // Read the complete file via buffered IO to displace other cached data
                while (file.Read(buffer, 0, buffer.Length) > 0)
                {
                }
            }
            catch (Exception)
            {
                // Ignore locked or inaccessible files
            }
        }
        return 0;
    }
}

With a flushed file system cache, CppEater.exe has no secret hiding place for its paged out memory anymore. Now we see the expected two GB of hard disk reads, minus the still not flushed out modified memory.

[image]

The earlier picture of what happens when data is written to the page file lacks an important detail: it misses the fact that the Modified list transitions to the Standby list, which is just another name for the file system cache.

[image]

The Explanation For Bad Performance

Now we have all the missing pieces together. The initial assumption that flushing the working set causes the OS to write the process memory into the page file is correct, but only half of the story. When the page file data is written, the memory from the Modified list becomes part of the file system cache. When the pages are later accessed again, it depends on the current state of the file system cache whether we see soft or hard faults, with dramatic effects on the observed performance.

The bad performing software version caused more buffered reads than before. Those reads pushed the cached page file data out of the file system cache. The still happening page faults were no longer cheap soft page faults but hard page faults. That explains the dramatic effects on interactive performance. The added buffered IO reads surfaced the misconception that flushing the working set is a cheap operation. Flushing the working set and soft faulting it back again is only cheap if the machine is not under memory pressure. If the memory condition becomes tight or the file system cache gets flushed, you will see the real costs of hard page faults. If you still need to access that memory in a fast way, the best thing to do is to not flush it. Otherwise you might see random hard page faults, even if the machine still has plenty of free memory, due to completely unrelated file system activity! This is true for Windows Server 2008 and 2012. With Windows Server 2016, which also employs memory compression just like Windows 10, things change a bit.

Windows 10 Paging

With memory compression we need to change our picture again. The Modified list is not flushed out to disk but compressed and then added to the working set of the Memory Compression process, which now acts as a cache. Since the page file contents are no longer shared with the Standby list, we will not see this behavior on Windows 10 or Server 2016 machines with memory compression enabled.

[image]

When we execute the same use case under Windows Server 2016, where we flush the file system cache while memory compression is enabled, we will see

[image]

that the memory from the MemCompression process stays cached and is semi hard faulted back into the CppEater process in 3 s, which is much faster than the previous 10 s when the page faults were hitting the hard disk. It is therefore a good idea for most workloads to keep memory compression enabled. It not only compresses the memory, but the cached page file contents are also no longer subject to Standby list pollution, which should make system performance much more predictable than before.

Conclusions

Windows has many hidden caches which make slow operations (like hard page faults) fast again. But at the worst point in time these caches are no longer there and you will experience the uncached bad performance. It is interesting that RamMap does not show the page file as the biggest Standby list consumer on machines where memory compression is not enabled. To prevent such hard to find errors in the first place, you should measure what things cost with detailed profiling (for me that is ETW) and then act on the measured data. If someone has a great idea to make things faster, you should always ask for the detailed profiling data. Pure timing measurements can be misleading. If a use case has become 30% faster but you use 3x more memory and 2x CPU, is this optimization still a great idea?

Windows 10 Memory Compression And More

If you expect performance gains from using a specific API, you must measure on the actual target operating system under a realistic load. One such API about which many myths are heard is SetProcessWorkingSetSize, which can be used to trim the current working set of a process. For a long time you have also been able to use EmptyWorkingSet, which does the same job with a simpler API call. One usage scenario might be that our process is not used for some time, so it might be a good idea to page out its memory to make room for other processes which need the physical memory more urgently. The interesting question is: is this a good idea? To answer that question one needs to understand how memory management works in detail. To begin our journey into the inner workings of the operating system we need to know:

How Is Memory Allocated?

For that we look under the covers at how memory allocation works, from the operating system perspective and from the application developer's view. From a developer's point of view, memory is only a new xxxx away. But what happens when you do that? The answer depends highly on the used programming language and its implementation. For simplicity I stick here to C/C++.

Below is the code for a small program named CppEater.exe that allocates and accesses memory several times before and after it gives back its memory to the OS by trimming its working set.

  1. Allocate 2 GB of data with new.
  2. Touch the first byte of every memory page.
  3. Touch the first byte of every memory page.
  4. Touch all bytes (2 GB).
  5. Trim the working set (call EmptyWorkingSet).
  6. Wait for 15 s.
  7. Touch the first byte of every memory page.
  8. Touch all bytes (2 GB).

// CppEater.cpp source. Update:  Full source is at https://1drv.ms/f/s!AhcFq7XO98yJgcg2Ko4dmxFUmQcAKA

#include <stdio.h>
#include <Windows.h>
#include <Psapi.h>
#include <chrono>
#include <vector>

template<typename T> void touch(T *pData, int dataCount, const char *pScenario, bool bFull = false)
{
    auto start = std::chrono::high_resolution_clock::now();    
    int pageIncrement = bFull ? 1 : int(4096 / sizeof(T));
    for (int i = 0; i < dataCount; i += pageIncrement) // touch all memory or only one integer every 4K
    {
        pData[i] = i;
    }
    auto stop = std::chrono::high_resolution_clock::now();
    long long durationInMs = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    char pChars[512];
    sprintf(pChars, "%s: %lldms, %.3f ms/MB, AllTouched=%d", pScenario,
         durationInMs, 1.0*durationInMs / ( 1.0 * dataCount * sizeof(T) / (1024 * 1024)), bFull);
    printf("\n");
    printf("%s", pChars);
    ::Sleep(500);
}

int main()
{
    const int BytesToAllocate = 2 * 1000 * 1024 * 1024;
    const int NumberOfIntegersToAllocate = BytesToAllocate / sizeof(int);
    auto bytes = new int[BytesToAllocate/sizeof(int)];                     // Allocate 2 GB
    touch(bytes, NumberOfIntegersToAllocate, "First page touch");          // touch only first byte in every 4K page
    touch(bytes, NumberOfIntegersToAllocate, "Second page touch");         // touch only first byte in every 4K page
    touch(bytes, NumberOfIntegersToAllocate, "All bytes touch", true);     // touch all bytes

    ::EmptyWorkingSet(::GetCurrentProcess());                               // Force pageout and wait until the OS calms down
    ::Sleep(15000);

    touch(bytes, NumberOfIntegersToAllocate, "After Empty");               // touch only first byte in every 4K page
    touch(bytes, NumberOfIntegersToAllocate, "Second After Empty", true);  // touch all bytes
    return 0;
}

When you record the activity of this program with ETW, you find a 2 GB allocation going through the C/C++ heap manager, which ends up in VirtualAlloc. VirtualAlloc is to my knowledge the most basic API to request memory from Windows. All heap segment allocations of the C/C++/C# heaps go through this API.

|    |    |- CppEater.exe!main
|    |    |    |- CppEater.exe!operator new
|    |    |    |    ucrtbase.dll!malloc
|    |    |    |    ntdll.dll!RtlpAllocateHeapInternal
|    |    |    |    ntdll.dll!RtlpAllocateHeap
|    |    |    |    ntdll.dll!NtAllocateVirtualMemory             2 GB alloc with MEM_COMMIT|MEM_RESERVE

The only difference between the C/C++/C# heap managers is that the heap memory segments are managed differently depending on the target language. In C/C++, objects cannot be moved by the heap manager, so it is important to prevent heap fragmentation with sophisticated allocation balancing algorithms. The managed (.NET) heap is controlled by the garbage collector, which can move objects around to compact the heap, but it has to be careful not to fragment the heap as well with pinned objects. Once the heap manager has got its big block of memory, it will satisfy all following allocation requests from the heap segment(s). When all of them are full or too fragmented, the heap manager will request another round of memory via VirtualAlloc.

It is possible to track down managed and unmanaged memory leaks with VirtualAlloc ETW tracing. That works if the leak is large enough to trigger a heap segment (re)allocation. But you need to be careful how you interpret the “leaking” stack, because it can also happen that some leak causes allocations in the heap, but the heap segment (re)allocation happens due to large temporary objects which are not the root cause.

Committed/Private Memory

Committing memory is quite fast and completes in ca. 1 ms for a new[2GB] memory allocation request which goes through VirtualAlloc. Was that really everything? When you commit memory you will see no increase in your working set. If you put a sleep after the new operator and look at the process in Task Manager, then you see this:

[image]

The 2 GB commit is there, but your working set is still only 2 MB. That means that from an operating system point of view your application has only got the promise to get that much memory. Since you did not yet access any of the committed memory via e.g. a pointer access, the OS had no need yet to allocate a new 4 K memory page, zero it out and soft fault it into your process working set at the location where you access the memory the first time. There is quite a lot going on under the covers to make your process address space look flat, where no other process can interfere with you and your allocated memory. In reality you have a virtual address space for each process, where the OS is responsible for swapping physical memory in/out of your process as it sees fit. The working set is, to a first approximation, the RAM your application has allocated in the physical RAM modules on your mainboard. Some things like DLLs are the same in all processes and can be shared by multiple processes. That is the reason why you have Working Set, Working Set Shared and Working Set Private columns in Task Manager. If you added up all working sets of a fully utilized machine, it would exceed the installed memory by far, because shared memory is counted multiple times. That makes the exact calculation of the actually used memory of an application quite difficult.
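You can watch that behavior with a few lines of code. The following sketch (illustration only, minimal error handling; link against Psapi.lib) commits 2 GB and prints the working set before and after the pages are touched the first time:

#include <Windows.h>
#include <Psapi.h>
#include <stdio.h>

static void PrintWorkingSet(const char *pWhen)
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    ::GetProcessMemoryInfo(::GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("%s: Working Set %zu MB\n", pWhen, pmc.WorkingSetSize / (1024 * 1024));
}

int main()
{
    const size_t Bytes = 2000ull * 1024 * 1024;
    PrintWorkingSet("Before commit");
    char *p = (char *)::VirtualAlloc(nullptr, Bytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (p == nullptr) return 1;
    PrintWorkingSet("After commit");  // the commit alone barely changes the working set
    for (size_t i = 0; i < Bytes; i += 4096)
    {
        p[i] = 1;                     // first touch soft faults every 4K page into the working set
    }
    PrintWorkingSet("After touch");   // now the working set is ~2 GB
    return 0;
}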

An easier metric is to sum up all committed memory, which is by definition local to your process. This gives you an upper bound of the physical memory usage. It is an upper bound because some pages have never been touched and are therefore not assigned to any physical memory pages. Besides that, large parts of your application might already be sitting in the page file, which then also does not consume much physical memory. Process Explorer and Process Hacker have no column named Committed memory; as far as I can tell, Committed memory is the same as the Private Bytes reported by these tools, just in case you wonder why there is no Committed Bytes column in them. The definition used by Process Hacker is quite easy: Private Bytes is the memory which can go to the page file. This excludes things like file mappings, which can be read again from the original file on disk and not from the page file, as long as the file mapping was not created with copy on write semantics.

Memory Black Hole – Page File Allocated Memory

That sounds like the working set must always be smaller than the commit size, which is not always the case. A special case is page file allocated memory mapped files. These do not count toward your committed memory. If you touch the memory of a page file backed file mapping, it all goes into your working set. See below for a 2 GB working set where we have only 4 MB of committed memory!

[image]

The code to produce such a strange process allocates a file mapping from the page file by specifying INVALID_HANDLE_VALUE as file handle. If you touch this memory, you get a large shared working set but zero committed memory. Below is a screenshot of the code which produced that allocation:

[image]
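In essence the allocation boils down to this sketch (a minimal reconstruction of the idea, error handling omitted):

#include <Windows.h>
#include <stdio.h>

int main()
{
    const unsigned long long Bytes = 2000ull * 1024 * 1024;
    // INVALID_HANDLE_VALUE means the mapping is backed by the page file, not by a real file
    HANDLE hMap = ::CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                                       (DWORD)(Bytes >> 32), (DWORD)Bytes, nullptr);
    char *p = (char *)::MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, (SIZE_T)Bytes);
    for (unsigned long long i = 0; i < Bytes; i += 4096)
    {
        p[i] = 1; // touching the pages grows the (shared) working set but not the committed memory
    }
    printf("Look at the process in Task Manager/VMMap now.\n");
    ::Sleep(60000); // keep the mapping alive so you can inspect the process
    return 0;
}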

If you look into this process with VMMap from Sysinternals, which is a really great tool to look into any process and its contained memory, you will find that VMMap and Task Manager seem to disagree about the committed memory, because page file backed memory is counted as committed by VMMap but not by Task Manager. Be careful which numbers you look at.

[image]

If you want to look at your whole system you can use RAMMap, also from Sysinternals. It will show page file allocated file mappings as Shareable, just as VMMap does, which gives you a good hint if you have leaked GBs of page file backed memory somewhere.

[image]

ETW Reference Set and Resident Set Tracing

There are a bunch of ETW providers available which can help you find all call stacks which touch committed memory the first time, which then causes the OS to soft page fault physical memory into your working set. In ETW lingo this is called Reference Set. These ETW providers work best with Windows 8 and later. There you can see the call stack for every allocation and page access in the system. This can show you every first page access, but it cannot show the hard page faults due to paged out memory (you can use the Hard Faults graph in WPA for that). Although Reference Set tracing is interesting, it has a very high overhead and produces many events. For every soft page fault which adds memory to any process's working set, an ETW event with the memory type

  • PFMappedSection
  • PageTable
  • PagedPool
  • CopyOnWriteImage
  • KernelStack
  • VirtualAlloc
  • Win32Heap
  • UserStack

and the call stack which caused the soft page fault can be recorded. That results in many millions of events with a lot of big stack traces. On the plus side, it helps a lot if you want to know exactly who touched this memory, but the resulting ETW files are quite big and take a long time to parse. The Reference Set graph was added to WPA with the Windows 10 Anniversary SDK.

Reference Set tracing can be enabled with WPR/UI or xperf where the equivalent xperf command line is

xperf -on  PROC_THREAD+LOADER+HARD_FAULTS+MEMORY+MEMINFO+VAMAP+SESSION+VIRT_ALLOC+FOOTPRINT+REFSET+MEMINFO_WS -stackwalk PageAccess+PageAccessEx+PageRelease+PageRangeAccess+PageRangeRelease+PagefileMappedSectionCreate+PagefileMappedSectionDelete+VirtualAlloc

or you can use the predefined xperf kernel group

xperf -on ReferenceSet

[image]

There is another profile in WPR which is called Resident Set. This creates a snapshot of all processes and where their memory is allocated when the trace session ends. Function wise it is like opening VMMap for all processes at a specific point in time. The ETW events used are largely the same as with Reference Set, minus the call stacks, which reduces the traced data considerably. If you enable Reference Set tracing you also get Resident Set tracing, since Reference Set is a superset of Resident Set. Although WPA displays the resident set at a specific point in time, it needs the ETW events belonging to the actual allocations to be able to assign every memory page its owning process. Otherwise you will only know that large portions of your active memory consist of page file allocated memory, but you do not know which process they belong to.

The xperf command line for Resident Set tracing is

xperf -on PROC_THREAD+LOADER+HARD_FAULTS+MEMORY+MEMINFO+VAMAP+SESSION+VIRT_ALLOC+DISK_IO

or you can use the xperf kernel group

xperf -on ResidentSet

Of these many providers the MEMORY ETW provider is by far the most expensive one. It records every page access and release. But without it you will not get any Reference/Resident Set graphs in WPA. I am not sure if really all events are necessary, but the high amount of data generated by this provider allows recording only very short durations on a busy machine where many allocations and page accesses happen. VirtualAlloc allocated memory is the only exception, which can always be attributed to a specific process via the special page category VirtualAlloc_PreTrace.

[image]

Memory Black Hole – Large Pages

There are more things where Task Manager is lying to you. Have a look at this process:

[image]

How much physical memory is it consuming? 68 KB? Let's look at our free memory while CppEater is running:

[image]

CppEater causes the In use value shown in Task Manager to rise from 5.9 GB to 7.9 GB. Our small application is eating 2 GB of memory, but we cannot see this memory attributed to our process! When the system is behaving strangely, we need to look at memory at the system level. For this task RAMMap is the tool to use.

[image]

Here we see that someone has allocated 2 GB of memory in large pages. I hear you saying: large what? It is common wisdom that on Intel CPUs the natural page size is 4 KB. But for some server workloads it made sense to use larger pages. Usually large pages are 2 MB in size, or multiples of that. Some Xeon CPUs even support large pages of up to 1 GiB. Why should this arcane knowledge be useful? Well, there is one quite common process which employs large pages quite heavily: Microsoft SQL Server. If you happen to see an innocent small sqlserver.exe, and for some reason after your 24 h load test you miss several GB of memory but no process seems to have allocated it, the chances are high that SQL Server has allocated some large pages which look small in Task Manager.

There is a tool to see how much memory SQL Server really uses: VMMap.

[image]

Large pages manifest themselves as Locked WS in VMMap, which shows exactly our lost 2 GB of memory. Is this ridiculously small value in Task Manager a bug? Well, sort of, yes. Trust no tool 100%. Every number you will ever see with regards to memory consumption is wrong or a lie to some extent. Even Process Explorer shows these nonsensical values. I can only speculate how this small value is calculated. It could be that Task Manager simply calculates WS Pages * 4096 bytes/page = Working Set bytes, which cannot account for large pages (the 68 KB would then be just 17 normal 4 KB pages).

A specialty of large pages is that they are never paged out and are allocated immediately when you call VirtualAlloc. This is how SQL Server can grab large amounts of physical memory with one API call and then play operating system with the handed out memory by itself. Here is the sample code to allocate memory with large pages. Large parts are only ceremony, because you need to get the Lock pages in memory privilege before you can successfully call VirtualAlloc with MEM_LARGE_PAGES.

int * LargePageAlloc(unsigned int numberOfBytesToAllocate)
{
    printf("\nLarge Page allocator used.");
    ETWSetMark("Large Page allocator"); // ETWSetMark is the author's helper which writes an ETW marker event (not shown here)

    HANDLE proc_h = OpenProcess(PROCESS_ALL_ACCESS, FALSE, GetCurrentProcessId());

    HANDLE hToken;
    OpenProcessToken(proc_h, TOKEN_ADJUST_PRIVILEGES, &hToken);
    CloseHandle(proc_h);

    LUID luid;
    ::LookupPrivilegeValue(0, SE_LOCK_MEMORY_NAME, &luid);

    TOKEN_PRIVILEGES tp;

    tp.PrivilegeCount = 1;
    tp.Privileges[0].Luid = luid;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

    auto status = AdjustTokenPrivileges(hToken, FALSE, &tp, sizeof(tp), (PTOKEN_PRIVILEGES)NULL, 0);
    CloseHandle(hToken);

    if (status != TRUE)
    {
        printf("\nSeLockMemoryPrivilege could not be aquired. LastError: %d. Use secpol.msc to assign to your user account the SeLockMemoryPrivilege privilege.", ::GetLastError());
        return 0;
    }

    // The requested size must be a multiple of GetLargePageMinimum() (usually 2 MB)
    auto *p = (int *) VirtualAlloc(NULL, numberOfBytesToAllocate, MEM_COMMIT | MEM_RESERVE | MEM_LARGE_PAGES, PAGE_READWRITE);

    if (::GetLastError() == 1314)
    {
        printf("\nNo Privilege held. Use secpol.msc to assign to your user account the SeLockMemoryPrivilege privilege.");
    }

    return p;
}

To try out the sample you need to assign your user or group the Lock pages in memory privilege with secpol.msc, and log out and log in again to make the changes active.

[image]

When you suspect SQL Server memory issues, you should check out http://searchsqlserver.techtarget.com/feature/Built-in-tools-troubleshoot-SQL-Server-memory-usage which contains plenty of good advice on how to troubleshoot unexpected SQL Server memory usage.

One of the first things you should do is to configure the minimum and maximum memory you want SQL Server to have. See https://technet.microsoft.com/en-us/library/ms180797(v=sql.105).aspx for more information. Otherwise SQL Server will sooner or later use up all of your memory. If you run out of physical memory, a low memory condition event is raised, which can cause SQL Server to flush its caches completely, which in turn can cause query timeouts. It is therefore important to set reasonable values for min/max SQL Server memory and to test them before going into production.
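A sketch of how min/max server memory can be set with T-SQL (the values are just examples):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory', 4096;  -- MB
EXEC sp_configure 'max server memory', 8192;  -- MB
RECONFIGURE;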

If you are after a SQL Server memory leak you can query the SQL Server diagnostics tables directly like this:

SELECT
  count(*),
  sum(pages_in_bytes) / 1024.0 / 1024.0 AS 'Mem in MB',
  type
FROM [master].[sys].[dm_os_memory_objects]
GROUP BY type
ORDER BY sum(pages_in_bytes) DESC

to query your SQL Server for allocated objects, which can give you a hint what is being leaked. Quite a few memory leaks of SQL Server are known. It therefore makes a lot of sense to stay at the latest SQL Server patch level if you tend to lose memory to unknown black holes after weeks of operation. When you look at a real SQL Server process

image

you can pretty safely assume that its working set is a lie. Instead you can use the commit size as a good approximation of its current physical memory usage, because nearly all SQL Server memory is stored in large pages which cannot be paged out. This is only true if SQL Server has been granted the privilege to lock pages.

Windows 10 Memory Compression

It is time to come back to the original headline. One of the big improvements of Windows 10 is that it compresses memory before writing it into the page file. With the Anniversary edition of Windows 10 this feature has become more visible in Task Manager, where the amount of compressed memory is now also displayed.

image

Another change of the Anniversary Update is that originally the System process owned all of the compressed pages. MS decided that too many users were confused by the large memory footprint of the System process, because it held all of the compressed memory in its working set. Now another hidden process owns all of the compressed memory, which shows up in Process Explorer/Hacker under the name Memory Compression. It is a child of the System process; in ETW traces it is called MemCompression. These caches are therefore not visible in the process list of Task Manager, except for the (Compressed) number in the overview which tells you how much working set the Memory Compression process currently has.

Compression and decompression are performed single threaded in the Memory Compression process. When we look at our original CppEater program and let it run with ETW tracing enabled, we see that the first page touch at every 4 KB for 2 GB of memory takes 366ms. This was measured in Release/x64 on Windows 10 Anniversary on my Intel i7-4770K @ 3,5 GHz.

First page touch:   366ms, 0.183 ms/MB, AllTouched=0
Second page touch:   26ms, 0.013 ms/MB, AllTouched=0
All bytes touch:    314ms, 0.157 ms/MB, AllTouched=1
Flushing working set
After Empty:       4538ms, 2.269 ms/MB, AllTouched=0
Second After Empty: 314ms, 0.157 ms/MB, AllTouched=1

When you look at the details you find that the first page access is slow because of soft page faults. The second and third memory accesses are fast and dominated only by the CppEater process itself. When the working set is trimmed we find that the MemCompression process is doing a lot of single threaded work, which takes ca. 6,3s for 2 GB of memory. The compression speed is therefore about 320 MB/s, which is not bad. After the memory has been compressed and CppEater has slept for some time, it is time to touch our memory again. This time the memory causes hard faults, which are not hard in the usual sense of reading memory back from disk: instead the previously compressed memory must be decompressed. That takes 4,5s, which results in a decompression rate of about 440 MB/s and shows that decompression is roughly 30% faster than compression.

It is justified to call these hard page faults, since the first page access is 12 times slower (= 4538ms/366ms) when we fault back compressed pages.
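If you want to reproduce such numbers, a minimal sketch of the measurement could look like the following. This is not the actual CppEater source; the function name, the sleep duration and the output format are made up for illustration.

#include <windows.h>
#include <psapi.h>   // EmptyWorkingSet; link against psapi.lib
#include <cstdio>

// Touch the first byte of every 4 KB page so each page must be faulted in.
double TouchAllPagesMs(char *p, size_t bytes)
{
    LARGE_INTEGER f, t1, t2;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&t1);
    for (size_t i = 0; i < bytes; i += 4096)
        p[i] = 1;
    QueryPerformanceCounter(&t2);
    return (t2.QuadPart - t1.QuadPart) * 1000.0 / f.QuadPart;
}

int main()
{
    const size_t Bytes = 2ull * 1024 * 1024 * 1024; // 2 GB
    char *p = (char *)VirtualAlloc(NULL, Bytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (p == nullptr)
        return 1; // not enough memory to commit 2 GB

    printf("First page touch:  %.0f ms\n", TouchAllPagesMs(p, Bytes)); // soft page faults
    printf("Second page touch: %.0f ms\n", TouchAllPagesMs(p, Bytes)); // pages are resident

    EmptyWorkingSet(GetCurrentProcess()); // trim: pages move to the Modified list
    Sleep(15000);                          // give MemCompression time to compress

    printf("After Empty:       %.0f ms\n", TouchAllPagesMs(p, Bytes)); // decompression faults
    return 0;
}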

image

The evolution of the working sets of CppEater and the MemCompression process can nicely be seen with Virtual Memory Snapshots. There we see that MemCompression takes the memory from the modified private pages, which are produced by calling EmptyWorkingSet in our process, and puts it into its own working set after it has compressed the memory. We also see that MemCompression ends up at 1.7 GB of memory, which means that for an integer array consisting of 1,2,3,4,… the compression ratio is 0,85, which is okish. The compression is of course much better if large blocks with identical values can be compressed, so this data, while not completely random, is really a stress test for the compressor.

image

You can show the relative CPU costs of each operation nicely with Flame Graphs using the same column configuration.

image

The numbers for first page access (soft page faults) are pretty much consistent with the ones Bruce Dawson measured at https://randomascii.wordpress.com/2014/12/10/hidden-costs-of-memory-allocation/ where he got 175 μs/MB for soft page faults. In reality you will always see a mixture of soft and hard page faults, and now also semi hard faults due to memory compression, which makes the already quite complex world of memory management even more complicated. But these semi hard faults from compressed memory are still much cheaper than reading the data back from the page file.

When Does it Page?

So far all operations were in memory, but when does the page file come into the game? I have made some experiments on my machine and others. Paging (mainly) sets in when you have no physical memory left. When the Active List (in kernel lingo, all used memory except for caches) reaches the installed physical memory of your machine, something dramatic must happen. Used physical memory is not identical with committed memory: if committed memory is never accessed by your application it can stay "virtual" and needs no physical backing. Is it a good idea to commit all of your physical memory? If you look at Task Manager on my 16 GB machine you see that I can commit over 19 GB of memory and still have 1,9 GB available, because not all committed memory pages were ever accessed.

image
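A small sketch makes the difference between committed and physically used memory tangible. This is hypothetical demo code, not taken from CppEater; the 4 GB size is only an example and the exact working set numbers will depend on your machine.

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link against psapi.lib
#include <cstdio>

void PrintWorkingSet(const char *label)
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("%s: Working Set %zu MB\n", label, pmc.WorkingSetSize / (1024 * 1024));
}

int main()
{
    const size_t Bytes = 4ull * 1024 * 1024 * 1024; // commit 4 GB
    char *p = (char *)VirtualAlloc(NULL, Bytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (p == nullptr)
        return 1; // commit limit reached

    PrintWorkingSet("After commit");   // still only a few MB of physical memory used

    for (size_t i = 0; i < Bytes; i += 4096)
        p[i] = 1;                      // now every page needs physical backing
    PrintWorkingSet("After touching");
    return 0;
}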

While I allocate and touch more and more memory, the amount of Available and Cached memory goes towards zero. What does that mean? It means the file system cache has just been flushed out of memory. We all know how slowly the machine reacts right after booting, when the hard disk never seems to come to rest: all dlls need to be loaded from disk because the file system cache is not yet populated. If you want to do your users a favor: never allocate all of the physical memory, but leave 15-20% for the file system cache. Your users will thank you for not ruining the interactive performance too much.

If you still insist on allocating all physical memory then bad things will happen:

image

When no more memory is available, the OS decides that all processes are bad guys and tries to flush out everything to the Modified List (the brown region in the Memory Utilization graph). That is memory which is pending to be written to the page file. This makes sense if one or more processes never access that memory again due to huge memory leaks: the OS can place the leak in the page file and continue working for a much longer time. At this point large amounts of data are written into the page file, and the system becomes unusable: the UI hangs, a simple printf call can take over six seconds, and you experience a sudden system hang. The system is not really hanging, it is just very busy reading and writing data from and to the page file. These high response times of hard disks are the main reason why SSDs are a much better choice for your page file. It is the slow reading from the page file which causes the large slowdowns; in this case the Disk Service Time of 16,3 (see the Disk Service Time column in the Utilization by Disk graph, which was filtered for the page file) can be greatly reduced by placing the page file on an SSD. Since Windows 10 the kernel also tries its best to compress the memory before writing it into the page file, which reduces the slow disk IO by ca. 40%.

When the system is paging, the first memory touch times rise dramatically from ca. 240ms to 1800ms (see the ETW Marks graph), which is over seven times slower, because it takes time to write out old data to make room for new memory allocations.

Cross Process Private Page Sharing

While measuring the effects of memory compression I stumbled upon an interesting effect. Suppose you start e.g. eight processes, where each of them allocates one GB of pseudo random data initialized in a loop with

    pData[i] = rand();

Now each process has one GB of working set and we have allocated 8 GB of physical memory in total. When we trim the working set of each process we should see 8 GB of memory in the working set of the MemCompression process, because random data does not compress well. But instead I saw only about 1 GB of data in the MemCompression process! How can Windows compress random data to 1/8 of its original size? I think it can't. Something else must be happening here: the numbers are only pseudo random, and rand() generates the same sequence of values in every process. We therefore have identical "random" data in all processes in their own private memory.

Now let's change things a bit and initialize the random number generator with a different seed in each process at process start:

    LARGE_INTEGER lint;
    QueryPerformanceCounter(&lint);
    srand( lint.LowPart );

Now we truly get the expected 8 GB working set in the MemCompression process. It seems that Windows maintains an internal hash table of all paged-out pages (it could also be a B-tree) and adds a newly compressed page to the MemCompression process only if it has not been encountered before. A very similar technique is employed by hypervisors, where it is called transparent page sharing between VMs.
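Here is a compact sketch of the experiment. This is hypothetical code, not the actual CppEater source; the command line handling is made up. Started without an argument, all instances produce identical pages; started with any argument, every instance gets unique data.

#include <windows.h>
#include <psapi.h>   // EmptyWorkingSet; link against psapi.lib
#include <cstdlib>

int main(int argc, char **argv)
{
    if (argc > 1) // any argument: seed differently per process -> unique pages
    {
        LARGE_INTEGER lint;
        QueryPerformanceCounter(&lint);
        srand(lint.LowPart);
    }

    const size_t N = 1024ull * 1024 * 1024 / sizeof(int); // 1 GB of ints
    int *pData = (int *)VirtualAlloc(NULL, N * sizeof(int), MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (pData == nullptr)
        return 1;

    for (size_t i = 0; i < N; i++)
        pData[i] = rand(); // same sequence in every process unless seeded

    EmptyWorkingSet(GetCurrentProcess()); // force the pages towards MemCompression
    Sleep(INFINITE);                      // keep the process alive for observation
    return 0;
}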

Below you see the working set of the MemCompression process when CppEater is first started with no random seed, so all duplicate pages can be combined. The green bars mark the time from flushing the working set until the CPU consumption of MemCompression was flat again. In the second run CppEater was started with a unique seed for the random number generator. No page sharing can happen because all pages are different now, and we see the expected eightfold increase in the MemCompression working set when we force the working sets of the CppEater instances to be paged out.

image

It is interesting that this vastly different behavior can easily be seen in the kernel memory lists. The working set of MemCompression simply counts towards the Active List. The only difference is that after the flush of the working sets the Active List memory stays much larger than in the first case (blue bar).

Conclusions – Is EmptyWorkingSet Good?

In general it is (nearly) never a good idea to trim your own working set. The mental model that you are simply giving physical memory back to the OS, which then dutifully writes it into the page file, is simply wrong. You are not releasing memory; you are filling up the Modified List buffers, which are flushed out to disk from time to time. If you do this a lot it can cause so much disk write load, together with some random page-in activity, that your system comes to a sudden halt while you still have plenty of physical memory left. I have seen 128 GB servers which never got above 80 GB of active memory because the system was so busy paging out memory due to forced calls to EmptyWorkingSet that it behaved as if it had already reached the physical memory limit, with the usual large user-visible delays caused by paging. Once we removed the offending calls to EmptyWorkingSet, both the user perceived performance and the memory utilization became much better.

Other people have also tried SetProcessWorkingSetSize to reduce memory consumption, but failed as well due to application responsiveness issues. It took much longer to bring all the loose ends together into an article than I initially anticipated. If I have missed important things or you have found an error, please drop me a note and I will update the article.

New Beta of Windows Performance Toolkit

With the release of the first Windows Anniversary SDK beta a new version of the Windows Performance Toolkit was shipped as well. If you want to look for the changes, here are the most significant ones. I am not sure if the version number of the beta will change, but it seems to target 10.0.14366.1000.

image

Stack Tags for Context Switch Events

Now you can assign tags to wait call stacks, which is a big help if you want to tag the reasons why your application is waiting for something. Here is a sample snippet of common sources of waits.

<Tag Name="Waits">
    <Tag Name="Thread Sleep">
         <Entrypoint Module="ntdll.dll" Method="NtDelayExecution*"/>
    </Tag>
    <Tag Name="IO Completion Port Wait">
         <Entrypoint Module="ntoskrnl.exe" Method="IoRemoveIoCompletion*"/>
    </Tag>
    <Tag Name="CLR Wait" Priority="-1">
         <Entrypoint Module="clr.dll" Method="Thread::DoAppropriateWait*"/>
    </Tag>
    <Tag Name=".NET Thread Join">
         <Entrypoint Module="clr.dll" Method="Thread::JoinEx*"/>
    </Tag>
    <Tag Name="Socket">
         <Tag Name="Socket Receive Wait">
               <Entrypoint Module="mswsock.dll" Method="WSPRecv*"/>
         </Tag>
         <Tag Name="Socket Send Wait">
               <Entrypoint Module="mswsock.dll" Method="WSPSend*"/>
         </Tag>
         <Tag Name="Socket Select Wait">
               <Entrypoint Module="ws2_32.dll" Method="select*"/>
         </Tag>
    </Tag>
    <Tag Name="Garbage Collector Wait">
         <Entrypoint Module="clr.dll" Method="WKS::GCHeap::WaitUntilGCComplete*"/>
    </Tag>
</Tag>
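If I remember the menu location correctly, such a stack tag file can be loaded in WPA via the Trace menu (Trace Properties), after which the tags show up as a Stack Tag column in the wait analysis views.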

That feature is actually already part of the Windows 10 Update 1 WPA (10.0.10586.15, th2_release.151119-1817), but I have only just come across it now. It makes hang analysis a whole lot easier. A really new feature, though, are Flame Graphs, which Bruce Dawson has wanted built into WPA for a long time. I wrote a small tool to generate them some time ago (Visualize Your Callstacks Via Flame Graphs), but it never got much traction.

image

If you want to see e.g. why your threads are blocking, the context switch view now gives you a nice overview of the blocking reasons and of which thread unblocked your threads most often. You can stack two wait views on top of each other, drill down to the relevant time range where something is hanging, and get to conclusions much faster.

image

Another nice feature: if you select a region in the graph while holding down the Ctrl key, you can select and highlight multiple regions at a time, which is useful if you frequently need to zoom into different regions and move between them. The current flame graphs cannot really display deep call stacks, because the resulting flames become so small that you have no chance to select them. Zooming with Ctrl and the mouse wheel only zooms into the time region, which for a flame graph is perhaps not what I want. I would like a zoom that makes parts of the graph readable again.

Symbol loading has got significantly faster and it seems also to crash less often which is quite surprising for a beta.

My Presets Window for managing presets

Another new feature is the My Presets window, which is useful if you work with custom WPA profiles. From there you can pick a graph from any of the loaded profiles and simply add it to your current view.

image


image


Reference Set Tracing

WPR now also supports Reference Set analysis along with a new graph in WPA. This basically traces every page access and release in the system which allows you to track exactly how your working set evolved over time and why.

image

Since page in/out operations happen very frequently, this is a high volume provider. With xperf you can enable it with the REFSET keyword. The xperf documentation about it only tells you

REFSET         : Support footprint analysis

but until now no tool was publicly available to decode the events in a meaningful manner. I still do not fully understand how everything works together there, but it looks powerful for deeply understanding when something is paged in or out.
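Beyond that one liner I have not found documentation, but a recording session could look roughly like this (REFSET as documented above; PROC_THREAD and LOADER are the usual kernel flags for process and module resolution, and the exact flag combination is my assumption):

xperf -on PROC_THREAD+LOADER+REFSET
... run your scenario ...
xperf -d refset.etl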

image

TCP/IP Graph

For a long time WPA knew nothing about the network. That is changing now, a bit.

image

ACK delays are nice, but as far as I understand TCP, the receiving side is free to delay the ACK until the server application has the data ready. If you see high ACK delay times you cannot point directly at the network; you still need to investigate whether something was going on at the server as well. For network issues I still like Wireshark much more, because it understands not only raw TCP but also the higher level protocols. For a network delay analysis, TCP retransmit events are the most important ones. The next thing to look at are the packet round trip times (RTT), where Wireshark does an incredibly good job. I have seen some ETW events in the kernel which have an SRTT (Sample RTT) field, but I do not know how significant that actually is.


Load files from Zip/CAB

image

You no longer need to extract the etl files: WPA can open them directly, which is great if you deal with compressed files a lot.

The Bad

  • The new WPT no longer works on Windows 7, so beware.
  • For the WPR UI I have mixed feelings, because it is not really configurable and records too much. The recorded amount of data easily exceeds one GB on a not very busy machine if I enable CPU, Disk, File and Reference Set tracing. The beta version also records Storeport traces for the Disk profile, which are only useful if you suspect bugs in your hard disk firmware or in special hard disk drivers. If I enable context switch tracing with call stacks, I usually do not need the call stacks for every file/disk operation, since I will find them anyway in the context switch traces at that time. If VirtualAlloc tracing is enabled, the stack traces for the free calls are recorded as well, which are seldom necessary: double frees are rarely the issue, memory leaks are much more common, and for those the allocation call stacks are the only relevant ones.
  • The improvements to the ETW infrastructure in Windows 8.1, which support filtering of specific events or recording stack traces only for specific events, have made it into neither WPR nor xperf. I really would like to see more feature parity between what the kernel supports and what xperf allows me to configure, to reduce the sometimes very high ETW event rates to something manageable.
  • Currently xperf can start only two kernel trace sessions (NT Kernel Logger and Circular Kernel Context Logger) but not a generic kernel tracing session, of which one can have up to eight since Windows 8. Besides this, I am not sure if setting the profiling frequency is even possible for a specific kernel session.

Conclusions

The new version has many improvements which can help a lot to gain new insights into your system in ways that were not possible before. Flame Graphs look nice, and I hope the final version makes it possible to zoom in somehow.

The WPA viewer is really a great tool and has gained a lot of new features, which shows that more and more people are using it.