January 26

NBench Testing – Memory Allocations


Full source code is available here

This post describes how to create a memory test using the NBench framework. If you are interested in measuring performance, check out the previous post, NBench Performance Testing – Code Throughput.

As an example, we are going to measure the amount of memory required by a Dictionary<int, int>.

Creating a test is very easy. You need to decorate the test method with the PerfBenchmark attribute using RunMode.Iterations and TestMode.Test. You then need to add a MemoryAssertion attribute to perform assertions against the total number of bytes allocated.

The following test tries to add a single entry to the dictionary and asserts that the total bytes allocated are less than 100. Considering that the dictionary needs some space for its internal structures, this test seems reasonable.

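The original post showed the code as a screenshot; a minimal sketch of that test, with illustrative class and method names, might look like this:

    using NBench;
    using System.Collections.Generic;

    public class DictionaryMemoryBenchmarks
    {
        [PerfBenchmark(RunMode = RunMode.Iterations, NumberOfIterations = 3,
            TestMode = TestMode.Test)]
        [MemoryAssertion(MemoryMetric.TotalBytesAllocated, MustBe.LessThan, 100)]
        public void Add_SingleEntry()
        {
            // Add one entry and assert that fewer than 100 bytes were allocated.
            var dictionary = new Dictionary<int, int>();
            dictionary.Add(1, 1);
        }
    }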

If we run this test using the NBench.Runner, however, we can see that the test fails. The total memory used is 8192 bytes!

[Screenshot: the failed NBench memory test result]

This happens because, under the covers, the operating system allocates memory in pages. In fact, 8192 bytes is exactly the size of a page (8 KB).

Here is what the NBench documentation says about this issue.

Memory is typically allocated in pages, i.e. the OS might allocate 8kb when all you need is 12 bytes, that way it doesn’t have to constantly allocate lots of small segments of memory to the process, so to get best results it’s recommended that you write your memory benchmarks such that you allocate a large number of the objects you want to benchmark. That helps average out the noise produced by this allocation strategy on the part of the OS.

So we have now learned that it is better to test memory usage by adding a lot of elements to the dictionary instead of just one.

Let’s try to test the memory usage of a dictionary after adding a million elements.

I start by defining some constants.

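The constants were shown as a screenshot; based on the prose, they presumably look something like this:

    private const int NumberOfAdds = 1000000;

    // 24 bytes per entry leaves some headroom over the estimated
    // 20 bytes (4 key + 4 value + 12 internal), so about 24 MB in total.
    private const int MaxExpectedMemory = NumberOfAdds * 24;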

NumberOfAdds is one million and MaxExpectedMemory is about 24 megabytes, which is the maximum expected memory usage for this scenario. That is equivalent to the size of a million dictionary entries, where an entry occupies about 20 bytes (4 bytes for the key, 4 bytes for the value and 12 bytes for internal data structures). We use 24 bytes per entry to relax the memory requirement.

Let’s write the test.

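A sketch of the test, reusing the constants above (the method name is illustrative):

    [PerfBenchmark(RunMode = RunMode.Iterations, NumberOfIterations = 3,
        TestMode = TestMode.Test)]
    [MemoryAssertion(MemoryMetric.TotalBytesAllocated, MustBe.LessThanOrEqualTo, MaxExpectedMemory)]
    public void Add_MillionEntries()
    {
        // Populate the dictionary with one million entries.
        var dictionary = new Dictionary<int, int>();
        for (var i = 0; i < NumberOfAdds; i++)
        {
            dictionary.Add(i, i);
        }
    }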

Unfortunately, the test still fails. The dictionary uses more than 41 megabytes of memory, which is higher than the expected 24 megabytes.

[Screenshot: the failed memory test result]

Interestingly, we can make the test pass by setting the initial capacity of the dictionary to the number of add operations.

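The only change is passing the capacity to the constructor; a sketch:

    [PerfBenchmark(RunMode = RunMode.Iterations, NumberOfIterations = 3,
        TestMode = TestMode.Test)]
    [MemoryAssertion(MemoryMetric.TotalBytesAllocated, MustBe.LessThanOrEqualTo, MaxExpectedMemory)]
    public void Add_MillionEntries_WithCapacity()
    {
        // Pre-sizing the dictionary avoids all internal array resizes.
        var dictionary = new Dictionary<int, int>(NumberOfAdds);
        for (var i = 0; i < NumberOfAdds; i++)
        {
            dictionary.Add(i, i);
        }
    }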

The test is now passing. The total amount of memory used is a bit more than 23 megabytes. This is impressive: we cut the memory usage nearly in half simply by specifying a capacity for the dictionary! This is, by the way, a recommended practice if you know in advance the number of elements you are going to add to a dictionary (now you know why).

[Screenshot: the passing memory test result]

Creating memory tests using NBench is simple, and it helps you learn how your code and third-party code allocate memory. The ability to write assertions is a great way to set a benchmark and avoid regressions.

If you don’t want to write assertions but are only interested in measuring the memory allocations of a block of code, NBench lets you do that using the MemoryMeasurement attribute and TestMode.Measurement. In this case, NBench will only report the memory usage in the log, and it is up to you to use this data to visualize trends.

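A sketch of the measurement variant; note the changed test mode and attribute:

    [PerfBenchmark(RunMode = RunMode.Iterations, NumberOfIterations = 3,
        TestMode = TestMode.Measurement)]
    [MemoryMeasurement(MemoryMetric.TotalBytesAllocated)]
    public void Add_MillionEntries_Measurement()
    {
        // No assertion here: NBench simply records the allocated bytes in its report.
        var dictionary = new Dictionary<int, int>(NumberOfAdds);
        for (var i = 0; i < NumberOfAdds; i++)
        {
            dictionary.Add(i, i);
        }
    }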

At the time of writing, TotalBytesAllocated is the only memory metric available.

Why does setting the capacity decrease memory usage?

This is outside the scope of the post, but I guess you are curious to understand why setting the capacity decreases memory usage.

The dictionary class uses an array under the covers that is automatically resized when there is no space for new elements. The new size is the smallest prime number that is higher than twice the old size. This strategy reduces the number of costly internal resizes, at the cost of using more memory than is actually needed (the framework can’t read your mind).

Here is the code in the Dictionary class that does that.

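The screenshots showed Dictionary.Resize, HashHelpers.ExpandPrime, HashHelpers.GetPrime and the precomputed primes table. A simplified sketch of the growth logic (not the exact framework source):

    // The framework keeps a precomputed table of primes; only a few are shown here.
    private static readonly int[] Primes =
    {
        3, 7, 11, 17, 23, 29, 37, 47, 59, 71,
        /* ... */ 1162687, /* ... */ 2009191, /* ... */
    };

    private static int ExpandPrime(int oldSize)
    {
        // Grow to the smallest prime that is at least twice the old size.
        return GetPrime(2 * oldSize);
    }

    private static int GetPrime(int min)
    {
        foreach (var prime in Primes)
        {
            if (prime >= min)
                return prime;
        }
        // The real implementation computes primes beyond the table instead of failing.
        throw new InvalidOperationException("min is larger than the table covers");
    }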

So the size of the internal array at the point of adding the last element will be one of those prime numbers, probably 2009191, since it takes about 40 megabytes of space (2009191 * 20 bytes) to allocate that number of elements (this is by deduction, as this information is not exposed by the Dictionary class for easy checking at run time).

If you provide an initial capacity instead, the internal array is created up front so that no resizes will occur. Its initial size will be the smallest prime number that is higher than the provided capacity: that is 1162687, which at an average of 20 bytes per entry comes to about 23 megabytes of memory.

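The Initialize method shown in the screenshot sizes the backing arrays once, using the smallest suitable prime. Roughly (a simplified sketch, not the exact framework source):

    // Simplified sketch of Dictionary<TKey, TValue>.Initialize.
    private void Initialize(int capacity)
    {
        int size = HashHelpers.GetPrime(capacity); // 1162687 for a capacity of 1000000
        buckets = new int[size];
        entries = new Entry[size];
    }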
