Diagnosing Low Test Run Rates

A common fuzzing performance metric is the number of tests run per second for a target binary, often reported as "executions per second" in tools such as AFL and libFuzzer.

Info

A higher number of tests run per second indicates better performance.

The number of tests run per second is program-specific; however, Mayhem alerts the user to potential performance issues with a range of color-coded thresholds that indicate target execution performance.

(Screenshot: the Tests Run Per Second metric in the Mayhem UI)

Mayhem will indicate the performance thresholds for the Tests Run Per Second metric via the following colors:

  • Red: Tests run per second is less than or equal to 1.
  • Orange: Tests run per second is greater than 1 and less than 100.
  • Black: Tests run per second is 100 or greater.

Improving Tests Run Per Second

Many factors influence the number of tests run per second, including the computational resources (such as CPU and RAM) given to the Mayhem instance and the inherent computational complexity of the target binary.

In particular, when a Mayhem target is under-performing with a low tests-run-per-second metric, users should perform the following diagnostic checks on their target binaries (where applicable):

Note

When Mayhem tests an application, it uses fuzzing to generate thousands (or even millions) of test cases that are sent as input to the compiled binary, which is executed continually until the specified Mayhem run duration elapses. Code optimizations to a target binary can therefore have compounding effects on the number of tests run per second for a Mayhem run.

Remove sleep or wait Functions

Explicit use of sleep or wait functions within compiled applications deliberately delays execution, thereby reducing the number of test runs per second that Mayhem can perform.

Note

If your low test runs per second are caused by sleep or wait functions, you must remove these calls first; no other optimization in this guide will help, as these functions explicitly and deliberately delay execution.

Therefore, our recommendation is to:

  1. Remove calls to sleep and wait functions in the target source code, as these delay execution.
  2. Re-compile the binary application.
  3. Re-execute the Mayhem run for the newly compiled target that no longer calls sleep and wait functions.

Tip

It may be helpful to maintain build settings that produce an executable optimized for fuzzing, with sleep functionality disabled or reduced.
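The build-settings approach above can be sketched in C. The `FUZZING` macro and the `retry_connect` function below are hypothetical; the macro is defined in-line only to keep the snippet self-contained, whereas a real build would pass it on the compiler command line (e.g. `cc -DFUZZING ...`):

```c
#include <unistd.h>

/* FUZZING would normally be passed on the compiler command line;
 * it is defined here only so the snippet is self-contained. */
#define FUZZING 1

#ifdef FUZZING
#define BACKOFF(seconds) ((void)0)        /* no delay in fuzzing builds */
#else
#define BACKOFF(seconds) sleep(seconds)   /* real delay in normal builds */
#endif

/* Hypothetical retry loop: in a normal build each attempt waits one
 * second; in a fuzzing build the loop runs at full speed. */
int retry_connect(int attempts) {
    int tries = 0;
    while (tries < attempts) {
        tries++;
        BACKOFF(1);
    }
    return tries;
}
```

With this pattern, the same source file serves both production and fuzzing builds, and the delay never reaches the fuzzed executable.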

Remove Non-essential Functionality/Extra Computation

If portions of a target's source code are unnecessary for fuzzing, users can create a test harness to narrow the testing scope and shorten execution time, which in turn increases the number of tests run per second.

Therefore, our recommendation is to:

  1. Determine the functions or parts of the source code that are relevant for fuzzing.
  2. Create a test harness to fuzz those specific functions or portions of the source code.
  3. Execute a Mayhem run using the test harness.

Tip

Using a libFuzzer harness can significantly reduce the process execution time and therefore increase the number of test runs per second. Check out the libFuzzer Harnessing Lab in our tutorials for more info!
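As an illustration, a minimal libFuzzer harness for a hypothetical `parse_header` function might look like the following (the function and its format are invented for this sketch; only `LLVMFuzzerTestOneInput` is the real libFuzzer entry point):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical function under test: parses a 4-byte header and
 * returns the encoded version number, or -1 on malformed input. */
int parse_header(const uint8_t *data, size_t size) {
    if (size < 4)
        return -1;
    if (data[0] != 'M' || data[1] != 'H')
        return -1;
    return (data[2] << 8) | data[3];
}

/* libFuzzer entry point, called once per generated input.
 * Build with: clang -fsanitize=fuzzer harness.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0;  /* non-crashing inputs always return 0 */
}
```

Because the harness exercises only the parsing routine, each iteration skips program start-up, argument handling, and any unrelated subsystems, which is where most of the speedup comes from.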

Minimize Disk or Network I/O

An application that performs I/O (input/output) to disk or network locations waits for those operations to complete before continuing execution. Instances of file I/O can therefore cause longer than expected wait times and reduce the number of test runs per second. Logging, for example, is a great candidate for removal in a build optimized for fuzzing.

Info

In particular, extra file I/O for a Mayhem target can cause the fuzzing container to restart after exceeding its in-memory tmpfs, creating a slowdown and ultimately decreasing the number of tests run per second. Files other than the input file are not reset between fuzzing iterations, so extra file I/O can cause these restarts to happen more frequently than is desirable.

Therefore, our recommendation is to:

  1. Minimize areas in the source code that perform disk or network I/O (where possible).
  2. Re-compile the binary application.
  3. Re-execute the Mayhem run for the newly compiled target with minimal disk or network I/O.
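One common pattern for step 1, sketched below with hypothetical names, is to compile logging out of fuzzing builds so each test iteration avoids disk writes entirely (`FUZZING` is defined in-line here only to keep the snippet self-contained; a real build would pass `-DFUZZING` to the compiler):

```c
#include <stdio.h>

/* Defined in-line so the snippet is self-contained; a real build
 * would pass -DFUZZING to the compiler instead. */
#define FUZZING 1

#ifdef FUZZING
#define LOG(msg) ((void)0)  /* no disk I/O while fuzzing */
#else
#define LOG(msg) do {                           \
        FILE *f = fopen("app.log", "a");        \
        if (f) { fputs((msg), f); fclose(f); }  \
    } while (0)
#endif

/* Hypothetical work function: writes a log entry in normal builds,
 * but performs no file I/O in a fuzzing build. */
int process_value(int value) {
    LOG("processing value\n");
    return value * 2;
}
```

The program's behavior under test is unchanged; only the side-channel disk writes are removed, which also avoids accumulating log files in the fuzzing container's tmpfs.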

Tip

If you are running Mayhem on-premise and on your own hardware, the hardware itself may in fact be the limiting factor; local disk speed (HDD vs. SSD) or network speed (Mbps) may be contributing to longer wait times. Thus, if you suspect this may be an issue, contact your site administrator.

Conclusion

Here we've provided a few tips for diagnosing and increasing the number of tests run per second for your affected targets. The list above covers some of the more common causes of low test runs per second; for targets that are still running slowly, more in-depth performance analysis may be needed.