Identifying Buggy Endpoints

Mayhem reports every issue it finds along with the input that triggers it. In this chapter, we describe the different ways you can review those issues.

Web UI

Mayhem uploads findings from each API testing run to the ForAllSecure cloud service. Sign in to the UI at https://app.mayhem.security to dig into the details of each run, including any issues found as well as per-endpoint statistics such as response code distribution and latency percentiles. Here's what the web UI looks like:

[Screenshot: detailed report in the web UI]

HTML Reports

Mayhem can generate static, local HTML reports using the --html argument to mapi run. These reports contain essentially the same information as their web UI counterparts. They can be saved as "build artifacts" in a CI system. Here's what an HTML report looks like:

[Screenshot: detailed static HTML report]

JUnit Reports

Mayhem can generate industry-standard, CI-friendly JUnit reports with the --junit argument to mapi run. These reports contain all of the detailed issues found (though not the additional statistical details from the web and HTML reports) in a machine-readable form, which most CI software can consume and render natively.

By default, only issues with Error-level severity are included in the JUnit report. To also include Warning-level issues, pass the --warnaserror flag when calling mapi run: mapi run ... --warnaserror --junit <test-output.xml>
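Because JUnit reports are just XML, they are also easy to inspect outside of CI. Here's a minimal sketch in Python, assuming the standard JUnit schema and a placeholder filename (neither is Mayhem-specific), that summarizes the failing test cases:

```python
# Minimal sketch: summarize a JUnit XML report produced by `mapi run --junit`.
# Assumes the standard JUnit schema (<testsuite>/<testcase>/<failure>); the
# filename is a placeholder.
import xml.etree.ElementTree as ET

tree = ET.parse("test-output.xml")

failures = []
for case in tree.iter("testcase"):
    for problem in case.findall("failure") + case.findall("error"):
        failures.append((case.get("name"), problem.get("message")))

print(f"{len(failures)} failing test case(s)")
for name, message in failures:
    print(f"- {name}: {message}")
```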

SARIF Reports

Mayhem can also generate SARIF files with the --sarif argument to mapi run. If you use GitHub, SARIF files are a great way to upload issues there. See our GitHub Integration section for more details.

[Screenshot: Mayhem API testing issues in GitHub]

SARIF files can also be loaded in VSCode using a SARIF extension from Microsoft:

[Screenshot: Mayhem API testing issues in VSCode]
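If you'd rather triage findings locally without either integration, SARIF files are plain JSON. Here's a minimal sketch in Python that lists each result with its location, assuming a placeholder filename and the standard SARIF 2.1.0 keys:

```python
# Minimal sketch: print the issues in a SARIF file produced by `mapi run --sarif`.
# The filename is a placeholder; the keys below follow the SARIF 2.1.0 schema.
import json

with open("mapi.sarif") as f:
    sarif = json.load(f)

for run in sarif.get("runs", []):
    for result in run.get("results", []):
        message = result.get("message", {}).get("text", "")
        for location in result.get("locations", []):
            physical = location.get("physicalLocation", {})
            uri = physical.get("artifactLocation", {}).get("uri", "<unknown>")
            line = physical.get("region", {}).get("startLine", "?")
            print(f"{uri}:{line}: {message}")
```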

Info

To give accurate file and line information for each issue, Mayhem requires your API to include stack traces in the body of "500 Internal Server Error" responses. Only do that in dev and testing, never in production!
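As an illustration, here's a minimal sketch of what that might look like in a Flask API. The framework, the INCLUDE_STACKTRACE environment variable, and the handler are our own assumptions, not something Mayhem prescribes:

```python
# Minimal sketch: return stack traces in 500 responses, but only when an
# explicit opt-in flag is set in the test environment. Flask and the
# INCLUDE_STACKTRACE variable are illustrative choices, not mapi requirements.
import os
import traceback

from flask import Flask

app = Flask(__name__)

@app.errorhandler(Exception)
def unhandled_exception(exc):
    if os.environ.get("INCLUDE_STACKTRACE") == "1":  # dev & testing only!
        body = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
        return body, 500, {"Content-Type": "text/plain"}
    return "Internal Server Error", 500
```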

Debugging Tools

Of course, having Mayhem find an issue is, in some ways, just the start. Isolating, reproducing and ultimately fixing issues is hard work!

Here are a few approaches we hope will make it easier. (These are all completely independent of Mayhem.)

API Logs

The simplest solution is to look through your API logs when a 500 is triggered. Logs often include stack traces pinpointing which line of code caused the issue. If you have centralized logging infrastructure, with Elasticsearch or Splunk for instance, you'll likely be able to filter the logs by response code to see only 500 responses.

The fuzzer-generated requests will contain the following headers, which can help you look through your logs and correlate log events with specific fuzzing inputs:

  • User-Agent: mapi/X.X.X, where X.X.X is the mapi CLI version.
  • X-Mapi-Program-UUID: <UUID>, where UUID corresponds to a fuzzing input. This header allows you to correlate buggy requests in the mapi reports with your API logs, as in the sketch below.
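As an illustration, here's a minimal sketch of capturing that header in a Flask API's logs. Flask and the logger setup are our own assumptions; only the header name comes from the list above:

```python
# Minimal sketch: log the fuzzer's correlation ID on every incoming request so
# that a 500 in the logs can be matched to a specific input in the mapi report.
import logging

from flask import Flask, request

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api")

app = Flask(__name__)

@app.before_request
def log_mapi_correlation_id():
    mapi_uuid = request.headers.get("X-Mapi-Program-UUID")
    if mapi_uuid:
        log.info("mapi request %s %s program-uuid=%s",
                 request.method, request.path, mapi_uuid)
```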

Error Monitoring Tools

We believe that installing an error monitoring tool in your API is the best way to dig into bugs. These tools can monitor not only your running APIs, but also other backend systems such as other microservices, background workers, and so on. Those backend systems might error out without the API ever returning a 500 response, and error monitoring tools can surface issues that the fuzzer triggered but could not detect.

Error monitoring tools detect errors, gather as much context as possible, group similar errors together, and make them available to you through their UI, which looks something like this for sentry.io:

[Screenshot: the Sentry UI]

Plenty of third-party companies offer great error monitoring tools:

  • Sentry, which we use ourselves
  • Rollbar
  • New Relic
  • Airbrake
  • BugSnag
  • ... and many others!
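Setup is typically a few lines of initialization in the service under test. As a minimal sketch, wiring up Sentry's Python SDK might look like this (the DSN environment variable and the environment name are placeholders, and the other tools listed above are configured similarly):

```python
# Minimal sketch: initialize Sentry's Python SDK in the API under test.
# SENTRY_DSN and the "fuzzing" environment name are placeholders.
import os

import sentry_sdk

sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    environment="fuzzing",    # keep fuzzing noise separate from production events
    traces_sample_rate=0.0,   # error tracking only; no performance tracing needed
)
```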

Odds are your organization is already using such a tool in production. Leverage it for fuzzing and it will alert you whenever the fuzzer finds an issue, with plenty of context, such as the stack trace, to help you understand it, perhaps even before the bug ever reaches production!