Monthly Archives: June 2013

New in Dradis Pro v1.7

Today we have pushed a new version of Dradis Professional Edition: Dradis Pro v1.7. This is the result of eight months of hard work, a bit longer than usual, but the release is packed with lots of handy improvements.

Here are some changes:

  • New Issue/Evidence architecture: read about why this is a big deal.
  • New all-in-one view (more below).
  • New “by host” and “by issue” reporting (more below).
  • New default project / report template: to make it easy for you to build on top of it.
  • New interface to import Issues from external sources.
  • New Qualys upload plugin.
  • Updated plugins
    • Burp upload
      • Generates Issue/Evidence
      • Is orders of magnitude faster.
      • Integrates with the Plugin Manager.
    • MediaWiki import is now compatible with versions 1.14 to 1.21
    • Nessus upload generates Issue/Evidence
    • Nexpose upload generates Issue/Evidence
  • Updates and internal improvements:
    • Updated to Rails 3.2.13
    • Improved code block and table styling

All-in-one view

Notes, issues and attachments all in a single place:

A screenshot showing note contents, issues and attachments in one page

And an improved interface to import from external sources:

Screenshot of the new one-click importer

And of course, you also get Dradis’ Smart Refresh goodness:

More screenshots

“By host” and “By issue” reporting

We have discussed multiple times how providing a useful deliverable is part of what makes a pentest firm great. With this release of Dradis Pro we’re introducing even more flexibility to our reporting engine.

A screenshot of the new by-host report output

It is now possible to write an issue description once and associate it with multiple hosts. Then, in your report, you can either present each issue along with all the affected hosts (and associated evidence), or the other way round: a host-by-host summary where you list each host in scope along with all the issues that affect it.

Another screenshot of the by-host report output

This flexibility is what saves our users 2 hours of reporting time in every project.
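If you are curious about what “the other way round” means in practice, here is a minimal sketch in plain Python (purely illustrative, not Dradis Pro’s actual reporting engine or data model) of how the same set of Evidence records can be grouped once per issue and once per host:

```python
from collections import defaultdict

# Hypothetical evidence records: one issue description linked to many hosts.
evidence = [
    {"issue": "Outdated OpenSSH", "host": "10.0.0.1", "proof": "banner: OpenSSH 4.3"},
    {"issue": "Outdated OpenSSH", "host": "10.0.0.2", "proof": "banner: OpenSSH 5.1"},
    {"issue": "Weak SSL ciphers",  "host": "10.0.0.1", "proof": "sslscan output"},
]

by_issue = defaultdict(list)   # issue -> list of (host, proof)
by_host = defaultdict(list)    # host  -> list of (issue, proof)
for item in evidence:
    by_issue[item["issue"]].append((item["host"], item["proof"]))
    by_host[item["host"]].append((item["issue"], item["proof"]))

# "By issue" view: each issue once, with all affected hosts and evidence.
for issue, hits in by_issue.items():
    print(issue)
    for host, proof in hits:
        print(f"  {host}: {proof}")

# "By host" view: each host in scope, with all the issues that affect it.
for host, hits in by_host.items():
    print(host)
    for issue, proof in hits:
        print(f"  {issue}: {proof}")
```

Either view is derived from the same underlying data, which is what makes it cheap to offer both.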

Still not a Dradis Pro user?

These are some of the benefits you are missing out on:

Read more about Dradis Pro’s time-saving features.

Writing a security report: the elements of a useful pentest deliverable

We have discussed that the security report produced at the end of the engagement is a key component in proving your worth to your current and future clients.

When crafting a pentest report you’ll not only have to think about what to include (sections, contents, tables, stats) but also decide how to write it. Let’s review what it takes to create a useful pentest report.

We are not talking about the specifics or the differences in structure between the deliverables produced for different project types (e.g. VA vs. wifi vs. external infrastructure). We want to provide you with the high-level guiding principles that you can use to ensure that the final security report you produce and deliver to your clients is a useful one.

The recommendations in this piece are based on dozens of report templates that we’ve seen as part of our report customisation service for Dradis Pro as well as our experience in the industry.

The goal of the engagement

The security report produced after each engagement should have a clear goal. In turn, this goal needs to be aligned with the client’s high-level goals. In “Choosing an independent penetration testing firm” we saw how identifying the goals and requirements of an engagement is a real pain point for some clients but also an opportunity for the security firm to provide education and guidance to strengthen the partnership with their customers.

A typical goal as stated by the client could be: “our objective is to secure the information”. This can be a good starting point, albeit a somewhat naive one in all but the simplest cases. These days systems are so complex that assessing the full environment is sometimes not realistically possible (due to time or budget constraints). A more concrete goal such as “make sure that traveller A can’t modify the itinerary or get access to traveller B’s information” would normally produce a better outcome.

However, for the sake of this post, let’s keep it simple and focus on the broader goal of “securing the information”. With that in mind, the goal of the security report needs to be to communicate the results of the test and provide the client with actionable advice they can use to achieve that goal. That’s right, we need to persuade our clients to act upon the results we provide them.

To help your client meet their goals, the more you know about them and their internal structures and processes, the better. Who commissioned the engagement? Why? Is there a hidden agenda? Familiarising yourself with their industry and domain-specific problems will also help to focus your efforts on the right places.

Finally, it is important to know the audience of the deliverable you are producing. This seems like an obvious statement, but there is more to it than meets the eye. Who do you think is going to be reading the report your firm has produced? Is this going to be limited to the developers building the solution (or the IT team managing the servers)? Unlikely. At the very least the development manager or the project lead for the environment will want to review the results. Depending on the size of the company, this person may not be as technical as the guys getting their hands dirty building the system. And maybe this person’s boss or their boss’ boss will be involved. If your results expose a risk that is big enough for the organisation, the report can go up the ladder, to the CSO, CTO or CEO.

One security report, multiple audiences

At the very least it is clear that we could have multiple audiences with different technical profiles taking an interest in your report. If there is a chance that your deliverable will end up making the rounds internally (and the truth is that this is always a possibility), the wrong approach is either to produce a completely technical document full of nitty-gritty details or, at the other end of the spectrum, to deliver a high-level overview summarising the highlights of the test, apt for consumption by C-level execs but lacking in technical depth.

The easiest way to find the middle ground and provide a useful document for both the technically inclined and the business types among your readers is to clearly split the document into sections. Make these sections as independent and self-contained as possible. I like to imagine that different people in the audience will open the document, delete the sections they are not interested in, and still get their money’s worth from what remains.

Problems you don’t want to deal with

Before delving into what to include and how to structure it, there are two problems you don’t want to deal with during the reporting phase of the project: collation and coverage.

Collation

It is still quite common that a sizable amount of the reporting time allocated during a test is spent collating results from different team members.

As we saw in the “Why being on the same page matters?” post, there are steps you can take to minimise the amount of collation work needed, such as using a collaboration tool during the engagement.

Reporting time shouldn’t be collation time. All information must be available to the report writer before the reporting time begins. And it must be available in a format that can be directly used in the report. If your processes currently don’t make this possible, please consider reviewing them as the benefits of having all the information promptly available to the report writer definitely outweigh the drawbacks involved in updating those processes.

Coverage

How good was the coverage attained during the testing phase of the engagement? Was no stone left unturned? Do you have both evidence of the issues you uncovered and proof of the areas that were tested but were implemented securely and thus didn’t yield any findings? If not, the task of writing the final report is going to be a challenging one.

We have already discussed how using testing methodologies can improve your consistency and maximise your coverage, raising the quality bar across your projects. Following a standard methodology will ensure that you’ll have gathered all the evidence you need to provide a solid picture of your work in the final deliverable. Otherwise, the temptation of going down the rabbit hole, chasing a bug that may or may not be there, may become too strong. We’ve all been there, and there is nothing wrong with it, as long as it doesn’t consume too much time and enough time is left to cover all the areas of the assignment. If you fail to balance your efforts across the attack surface, this will be reflected in the report (i.e. you won’t be able to discuss the areas you didn’t cover) and it will reflect badly on your and your firm’s ability to meet your client’s expectations.

Security report sections

For the rest of this post, we will assume that you have been using a collaboration tool and following a testing methodology during the testing phase and that, as a result, you’ve got all the results you need and have attained full coverage of the items in the scope of the engagement.

The goal of this post is not to provide a blow-by-blow breakdown of every possible section and structure; there are comprehensive resources on the net that go beyond what we could accomplish here (see Reporting – PTES or Writing a Penetration Testing Report). We want to focus on the overall structure and the reasons behind it, as well as the approach and philosophy to follow when crafting the report, to ensure you are producing a useful deliverable. At a very high level, the report content must be split between:

  • Executive summary
  • Technical details
  • Appendices

Executive summary

This is the most important section of the report, and it is important not to kid ourselves into thinking otherwise. The project was commissioned not because of an inherent desire to produce a technically secure environment but because there was a business need driving it. Call it risk management or strategy or marketing, it doesn’t matter. The business decided that a security review was necessary and our duty is to provide the business with a valuable summary.

The exec summary is probably the section that will be read by every single person going through the report; it is important to keep that in mind and to make sure that it is worded in language that doesn’t require a lot of technical expertise to understand. Avoid talking about specific vulnerabilities (e.g. don’t mention cross-site request forgery) and focus on the impact these vulnerabilities have on the environment or its users. The fact that they are vulnerable to X is meaningless unless you also answer the question “so what?” and explain why they should care, why it is a bad thing and why they should be looking into mitigating the issue. As Guila says in the article, why give a presentation at all if you are not attempting to change the audience’s behaviours or attitudes? And the security report is most definitely a presentation of your results to your client.

Don’t settle for throwing a bunch of low-end issues into the conclusions (e.g. “HTTPS content cached” or “ICMP timestamps enabled”) just to show that you uncovered something. If the environment was relatively secure and only low-impact findings were identified, just say so; your client will appreciate it.

Frame the discussion around the integrity, confidentiality, and availability of the data stored, processed and transmitted by the application. Just covering the combinations of these six concepts should give you more than enough content to create a decent summary (protip: meet the McCumber cube).
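As a rough illustration of how far those combinations stretch, here is a tiny Python sketch (the wording of the prompts is ours, not a prescribed template) that enumerates the nine questions you could use to brainstorm the summary:

```python
# Enumerate the nine combinations of security property x data state
# as prompts for the executive summary. Illustrative only.
properties = ["confidentiality", "integrity", "availability"]
states = ["stored", "processed", "transmitted"]

for prop in properties:
    for state in states:
        print(f"How well is the {prop} of {state} data protected, and what did the test show?")
```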

Apart from the project’s conclusions and recommendations, it is important that this section contains information about the scope of the test and that it highlights any caveats that arose during the engagement. Again, this is to correctly frame the discussion and give readers who may not be as familiar with the particular environment (e.g. the CSO) the appropriate context.

In addition, it offers you protection should the client decide to challenge your results, approach or the coverage attained. If the client requested that a host or a given type of attack be out of scope, this needs to be clearly stated. Along the same lines, if there were important issues affecting the delivery (e.g. the environment was offline for 12 hours), these have to be reflected. There is no need to go overboard on this either: if the application was offline for half an hour on the first day of a five-day test and you don’t think this had an impact (e.g. you were able to do something else during that time or managed to attain full coverage throughout the rest of the test), there is no point in reflecting it in the report.

Technical details

This is the area that should be easiest to craft from the tester’s perspective. There is not much to add here other than to keep your entries relevant to the current project. For instance, don’t include MSDN references explaining how to do X in .NET when the application is written in Java, and don’t link to the Apache site if all the servers are running IIS.

I don’t want to get into the scoring system for the vulnerabilities because that could add a few thousand words to the post; just pick a system that works for you and your clients and try to be consistent. This is where having a report entry management system in place (*cough*, like VulnDB) can help maintain consistency of language and rating across projects and clients, especially for larger teams.

A final note on what to include in each finding: think about the re-test. If six months down the line the client comes back and requests a re-test, would any of your colleagues be able to reproduce your findings using exclusively the information you have provided in the report? You may be on holiday or otherwise unavailable during the re-test. Have you provided enough information in the first place? Non-obvious things that usually trip you up are details such as the user role you were logged in as when you found the issue, or the series of steps you followed from the login form to the POST request that exposed the issue. Certain issues will only be triggered if the right combination of events and steps is performed, and documenting the last step in the process doesn’t usually provide a solid enough base for a re-test.
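To make the “think about the re-test” advice concrete, here is a hypothetical sketch of the level of detail a single finding could capture. The field names, hosts and payload are invented for illustration and are not a Dradis schema:

```python
# Illustrative only: the point is the level of detail, not the exact field names.
finding = {
    "title": "Stored XSS in the itinerary notes field",
    "affected_host": "https://booking.example.com",
    "user_role": "authenticated traveller (standard account, no admin rights)",
    "preconditions": "an existing booking with at least one itinerary entry",
    "reproduction_steps": [
        "Log in as a standard traveller account.",
        "Open My Bookings > Itinerary > Edit notes.",
        "Submit the payload <script>alert(1)</script> in the notes field (POST /itinerary/notes).",
        "Reload the itinerary page as any user with access to the booking.",
    ],
    "evidence": "request/response pair saved as xss-itinerary-notes.txt",
}
```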

Finally, remember that the purpose of the document is not to show how smart you are or how many SQLi techniques you used. Everything needs to be weighed and measured against the engagement goals and the business impact to the client. For instance, an unglamorous absence of account lockouts in the client’s public-facing webapp is likely to have a bigger impact on their business and users than a technically brilliant hack combining path traversal, command execution and SQLi in a backend admin interface only reachable by IT administrators over a secure VPN link.

Appendices

The appendices should contain information that, while not key to understanding the results of the assessment, would be useful for someone trying to gain a better insight into the process followed and the results obtained.

An often overlooked property of the appendices is that they provide a window into the internal processes followed by the testing team in particular and the security firm in general. Putting a bit of effort into structuring this section and giving a clearer view of those processes increases the transparency of your operations and thus the trust your clients can place in you. The more you let them see what is going on behind the curtain, the more they’ll be able to trust you, your team and your judgment.

In the majority of cases, this additional or supporting information is limited to scan results or a hodgepodge of tool output. This is fine, as it will help during the mitigation and re-test phases, but there are other useful pieces of information that can be included. For instance, a breakdown of the methodology used by the team is something that you don’t see that often. I’m not talking about a boilerplate methodology blob (i.e. ‘this is what we normally do on infrastructure assessments’), but a real breakdown of the different areas assessed during this particular engagement, along with the evidence gathered for each task in the list, either providing assurance about its security or revealing a flaw. This will show that your firm is not only able to talk the talk during the sales and pre-engagement phases but that your team, on a project-by-project basis, is walking the walk and following all the steps in the methodology. Providing your clients with this level of assurance will automatically set you ahead of the pack, because not many firms are able (or willing) to do so.
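One way to make such a breakdown cheap to produce is to record it as structured data while you test. The sketch below is a hypothetical example (field names and entries invented for illustration, not a Dradis feature) of a per-engagement methodology checklist that could be dropped into the appendix almost verbatim:

```python
# Sketch of a per-engagement methodology breakdown for the appendix.
# Every task gets an outcome and a pointer to the evidence behind it.
methodology = [
    {"area": "Authentication", "task": "Account lockout policy",
     "outcome": "no lockout after 50 failed attempts", "evidence": "Burp Intruder log #12"},
    {"area": "Authentication", "task": "Password reset token entropy",
     "outcome": "no issues found", "evidence": "sample of 200 tokens, analysis notes"},
    {"area": "Session management", "task": "Cookie flags (Secure, HttpOnly)",
     "outcome": "no issues found", "evidence": "response headers captured for all hosts"},
]

for entry in methodology:
    print(f'{entry["area"]} / {entry["task"]}: {entry["outcome"]} ({entry["evidence"]})')
```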

tl; dr;

Understanding the project goals, realising that the security report you are crafting will have multiple audiences with different levels of technical expertise, and making sure that the deliverable reflects not only the issues uncovered but also the coverage attained and the process followed to attain it will go a long way towards producing a useful pentest deliverable. Couple that with enough technical information to give the project team sufficient knowledge of the issues uncovered, the best mitigations to apply and a means to verify the revised implementation, and you will have succeeded in your role of trusted security advisor.

How can security testing firms add value to their clients?

Some time ago we discussed a handful of areas that clients should evaluate when choosing an independent penetration testing firm. However, it is worth exploring the other side of the coin as well: how can security firms prove to their prospective clients that they are the best security partner they will find?

The problem with internal processes

There are a number of things you can work on internally, like using testing methodologies to ensure consistent project delivery and making sure that it’s easy for your testers to collaborate. As we saw in those articles, it is not always easy to prove the value of internal processes to prospective clients. In addition, every Tom, Dick and Harry will claim they follow some sort of methodology. That is the problem with claims: anyone can make them. How do you prove yours to your future clients?

Security testing of a hardened environment

A very interesting way to explore this topic is the hardened environment problem. Say you are chosen to perform a pentest on an environment that has been heavily hardened. After 3 or 4 days of testing, your team comes back almost empty-handed: a few minor issues here and there.

From your client’s perspective a report that only lists a handful of vulnerabilities can mean a few things:

  1. The environment was hardened and secure. Celebrations ensue.
  2. The testing team didn’t know what to do.
  3. The testing team did very little.

There is a subtle difference between 2 and 3. You can have an otherwise competent tester looking at an environment built on technologies that he’s not familiar with. Time will be spent learning about the technologies and common attack vectors, and in the end there will only be time to scratch the surface and identify the low-hanging fruit. In addition, an inexperienced tester may not be able to recognise the subtle clues that indicate that a vulnerability exists.

In that situation, how can you put your client’s concerns to rest and assure them that sufficient coverage was attained?

Transparency

If you really want to become your client’s trusted security partner, what better way to do so than revealing what’s hiding behind the curtain?

Your clients will be better able to appreciate your service if they get to understand everything that is going on in the background. What actions and processes kick off after they give you the go-ahead on a new engagement? This is of course scary, but it is not an all-or-nothing decision: there are degrees of transparency.

You need to provide them with proof that you’ve followed a testing methodology

Most of the time, clients are interested in both what was found (i.e. issues, findings) and what was covered. Was there enough time? What level of assurance can they draw from the results of this engagement? Reports written by less experienced testers make this problem more evident. They tend to focus on the findings, on the elegant hacks and the smart tricks, but they leave out the overall coverage. If an area was assessed and nothing interesting was found, it is unlikely that the area will get a mention in the report.

To be fair, of the many reports I’ve seen as part of our report customisation service for Dradis Pro, very few have a section that provides a breakdown of the methodology that was used: a list of areas covered, along with evidence and proof of why each of them was ticked off during the engagement. Saying that you follow a testing methodology as part of your sales pitch is one thing; providing auditable proof that you do is a very different story.

You need to show them what it means to them that your team can collaborate efficiently

I’ve worked in teams where a client requesting daily or even weekly status updates was a big deal. The unstructured approach to testing meant that producing an interim deliverable required a significant investment of time. For the client, this meant that by trying to stay on top of things and make sure they were getting a good return on their investment, they were being penalised with a waste of time and focus: the team was more worried about the interim reports than about providing sufficient coverage.

Daily reports shouldn’t have to be a burden. If everyone in the team is on the same page, sharing and writing up their findings as they go along, producing a daily report should be one click away.

We talked about how a collaboration tool comes in handy when unexpected team changes occur in the “being on the same page” article. However, I want to give you a concrete, real-world example. When my baby girl was born, I was in the middle of a test (second day out of five). Of course, the company I was working for knew that we were due around those weeks, and they graciously kept me on remote engagements. However, we didn’t know exactly when it was going to happen, and when it did, I had to drop everything I was doing and focus on what was important. I was on the test on my own, but I was still using Dradis Pro to manage the project. What this meant is that when the time came, I was able to generate an interim report with one click and one of my colleagues was able to take over the project. All my notes, findings, tool output and progress were recorded. Handover happened overnight and didn’t impact our testing window at all. When we explained to our client what was going on, they were sympathetic about me having to leave halfway through but impressed that we were able to hand over the project with virtually no wasted time.

If you can show your clients that your internal processes allow you to react this swiftly, that they can have an interim report whenever they need one without impacting coverage, and that as a result the quality of the service they receive from you will always be excellent (even in the face of unforeseen circumstances), you will be a long way towards earning their trust.

Deliverables

The last area in which you can add value for your clients is the quality of your deliverables.

Traditionally, the outcome of a successful security assessment would be a penetration testing report. A great pentest report will contain a high-level overview of the results, mitigation advice, technical details and a breakdown of activities performed and results obtained during the engagement. The report will typically take the form of a Word or PDF document.

There will always be a need to provide results in report form: something that the business can read, understand and incorporate into their internal risk assessment framework. However, the more mature organisations that have accumulated years of experience dealing with IT security matters (e.g. financial institutions, big software vendors) are demanding more and more from their security partners. It is no longer enough to “read” about your issues and “learn” what mitigation techniques should be implemented. After the engagement is over, someone in their team (which could be a single person or multiple people across different departments) will have to:

  • Go through all the reported findings.
  • For each one, evaluate whether to accept all, part or none of the risks involved.
  • Incorporate the ones that require action into the internal issue tracking system.
  • Assign issue ratings in line with the company’s internal policy (e.g. “Urgent”, “Low priority”, etc.).
  • Follow up on the progress of each item.
  • Request a re-test or manually verify each issue (using the information provided in the original report).

It seems fair to say that *most* of the work around the project happens after the security vendor is long gone. These clients would benefit from a vendor that can go the extra mile: one that takes the time to understand their internal processes, their ratings and their issue tracking workflow, and provides them with additional support.

Some of the bigger clients in the industry are already requiring their providers to use the client’s own reporting template and to provide findings both as a long-form report and in a spreadsheet that can be programmatically processed. They can get away with it because they are so big that the security vendor can’t afford to lose the account. However, I suspect this is the path the industry is following: more and more clients will need their vendors to provide more support and to work more closely with them through the assessment / remediation / re-test cycle.

Learning about the client’s internal processes or accommodating requests to use their particular template or to provide the output in multiple formats involves some additional overhead for the pentesting firm. This is even more true if the security vendor is doing everything manually. The account manager has to keep track of the latest version of the template the client wants you to use. He needs to remind the test team every time that this test is different, that they need to use the client’s template (and the latest version of it), and that they need to provide their findings both as a document and as a spreadsheet. If there is a QA process (!), it will have to cover two separate documents with virtually the same content, and so on. Multiply this by a few clients with specific needs and it can quickly become a nightmare.

At the other end of the spectrum, a firm that is already streamlining its delivery process with an extensible collaboration and reporting tool can accommodate this type of client requirement with virtually no effort. If your team is adding their findings as they go along and automatically generating most of the report, creating two separate documents (one report and one spreadsheet) is quite literally two clicks away. You will need to invest some time when onboarding the client to understand their reporting requirements and the formats they need, and to extend your tool to support them. But once that initial investment is made, there is no significant overhead involved in each additional engagement delivered. When a change in the deliverable format is required, you adjust the tool’s export plugin and the team doesn’t even notice.
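To illustrate the idea (in plain Python, not Dradis Pro’s actual plugin API, and with made-up findings), the sketch below keeps the results in one structured place and derives each client-specific deliverable from a small exporter; when a client changes their required format, only the relevant exporter changes:

```python
import csv
import json

# A single source of truth for the findings, populated during the engagement.
findings = [
    {"title": "Weak SSL ciphers", "risk": "Medium", "host": "10.0.0.1",
     "description": "The server accepts export-grade cipher suites."},
    {"title": "Missing account lockout", "risk": "High", "host": "booking.example.com",
     "description": "Unlimited login attempts are allowed."},
]

def export_spreadsheet(path):
    """Client-specific spreadsheet that can be fed into their issue tracker."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "risk", "host", "description"])
        writer.writeheader()
        writer.writerows(findings)

def export_json(path):
    """Machine-readable dump for clients who want to process results themselves."""
    with open(path, "w") as f:
        json.dump(findings, f, indent=2)

# When a client changes their required format, only the relevant exporter changes;
# the team keeps recording findings exactly as before.
export_spreadsheet("findings.csv")
export_json("findings.json")
```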

If the only thing you are providing to your clients is a pentest report listing all the findings, you are doing yourself a disservice. Let your clients know that you can provide them with your results in whatever format they need. However, make sure your backend processes and workflow are laid out in such a way that accommodating requests for new deliverable formats doesn’t create additional overhead on a per-project basis, or you will be burdening your team unnecessarily.

tl; dr;

Clients shopping around for security vendors sometimes need help to make the best decision for their business. The more transparent you become about your processes, the easier they will find it to trust you.

Providing consistent and auditable results is the first step towards building up that trust. Show them how they will benefit from your robust internal processes.

And help them to manage the fallout of the engagement by providing your results in the format that is most valuable to them and their internal processes. Don’t limit your output to a single long-form report deliverable.