Using a collaboration tool: why being on the same page matters

In this post I want to expand on the ideas discussed in The importance of collaboration during security testing, focusing not so much on the splitting of tasks and work streams (that’s the subject of another post) but on the magic that happens when all team members are on the same page, sharing a clear picture of what is going on, and on the role your collaboration tool plays in making that happen.

Security testing often benefits from multiple people looking at the same target. The problem with one-person assessments is that, even if you follow a testing methodology, you may miss things. The way you approach testing, following familiar patterns and looking for certain telltale signals from the environment, means you bring your own background and intuition to the test. A fellow auditor would bring a different set of patterns and expectations. Combining the two approaches almost always produces interesting results.

As a rule of thumb, the more people trying to uncover issues, the better. Of course there is a limit to this rule (e.g. in a large enough test team, some testers would try to hide in the crowd and not pull their weight), but again that’s a subject for another post.

In an ideal world

In an ideal world, everyone in the team would be in the same room for the duration of the test. They’d be talking to each other, looking over each other’s shoulders when something interesting comes up, and writing down everything they find, as they find it, on a whiteboard.

Both the economics and the logistics of real-world testing make this scenario highly unlikely except under very specific circumstances. Testers are often spread around the country (or the world), clients can’t justify the investment of a larger testing team when a smaller one could get the job done, and so on. Only in special circumstances, typically when the reputation of either the penetration testing firm (e.g. PCI ASV accreditation) or the client (e.g. a new product launch) is at stake, are the conditions met to justify this kind of effort. A strike team is put together to tear the system apart, and testers and target are locked in a room (rules of Thunderdome apply).

The one-man test

More often than not, security tests are performed by a single auditor. This is fine, provided the auditor has the right background for the job.

Even in this scenario it is fairly easy for things to slip through the cracks. You are focused on investigating a promising issue when you notice a weird behaviour, and if you don’t make a note there and then to look into it later, you will forget about it. The weird behaviour doesn’t get investigated. The test finishes and you even forget that you noticed it in the first place. Everybody loses.

Note-taking is a crucial skill: noticing minor issues, adding them to the queue, triaging, and then back to square one. All this, of course, while following a suitable testing methodology. The devil is in the details, and noticing the stuff that is not covered by the standard methodology is often the key to unlocking some of the more interesting bugs.

I would argue that even one-man teams would benefit from using a collaboration tool that lets them keep the big picture of the engagement (e.g. scope, progress, methodology, notes, findings, attachments, etc.). But in this case, I would even settle for a pen and a notebook. Just make sure nothing slips through the cracks! Of course, you’ll also need a cross-cut shredder to dispose of the paper once you are done 😉

Nevertheless, using a collaboration tool would enable you to share your interim findings with other stakeholders (account manager, client POC, etc.) and possibly reduce your reporting time.

The ever-changing team

On the opposite side of the spectrum you have ‘fluid team’ tests. We’ve all been there. On day one, you’ve got 2 testers who are going to be testing for 2 weeks; then in the afternoon, client requirements change, and now you have 3 testers working for 1 week. On day two, they change again and it’s back to 2 testers for 2 weeks, but your original team-mate has been pulled away to perform some specialist testing only he’s capable of delivering. With every change in scope and team, you have to make sure everyone is brought to the same page or you won’t be able to make any progress.

If you’re keeping track of the project via email, you’ve got a problem. Every time a new tester joins the team, you’ve got to forward all the scoping emails, plus all the “I’ve found X” emails. Every time someone leaves the team, you have to chase them to send more emails with their latest findings. For the team leader, this is a waste of valuable time that he can’t spend testing.

The alternative, using a collaboration tool, makes things a lot simpler. You receive scoping information and add it to the project. A new tester joins the team? They check the project information in the tool to find out about the scope. Everyone adds their findings as they go along. A team member suddenly becomes unavailable? No problem, all their findings are already in the tool. A new team member joins halfway through? They check the project page to get up to speed in no time: go through all the issues covered so far, check the methodology to find out what remains to be done, roll up your sleeves and start working.
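As a rough illustration of the “add your findings as you go along” workflow, here is a minimal sketch that pushes a finding to a collaboration tool over HTTP. The endpoint URL, token header and payload fields are hypothetical placeholders, not the API of Dradis or any other specific product; the point is simply that recording a finding should be as lightweight as a single function call.

```python
# Minimal sketch: pushing a finding to a (hypothetical) collaboration tool API.
# The URL, header and field names below are illustrative placeholders only.
import requests

API_URL = "https://collab.example.com/api/projects/42/findings"  # hypothetical
API_TOKEN = "replace-with-your-token"                            # hypothetical

def submit_finding(title, severity, evidence, notes=""):
    """Record a finding the moment it is spotted, so nothing slips through."""
    payload = {
        "title": title,
        "severity": severity,   # e.g. "low", "medium", "high"
        "evidence": evidence,   # request/response snippets, tool output, etc.
        "notes": notes,
    }
    response = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Token {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    submit_finding(
        title="Reflected XSS in search parameter",
        severity="medium",
        evidence="GET /search?q=<script>alert(1)</script> returns the payload unescaped",
        notes="Spotted while testing pagination; needs a second pair of eyes.",
    )
```

Whether you script it or type it into a web form, the idea is the same: the finding lands in the shared project the moment it is discovered, so a newly joined (or suddenly departing) tester never leaves a gap.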

The report

I have been arguing for the use of a collaboration tool during the engagement to ensure everyone is on the same page. If everything has gone according to plan, the scoping information was available in the tool at the beginning of the test and everyone has been feeding in their notes and findings as they went along. Now the test is over and it’s time to write the report. Whoever has to write it knows that all the information is in a single place: issues, evidence, screenshots, tool output, everything that’s needed can be found there. If our report writer is savvy enough, he will have been keeping an eye on the project page to ensure that the information for every issue found by each team member was complete, every i dotted and every t crossed. And all of this can happen before the reporting time even starts.

Consider for a moment the alternative. There is no collaboration tool, progress is shared via email (e.g. “Hey guys, look what I’ve found!”) and each member of the team keeps a notes.txt file on their laptop. On the last day of the engagement, the report writer receives the notes from each tester: plain text files, Word documents, .zip files with text and screenshots, etc. A significant amount of time is wasted collating results. Even if everyone provided all the information, everything still has to be re-formatted and re-styled for the final report. If someone missed something, or if further evidence or details are required, it is almost certain that you won’t be able to get them: the system is firewalled off again, the test accounts no longer work, the person who found the issue in the first place is now doing a gig for the government in a bunker somewhere with zero connectivity, etc.

The amount of work required to feed a collaboration tool as you go along, with complete information about the issues you uncover, is insignificant compared to the task of manually collating the results of N testers (remember the ‘fluid team’ problem?), reformatting everything and chasing around the missing bits and pieces.
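To make the manual-collation side of that comparison concrete, here is a rough sketch of what a “merge everyone’s notes” script tends to look like at the end of an engagement. The directory layout and file names are hypothetical; note that only the plain-text files can be merged automatically, and everything else (Word documents, zips, screenshots) is left as manual work.

```python
# Rough sketch of collating per-tester notes at the end of a test.
# The directory layout (notes/<tester>/...) is a hypothetical example.
from pathlib import Path

NOTES_DIR = Path("notes")           # one subfolder per tester (hypothetical)
OUTPUT = Path("collated_notes.txt")

def collate():
    merged_chunks = []
    skipped = []
    for path in sorted(NOTES_DIR.rglob("*")):
        if not path.is_file():
            continue
        if path.suffix.lower() == ".txt":
            merged_chunks.append(f"==== {path.parent.name} :: {path.name} ====\n")
            merged_chunks.append(path.read_text(encoding="utf-8", errors="replace"))
            merged_chunks.append("\n\n")
        else:
            # .docx, .zip, screenshots... no automatic merge, manual work ahead.
            skipped.append(path)
    OUTPUT.write_text("".join(merged_chunks), encoding="utf-8")
    print(f"Merged {len(merged_chunks) // 3} text files into {OUTPUT}")
    print(f"{len(skipped)} other files still need manual collation:")
    for path in skipped:
        print(f"  - {path}")

if __name__ == "__main__":
    collate()
```

Even in this best case, the output is an unstyled wall of text that still has to be restructured, deduplicated and reformatted before it resembles a report.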

Defining the requirements for a collaboration tool

We’ve covered a lot of ground in this post. Hopefully I’ve managed to highlight some of the merits of using a collaboration tool. The qualities we would like to see in our solution are:

  • Effective sharing: keep things organised and provide a big-picture overview of the project: scope, coverage/progress, findings, notes, attachments, etc.
  • Flexible: you need to be able to extend the tool and adapt it to your needs and to the other tools and systems in your environment. “Silver bullet” solutions that pretend to do everything for everyone out of the box most likely won’t fit your needs.
  • Capture all the data needed for the report. If the solution only lets you capture some of the information you’ll need, you will be adding complexity to the workflow. Get all the information at once, while it’s fresh in the tester’s mind, add it to the solution, and forget about it until the report is due (a minimal sketch of this idea follows the list).
  • Ideally, it should have report generation capabilities. Customisable report templates and the possibility of editing the report yourself after it is generated (e.g. Word vs. PDF) are also a plus.
  • Easy to adopt. To disrupt the testers’ workflow as little as possible, something that is easy to use and cross-platform will go a long way towards adoption.
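To make the third and fourth points a little more concrete, here is a minimal sketch of what “capture the complete finding once, render it later through a customisable template” can look like. The field names and template format are illustrative assumptions, not the data model of Dradis or any other tool.

```python
# Minimal sketch: capturing complete finding data once and rendering it
# through a customisable template. Field names and template format are
# illustrative assumptions only.
from dataclasses import dataclass
from string import Template

# Swapping in a different Template changes the report style without
# touching the captured data.
ISSUE_TEMPLATE = Template(
    "Title: $title\n"
    "Severity: $severity\n"
    "Description:\n$description\n"
    "Evidence:\n$evidence\n"
    "Recommendation:\n$recommendation\n"
)

@dataclass
class Finding:
    title: str
    severity: str
    description: str
    evidence: str
    recommendation: str

    def render(self, template: Template = ISSUE_TEMPLATE) -> str:
        return template.substitute(vars(self))

if __name__ == "__main__":
    finding = Finding(
        title="Outdated TLS configuration",
        severity="Low",
        description="The server still accepts TLS 1.0 connections.",
        evidence="openssl s_client -tls1 -connect target:443 succeeded",
        recommendation="Disable TLS 1.0/1.1 and prefer TLS 1.2 or later.",
    )
    print(finding.render())
```

The design choice worth noting is the separation: if the tester fills in every field while the issue is fresh, report generation becomes a formatting exercise rather than an archaeology project.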

These are some of the guiding principles we followed when we created and open-sourced the Dradis Framework back in the day. More than 24,000 downloads later, and after Dradis has been included in the BackTrack distro and featured in books like Grey Hat Hacking and Advanced Penetration Testing for Highly-Secure Environments, it looks like we were onto something.

These days, we continue to work hard to help our users collaborate more effectively and our clients to be more competitive.

Every manager or senior team member who tries to push for a collaboration tool in an environment where none is being used is bound to face a degree of pushback. This post provides some of the counter-arguments that can be used to address it; however, overcoming that pushback is a complicated subject that calls for an entire post of its own.
