Category Archives: Security Practice

Posts in this category cover general topics on running a security team (either internal or external).

How can security testing firms add value to their clients?

Some time ago we discussed a handful of areas that clients should evaluate when choosing an independent penetration testing firm. However, it is worth exploring the other side of this coin as well: how can security firms prove to their prospective clients that they are the best security partner they will find?

The problem with internal processes

There are a number of things you can work on internally, like using testing methodologies to ensure consistent project delivery and making sure that it's easy for your testers to collaborate. As we saw in those articles, it is not always easy to prove the value of internal processes to prospective clients. In addition, every Tom, Dick and Harry will claim they follow some sort of methodology. That is the problem with claims: anyone can make them. How do you prove yours to your future clients?

Security testing of a hardened environment

A very interesting way to explore this topic is the hardened environment problem. Say you are chosen to perform a pentest on an environment that has been heavily hardened. After 3 or 4 days of testing your team comes back almost empty-handed, with only a few minor issues here and there.

From your client’s perspective a report that only lists a handful of vulnerabilities can mean a few things:

  1. The environment was hardened and secure. Celebrations ensue.
  2. The testing team didn’t know what to do.
  3. The testing team did very little.

There is a subtle difference between 2 and 3. You can have an otherwise competent tester looking at an environment built on technologies they're not familiar with. Time will be spent learning about those technologies and their common attack vectors. In the end there will only be time to scratch the surface and identify the low-hanging fruit. In addition, an inexperienced tester may not be able to recognise the subtle clues that indicate that a vulnerability exists.

In that situation, how can you put your client’s concerns to rest and assure them that sufficient coverage was attained?

Transparency

If you really want to become your client’s trusted security partner, what better way to do so than revealing what’s hiding behind the curtain?

Your clients will be better able to appreciate your service if they get to understand everything that is going on in the background. What actions and processes kick off after they give you the go-ahead on a new engagement? This is of course scary, but it is not an all-or-nothing decision: there are degrees of transparency.

You need to provide them with proof that you’ve followed a testing methodology

Most of the time, clients are interested in both what was found (i.e. issues, findings) and what was covered. Was there enough time? What level of assurance can they draw from the results of this engagement? Reports written by less experienced testers make this problem more evident. They tend to focus on the findings, on the elegant hacks and the smart tricks, but they leave out the overall coverage. If an area was assessed and nothing interesting was found, it is unlikely that the area will get a mention in the report.

To be fair, having seen a good number of reports as part of our report customisation service for Dradis Pro, very few have a section that provides a breakdown of the methodology that was used: a list of areas covered along with evidence and proof of why each of them was ticked off during the engagement. Saying that you follow a testing methodology as part of your sales pitch is one thing; providing auditable proof that you do is a very different story.

You need to show them what your team's efficient collaboration means for them

I've worked in teams where a client requesting daily or even weekly status updates was a big deal. The unstructured approach to testing meant that producing an interim deliverable required a significant investment of time. For the client this meant that, by trying to stay on top of things and make sure they were getting a good return on their investment, they were penalised with a waste of time and focus. The team was more worried about the interim reports than about providing sufficient coverage.

Daily reports shouldn’t have to be a burden. If everyone in the team is on the same page, sharing and writing up their findings as they go along, producing a daily report should be one click away.

We talked about how a collaboration tool comes in handy when unexpected team changes occur in the being on the same page article. However, I want to give you a concrete, real-world example. When my baby girl was born, I was in the middle of a test (day two out of five). Of course, the company I was working for knew that we were due around those weeks and they graciously kept me on remote engagements. However, we didn't know exactly when it was going to happen, and when it did, I had to drop everything I was doing and focus on what was important. I was on the test on my own, but I was still using Dradis Pro to manage the project. What this meant is that when the time came, I was able to generate an interim report with one click and one of my colleagues was able to take over the project. All my notes, findings, tool output and progress were recorded. Handover happened overnight and didn't impact our testing window at all. When we explained to our client what was going on, they were sympathetic about me having to leave halfway through, but impressed that we were able to hand over the project with virtually no wasted time.

If you can show your clients that your internal processes allow you to react this swiftly, that they can have an interim report whenever they need one without impacting coverage, and that as a result the quality of the service they receive from you will always be excellent (even in the face of unforeseen circumstances), you will be well on your way to earning their trust.

Deliverables

The last area in which you can add value to your clients is the quality of your deliverables.

Traditionally, the outcome of a successful security assessment has been a penetration testing report. A great pentest report contains a high-level overview of the results, mitigation advice, technical details and a breakdown of the activities performed and results obtained during the engagement. The report typically takes the form of a Word or PDF document.

There will always be a need to provide results in report form: something that the business can read, understand and incorporate into their internal risk assessment framework. However, the more mature organisations that have accumulated years of experience dealing with IT security matters (e.g. financial institutions, big software vendors, etc.) are demanding more and more from their security partners. It is no longer enough to “read” about your issues and “learn” what mitigation techniques should be implemented. After the engagement is over, someone in their team (which could be a single person or multiple people across different departments) will have to:

  • Go through all the reported findings.
  • For each one, evaluate whether to accept all, part or none of the risks involved.
  • Incorporate the ones that require action into the internal issue tracking system.
  • Assign issue ratings in line with the company’s internal policy (e.g. “Urgent”, “Low priority”, etc.).
  • Follow up on the progress of each item.
  • Request a re-test or manually verify each issue (using the information provided in the original report).

It seems fair to say that *most* of the work around the project happens after the security vendor is long gone. These clients would benefit from a vendor that can go the extra mile, that can take the time to understand their internal process, their ratings and the issue tracking workflow and provide them with additional support.

Some of the bigger clients in the industry are already requiring their providers to use the client's own reporting template and to provide findings both as a long-form report and in a spreadsheet that can be programmatically processed. They can get away with it because they are so big that the security vendor can't afford to lose the account. However, I suspect this is the path the industry is following. More and more clients will need their vendors to provide more support and to work more closely with them through the assessment / remediation / re-test cycle.

Learning about the client's internal processes or accommodating requests to use their particular template or provide the output in multiple formats involves some additional overhead for the pentesting firm. This is even more true if the security vendor is doing everything manually. The account manager has to keep track of the latest version of the template the client wants you to use. He needs to remind the test team every time that this test is different, that they need to use the client's template (and the latest version of it), and that they need to provide their findings both in a document format and in a spreadsheet. If there is a QA process (!) it will have to cover two separate documents with virtually the same content, etc. Multiply this by a few clients with specific needs and it can quickly become a nightmare.

On the other side of the spectrum, a firm that is already streamlining their delivery process with an extensible collaboration and reporting tool can accommodate this type of client requirement with virtually no effort. If your team is adding their findings as they go along and automatically generating most of the report, creating two separate documents (one report and one spreadsheet) is quite literally two clicks away. You will need to invest some time when on-boarding the client to understand their reporting requirements and the formats they need, and to extend your tool to support them. But once that initial investment is made, there is no significant overhead involved in each additional engagement delivered. When a change in the deliverable format is required, you adjust the tool's export plugin and the team doesn't even notice.
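To make the "two clicks away" idea concrete, here is a minimal sketch of the kind of export step such a tool automates, assuming the findings can be dumped as structured data (JSON in this example). The file and field names are made up for illustration and are not any particular product's actual format:

```python
# Minimal sketch: producing two deliverable formats from one findings list.
# Assumes the collaboration tool can export findings as JSON; the file and
# field names ("title", "severity", "description") are illustrative only.
import csv
import json


def load_findings(path):
    """Read the findings dumped by the collaboration tool."""
    with open(path) as f:
        return json.load(f)


def export_spreadsheet(findings, path):
    """Write the client-facing spreadsheet (one row per finding)."""
    fields = ["title", "severity", "description"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for finding in findings:
            writer.writerow({key: finding.get(key, "") for key in fields})


def export_report(findings, path):
    """Write a plain-text long-form report from the same data."""
    with open(path, "w") as f:
        for finding in findings:
            f.write(f"{finding['title']} ({finding['severity']})\n")
            f.write(f"{finding['description']}\n\n")


if __name__ == "__main__":
    findings = load_findings("findings.json")
    export_spreadsheet(findings, "client_findings.csv")
    export_report(findings, "client_report.txt")
```

The point is not the specific code but the workflow: the data is captured once, and each deliverable format is just another export target.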

If the only thing you are providing to your clients is a pentest report listing all the findings, you are doing yourself a disservice. Let your clients know that you can provide them with your results in whatever format they need. However, make sure your backend processes and workflow are laid out in such a way that accommodating requests for new deliverable formats doesn't create additional overhead on a per-project basis, or you will be burdening your team unnecessarily.

tl; dr;

Clients shopping around for security vendors sometimes need help to make the best decision for their business. The more transparent you become about your processes, the easier they will find it to trust you.

Providing consistent and auditable results is the first step towards building up that trust. Show them how they will benefit from your robust internal processes.

And help them manage the fallout of the engagement by providing your results in the format that is most valuable to them and their internal processes. Don't limit your output to a single long-form report deliverable.

How to choose a software vendor

So, you’ve decided you need a software vendor to build an application for you. It might be an idea you have for a great new product, or a tool that needs to integrate with an existing system, or an application to automate a manual process in your organisation.

I assume you know a little about software but do not have much experience in how applications are built or how software development teams operate; you would like to know how to pick a software team that will build a quality product, understand what you need and contribute to the success of your business.

To give you some background, I am a software engineer, have led software teams, outsourced projects to remote software teams and delivered work for remote clients. I would like to share a few ideas to help you judge if a software team is worth their weight in gold.

How well do they communicate?

You will be spending a fair bit of time with the team you pick. You'll explain your idea to them, they'll explain their approach to the project. It's important that you understand each other, that you click, that you are on the same wavelength. When you explain your idea to them they should understand it and be able to contribute constructively to it. When they explain their implementation to you, it should be in such a way that you can understand it. Successful communication is critical to the success of your project – pick a team that you can communicate well with.

Do they understand your business domain?

It is incredibly valuable if the team that you engage with understands your business domain or the industry that you are in. Have they worked with clients in your industry before? Do they specialise in a specific domain and does it overlap with yours?

A software team that understands the context in which an application will be used will spend less time trying to understand the problem space and more time focussing on the solution. You might have a very clear, documented vision of your application, but inevitably decisions need to be made by the development team during development. A software development team makes better decisions when they understand the business domain.

Do you like their existing work?

Past behaviour predicts future behaviour. You can tell a lot from a team based on work that they have delivered in the past. Most teams would be happy to share a portfolio with you of applications that they have built before.

Do they design clear and easy to understand user interfaces? Is it intuitive to use their applications? Do you get a sense that they have thought through every scenario in which the application might be used? Do they take pride in their work?

Are they willing to involve you in the process?

A good software team values continuous contribution from their client. Be wary of a team that vanishes for 3 months with your requirements with the promise that they’ll deliver the perfect product when they return.

Before any line of code is written they should involve you in their planning process or at least give you clear visibility on their planning. When development starts they should be able to show you their progress on a weekly basis and invite your feedback.

A good software team understands that requirements may change during a project and they embrace change, knowing that certain things become clearer only during development.

Do they have good software development practices?

Even if you are not very familiar with software development you can still make a judgement of a team’s engineering practices by asking a few simple questions.

Do they test their software? Good software development teams have a diligent process through which they test their software. You can reasonably expect testing to be partially automated and partially manual. I would be careful if they do not do a fair bit of each. Most teams would be happy to discuss this.

Do they break their projects down into small testable chunks of work? Similar to manufacturing, it is best to break a software project into small well-defined pieces of work that can be tested from a user’s perspective. Successful teams have a well-defined process by which they define, implement and test pieces of work and measure their overall progress (protip: why can’t developers estimate time?).

Do they frequently deploy their code to a production-like environment? It is important that deployment to production is part of the development process early on in the project. This eliminates surprises at the end. It is also important that this process is automated.

Do they have a process in place to deal with bugs? Teams should have a process in place through which they track, prioritise and fix bugs.

Are they familiar with software security? Good software teams consider security from early on in the development project and have checkpoints in place during the project to review security of the application.

How do they stay on top of the latest technologies? Software development is a fast-changing industry. Good software teams encourage and support their developers to read, attend conferences, have side projects and experiment with new techniques.

Do they contribute to the software community? All software teams rely heavily on the wealth of knowledge that is provided by the software community. Healthy software teams give back to the community through publishing their knowledge, contributing to open source projects, speaking at conferences, etc.

What is their public reputation?

A good software team has happy clients (client testimonials), they engage in conversation in the public domain (twitter, mailing lists), they are regarded as specialists in their own domain (blogging, speaking at conferences), etc.

Wrapping it up

The above pointers are aimed at helping you judge a vendor's maturity in a few fundamental aspects of software development without having to know the industry jargon. There is more to software development than can be covered in a single blog post, but hopefully the above will enable you to have a confident discussion when engaging with a software team for the first time.

About Siebert Lubbe

Siebert is a software engineer, enthusiastic about all things software related – the code, the people, the business, the processes and emerging technologies.

He is based in Melbourne, Australia currently working at realestate.com.au. Find him on Twitter as @siebertlubbe or at siebertlubbe.com.

Should you create your own back-office/collaboration tools?

When an organisation tries to tackle the “collaboration and automated reporting problem”, one of the early decisions to make is whether they should try to build a solution in-house or use an off-the-shelf collaboration tool.

Of course, this is not an easy decision to make and there are a lot of factors involved including:

  • Your firm’s size and resources
  • Cost (in-house != free)
  • The risks involved

But before we begin discussing these factors, it will be useful to get a quick overview of what's really involved in creating a software tool.

The challenge of building a software tool

What we are talking about here is building a solid back-office tool, something your business can rely on, not a quick Python script to parse Nmap output.

My background is in information security and electrical engineering, but I’ve written a lot of software, from small funny side projects, to successful open source tools and commercial products and services.

I’m neither a software engineering guru nor an expert in software development theory (I can point you in the right direction though), but building a software tool involves a few more steps than just “coding it”. For starters:

  • Capture requirements
  • Design the solution
  • Create a test plan
  • Develop
  • Document
  • Maintain
  • Improve
  • Support your users

The steps will vary a little depending on your choice of software development methodology, but you get the idea.

If one of the steps is missing, the whole thing falls apart. For instance, say someone forgets to ‘document’ a new tool. Whoever wants to use the tool and doesn't have direct access to the developer (e.g. a new hire 8 months down the line) is stuck. Or if there is no way to track and manage feature requests and improvements, the tool will become outdated pretty quickly.

With this background on what it takes to successfully build a solid software tool, let's consider some of the factors that should play a role in deciding whether to go the in-house route or not.

Your firm’s size and resources

How many resources can your firm invest in this project? Google can put 100 top-notch developers to work on an in-house collaboration tool for a full quarter and the financial department won’t even notice.

If you are a small pentesting firm, chances are you don't have much in terms of spare time to spend on pet projects. As the team grows, you may be able to work some gaps into the schedule and free up a few resources. This could work out. However, you have to consider that not only will you need to find the time to create the initial release of the tool, but you'll also need to find the resources down the line to maintain, improve and support it. The alternative is to bring a small group of developers on payroll to churn out back-office tools (I've seen some mid- and large-size security firms successfully pull this off). However, this is a strategic decision which comes with a different set of risks (e.g. how will you keep your devs motivated? What about training/career development for them? Do you have enough back-end tools to write to justify the full salary of a developer every month?).

Along the same lines, if you're part of the internal security team of an organisation that isn't focussed on building software, chances are you'll have plenty on your plate already without having to add software project management and delivery to it.

Cost (in-house != free)

There is often the misconception that because you’re building it in-house, you’re getting it for free. At the end of the day whoever is writing the tool is going to receive the same salary at the end of the month. If you get the tool built at the same time, that’s like printing your own money!

Except… it isn't. The problem with this line of reasoning is the at the same time part. Most likely the author is being paid to perform a different job, something that's revenue-generating and has an impact on the bottom line. If the author stops delivering that job, all that revenue never materialises.

Over the years, I’ve seen this scenario play out a few times:

Manager: How long is it going to take?
Optimistic geek: X hours, Y days tops
Manager: Cool, do it!

What is missing from the picture is that it is not enough to set aside a few hours for “coding it”; you have to allocate time for all the tasks involved in the process. And more often than not, Maintaining and Improving are going to take the lion's share of the resources required to successfully build the tool (protip: when in doubt estimating a project: sixtoeightweeks.com).

One of the tasks that really suffers when going the in-house route is Support: if something breaks in an unexpected way, who will fix it? Will this person be available when it breaks, or is there a chance he'll be on-site (or abroad) for a few weeks before the problem can be looked into?

Your firm's revenue comes from your client work, not from spending time and resources working on your back-end systems. The fact that you can find some time and resources to build the first release of a given tool doesn't mean that maintaining, supporting and improving your own back-end tools will make economic sense.

The risks of in-house development

There are a few risks involved in the in-house approach that should be considered. For instance, what happens when your in-house geek, the author of the tool, decides to move on and leaves the company? Can someone maintain and improve the old system or are you back to square one? All the time and resources invested up to that point can be lost if you don’t find a way to continue maintaining the tool.

Different developers have different styles and different preferences for development language, technology stack and even source code management system. Professional developers (those that work for a software vendor developing software as their main occupation) usually agree on a set of technologies and practices to be used for a given project, meaning that people can be brought on board or leave the team seamlessly. Amateur developers (those that like building stuff but don't do it as their main occupation) have the same preferences and biases as the pros, but they are happy to go with them without giving them a second thought, as they don't usually have to coordinate with others. Normally, they won't invest enough time creating documentation or documenting the code because, at the end of the day, they created it from scratch and know it inside out (of course, 6 months down the line, they'll think it sucks). Unfortunately this means that the process of handing over or taking ownership of a project created in this way will be a lot more complicated.

When building your own back-end systems you have to think: who is responsible for this tool? Another conversation I’ve seen a few times:

(the original in-house author of the tool just moved on to greener pastures)
Manager: Hey, you like coding, will you take responsibility for this?
Optimistic geek: Sure! But it’s Ruby, I’ll rewrite the entire thing from scratch in Python and we’ll be rolling in no time!
Manager: [sigh]

If you are part of a bigger organisation that can make the long-term strategic commitment to build and maintain the tool then go ahead. If you don’t have all those resources to spare and are relying on your consultants to build and maintain back-end tools, be aware of the risks involved.

Conclusion: why the in-house approach doesn't always work

The in-house development cycle of doom:

  1. A requirement for a new back-office tool is identified.
  2. An in-house geek is nominated for the task and knocks something together.
  3. A first version of the tool is deployed and people get on with their business.
  4. Time passes, tweaks are required, suggestions are made, but something else always has priority on the creator’s agenda.
  5. Maybe after a few months, management decides to invest a few days from the creator’s time to work on a new version.

As you can imagine, this process is unlikely to yield optimum results. If building software tools is not a core competency of your business, you may be better served by letting a software development specialist help you out. Let them take care of Maintaining, Improving and Supporting it for you while you focus on delivering value to your clients.

Of course the other side of this coin is that if you decide to use a third-party tool, whoever you end up choosing has to be worthy of your trust:

  • How long have they been in business?
  • How many clients are using their solutions?
  • How responsive is their support team?

These are just some of the highlights, though; the topic is deep enough to warrant its own blog post.

tl; dr;

Going the in-house route may make sense for larger organisations with deeper pockets. They can either hire an internal development team (or outsource the work and have an internal project manager) or assign one or several in-house geeks to spend time creating and maintaining the tools. But remember: in-house != free.

Smaller teams and those starting up are usually better off with an off-the-shelf solution built by a solid vendor that is flexible and reliable. However, the solution needs to be easily extended and connected with other tools and systems to avoid vendor lock-in of your data.

Choosing an independent penetration testing firm

There's been a recent post on the [pen-test] mailing list asking for advice on the things to consider when choosing an independent penetration testing company. The original request went as follows:

I’m currently in the process of sizing up/comparing various
Penetration Testing firms, and am having a bit of trouble finding
distinguishing characteristics between them. I’ve looked at a fair
few, but they all seem to offer very similar services with little to
recommend one over another.

The thread was full of good advice. However, having first-hand experience in a number of these penetration testing firms I thought it would be a good idea to dig a bit deeper into the subject: what makes a penetration testing company great?

What are your requirements?

But first things first, do you need a penetration test? Do you know what a penetration test consists of? What would be the goal of the test if you performed one? These are really the key questions that you need to be able to answer before even considering choosing an external security partner.

Unfortunately, in-depth answers to those questions fall outside the scope of this article. Doing a bit of internet research, as well as reaching out to industry colleagues, peers and business acquaintances that are in a role similar to your own, would be a first step in the right direction.

It is not a bad idea to ask each of the vendors you evaluate to help you with the answers to those questions. It will enable you to understand their approach. Do they really have your best interest in mind? Will they make sure that you are able to define the problem and sketch your goals before jumping to their keyboards and sending you an invoice? Are they knowledgeable enough about the relationship between security and your business and the tradeoffs involved?

Contrary to what one might think, in the majority of cases security projects are performed “just because”, without a clear goal in mind:

  • We made a change in the app and policy says we need to have it pentested.
  • We deployed a new server and IT said we need to pentest it.
  • A year has passed since our last test and we have to do it again.

If you are completely lost on this and don't know what services you need or what services might be available, it may be worth getting some external help just to clarify your requirements. You can bring in an independent consultant for a couple of days to help you gain a sufficient understanding of your own requirements, so that when you go out and shop around for security partners you know what you are up against.

IT generalists vs penetration testing specialists

After correctly understanding what your problem is and what type of testing you need, the next thing to get out of the way is to decide whether you should go for a general IT contractor or a security specialist. As usual, there is no clear cut answer and it depends on your needs more than anything else.

I've worked with big integrators where the security team was virtually non-existent. Of course, that didn't prevent the business from selling security services. ‘Security consultants’ would spend their time doing IT deployments (e.g. firewall and router configurations, etc.) or coding Java until the odd security project arrived, when they would gather in a team and deliver it. This may or may not work for you, but it's worth thinking about. Do you need a lot of support in several IT areas? It would definitely be easier to establish a relationship with a single IT provider than to shop around for specialist vendors for each area.

When dealing with a generalist, make sure you understand their approach to security testing. Most of the advice given below for firms can be directly translated to the “security function” inside a bigger consultancy or integrator.

Company background

Let's assume that you have decided to go for a security testing company. What are the important factors to consider before making a decision?

Trust

As with all business decisions, trust is a very important factor when evaluating security services vendors. Can you get any verifiable references for any of these firms? If you approach a firm and make them aware that X from Y company recommended their services, you are likely to get a better deal than if you didn't. The level of service you will get will also be different, as failing or disappointing you carries the risk of upsetting the existing relationship they have with the people that referred you in the first place.

You can always ask the different vendors to put you in contact with organisations of a similar profile to yours. It's important that they are of a similar profile, otherwise their feedback might not be as valuable. If you are an SME owner in the tourism industry, a reference from the CSO of a huge high-street bank is of little value. Chances are they are pouring money into the vendor and the firm is bending over backwards to ensure the bank is fully satisfied.

You can ask for examples of similar projects they have undertaken. Don't settle for a conference call or a conversation on the subject; consultants (security or otherwise) are paid to sound good even when they aren't experts in what they are talking about. Try to push for a sanitised report (not the marketing sample) to see what a real-world deliverable looks like (more on this later).

If you can’t get any solid references or pointers through your business contacts, you’ll need to establish trust by yourself. This will take a bit of work and time but it is definitely worth the investment. Also, beware that this is a very technical service you’re shopping for. You have to be able to trust both the management team and the technical team in the firm. There are several things you can look into when trying to establish that trust.

Research and conferences

Something that you hear often when shopping around for penetration testing providers is that the company “should present at conferences”. This in principle sounds like a reasonable idea: if the team is on top of their game, they will be performing cutting-edge research that will be of interest to the security industry, which will earn them a spot at the security conferences. However, the truth is that with an ever-growing number of security conferences every year, which in turn have an ever-growing number of different tracks running in parallel, not all conference speaking slots are created equal. To give you an example, this month there are at least six of them (not counting BSides London that we are sponsoring ;)).

When evaluating security conference presence, it is important to analyse the contents that were presented. Was it really research, or does the company employ a well-known industry expert that is regularly invited to speak at conferences to give their opinion on the state of the art? Does the research have sufficient breadth and depth, or was it put together in a rush to have it in time for the conference? Was it relevant to your business? For instance, imagine you need to get your SAP deployment tested. Even if a well-known company has someone finding amazing bugs in some cutting-edge technology like NFC, you may be better served by a lesser-known company presenting on a SAP testing methodology or on the SAP testing toolset they have created over the last few years doing this type of testing.

Something similar could be said about published advisories: the fact that a company has hundreds of published advisories may or may not be relevant to your needs. Are the advisories in technologies your company uses? If all their advisories are on Microsoft-related technologies and you are a Linux/Solaris shop, that wouldn't help. This is a tricky one to assess, especially for non-security people, but it is worth being on the lookout for the “but we publish advisories!” line and asking a few follow-up questions to see if the company's background is aligned with your own needs.

Finally, is all this conference presence and research recent enough? The security industry changes quickly and, even though security specialists are fairly loyal to their employers, they move on from time to time. Double-check your facts to ensure that the research the vendor is presenting as proof of competence is recent and that the authors are still with the company. The same could be said of books, courses or tools that have been written “by company members”. Verify they are still around to help your company, and if you find out they are not, at least call their bluff to see how your point of contact reacts. The savvier you look in their eyes the better 🙂

The legalese

This falls a bit outside the scope of this post in the sense that it has nothing to do with the firm's technical competence. However, it is essential you consider these points as part of your due diligence process:

  • Does the company carry sufficient insurance and reasonable legal agreements?
  • Are there any NDA terms that you need to discuss with them?
  • Does the firm hold any relevant certifications that your company might care about (e.g. ISO 27001)?

Their approach to testing

After covering the basics of the company’s background, the next thing to focus on is their approach to testing.

There is a lot of solid advice on this subject in this 2007 post by Chris Eng at the Veracode blog. I'll include a few references to it here, but please go and read it now; it's well worth the time.

For instance, Chris recommends asking vendors under what circumstances they would advise a customer to bear the risk of a vulnerability. If they can't give a good example of this, he continues, you might be dealing with someone who views security in a vacuum and doesn't consider other business factors when framing recommendations. This hits the nail on the head. Your vendor's approach needs to be aligned with your business goals, otherwise the return on your investment will be very poor. This type of question should be asked of the people that will be directly involved in the technical delivery of your projects, not of your sales person or account manager. At the very least, you should have a conversation around this with the head of the pentest practice (or technical director).

Team lottery

When working with a technical consultancy, the bigger it gets, the bigger the risk of being affected by the “team lottery”: the variation in service you will notice depending on who gets assigned to deliver your projects. There are two factors that can minimise the risk of the team lottery: the company's workflow/methodology and the overall composition of the team.

The team

I want to open this section with a quote from Avoid wasting money on penetration testing that makes a great point:

Finally, remember that companies don’t perform penetration tests, people do. So no matter which company you go to, it always boils down to the person you have working on your account.

It is key to cut through the sales layer and try to reach the technical director or pentest practice leader. If you are going to spend any significant amount of money, I'd push even harder (at least for the first engagement, and every now and then afterwards) and request a conversation with the testers assigned to your project, or at the very least request their CVs/bios. Do they have experience working under your requirements? Does their general work experience make you comfortable (e.g. someone that just started their pentesting career may not be the best fit to test your critical AS/400 mainframe)? If in doubt, request a conversation or find out if someone else can be assigned to your project. Scheduling is very fluid in pentest firms and they should be able to accommodate such requests. The goal of this exercise is to minimise the team lottery by being vigilant and pushing back.

The firm's size is also a factor in this equation. As Chris puts it, the bigger the consulting organisation gets, the more likely the consultants will be generalists as opposed to specialists. This may or may not be an issue for you; depending on your requirements, your needs may be better served by a generalist. On the one hand, you don't want a reverse engineer that specialises in subverting DRM libraries for embedded systems running your external infrastructure pentest; on the other, you don't want a generalist looking at your DRM library. Again, this goes back to square one: knowing your requirements.

Another way to try to avoid the lottery is to go for a fairly small team where you know each person is well worth their salt and you will get top-shelf service every time. However, this isn't easy to find (or evaluate), and depending on your own firm's size you may need to use a bigger vendor (smaller firms can't usually accommodate too many projects or multiple concurrent projects for a single client). Even though they are not very common, such specialist boutiques exist and, depending on your situation, size and approach, they could be a great fit.

Finally, another interesting subject is figuring out whether the company subcontracts any work (to other firms or to freelancers). Don't get me wrong, some of the finest testers I've worked with wouldn't trade freelancing for any job in the world. However, when third parties are involved you have to double-check the situation with the firm's legal coverage (e.g. liability insurance), and the due diligence you performed on the main team's technical leadership and members should be extended to any third parties and contractors. Moreover, subcontracting introduces additional challenges in the collaboration and methodology department, which, as we will see in the next section, is not free of complications.

Workflow, tools and methodology

Even if they have a bunch of great people in the team, there are still some important things to consider about the firm’s methodology and processes.

The first one is the testing methodologies the company has for the different types of engagements that will be relevant to your company (e.g. it is of no use to you if the company excels at wireless assessments when you just need a code review). As discussed in Using testing methodologies to ensure consistent project delivery, creating and maintaining a high-quality testing methodology is not without its challenges, and the bigger the penetration testing firm, the more important their methodology becomes.

There are a number of industry bodies that provide baseline testing methodologies including:

Be advised that the fact that your point of contact is aware of some of these organisations does not mean that the team assigned to your engagement will follow their methodology (or any other methodology for that matter). Have a conversation with the technical director about the methodologies used by the team, and later on have the same conversation with the team members assigned to your project. Protip: if you get different responses from the technical director and the team members, or different responses from different team members, chances are the firm is not seriously following any defined testing methodology. For example, if the technical director mentions OSSTM and CREST, the team leader mentions OWASP and another team member says he mainly relies on his years of experience, that should be a red flag.

Another key part of the firm's workflow to consider is whether engagements are typically run by a single person or routinely involve several testers. I've already discussed the importance of collaboration in the past: having multiple testers on your project ensures that a wide range of skills and expertise is brought to bear against your systems, which maximises your chances of uncovering most of the problems.

If multiple testers are going to be involved in your assessment, how does the team coordinate their efforts to ensure no time is wasted and all points in the methodology are covered? If your test team is on the same page and has the right collaboration tools, no time will be wasted, tasks will be split efficiently among the available team members and all points in the methodology will be covered. If, on the other hand, the company does not have the tools or processes in place to ensure seamless collaboration and task splitting, some of the time allocated to your project will be wasted, and some areas of the methodology may remain unexplored while the team spends time trying to manage the collaboration overhead.

The penetration testing report

In the majority of cases, when you engage a penetration testing firm, the final deliverable you receive is a security report. Before making your decision and choosing a vendor, it is important that you are provided with a sample report by each prospect.

The report needs to be able to stand on its own, providing comprehensive information about the project: from a description of the scope, to a high-level, my-CEO-would-understand-this-language executive summary of the results and a detailed list of findings. It should also provide remediation advice and any supporting information required both to validate the work performed by the team (does it look like they attained sufficient coverage?) and to verify that issues have been successfully mitigated after the remediation work is performed.

Whilst some of the report sections have to be very technical and full of proof-of-concept code, requests or tool output, the report also needs to present the results of the engagement in the context of your business. Sure, you found three Highs, seventeen Mediums and twenty Lows, but what does that mean for my business? Should I get the team to stop doing what they are doing and fix all the issues? Some of them? None of them? Not all findings are created equal, and some testers get carried away by the technical details or the technical mastery required to find and exploit the issues and forget about presenting them in a context that matters to your business. In general, the more experienced the tester, the more emphasis will be put on the business context around the findings uncovered (of course “experience” is not a synonym of “age”).

As a result, and to try to avoid the team lottery mentioned above, in an ideal world you would be provided with a sanitised report written by the same person that will be writing your own deliverable. This may not be practical in every instance, but if you are going to engage in a mid-size or larger assessment, I think it is reasonable to push for this sort of proof to ensure that the final document you receive is legible, valuable to your business and of an overall high-quality standard.

tl; dr;

  • Your requirements: to get the best value for your investment you need to know what you need help with. Is it a pentest? Just a VA? Or help with some basic security awareness training for your development team?
  • Trust: can someone recommend a trustworthy security vendor to you? If not, then for each prospective partner try to figure out the firm's background. Have they worked with clients in your industry? Are they interested in your business? Do they perform any research in areas that are relevant to you?
  • Their approach: who will be delivering your assessment? Do they understand your business and motivations? What is their workflow like? Do they have a process in place to ensure consistent, high-quality results every time?

There are a lot of moving parts in this process, and not all of them will apply to every vendor and every company looking for a penetration testing provider.

Here at Security Roots we can't help you with your security testing needs (we will stick to doing what we know best), but hopefully you are now better equipped to consider all the pros and cons and some of the gotchas involved in deciding which security firm you should trust with your business.

Using a collaboration tool: why being on the same page matters

In this post I want to expand on the ideas discussed in The importance of collaboration during security testing, focusing not so much on the splitting of tasks and work streams (that's the subject of another post) but on the magic that happens when all team members are on the same page, sharing a clear picture of what is going on, and on the role played by your collaboration tool.

Security testing often benefits from multiple people looking at the same target. The problem with one-person assessments is that, even if you follow a testing methodology, you may miss stuff. This is because of the way you approach testing: following the same patterns, looking for certain telltale signals from the environment, you bring your own background and intuition to the test. A fellow auditor would bring their own set of patterns and predefined expectations. Combining the two approaches almost always produces interesting results.

As a rule of thumb, the more people trying to uncover issues, the better. Of course there is a limit to this rule (e.g. in a large enough test team, some testers would try to hide in the crowd and not pull their weight), but again that's a subject for another post.

In an ideal world

In an ideal world, everyone in the team would be in the same room throughout the duration of the test. They'd be talking to each other, looking over each other's shoulder when something interesting comes up and writing down everything they find, as they find it, on a whiteboard.

Both the economics and logistics of real-world testing make this scenario highly unlikely except when some very specific circumstances are met. Testers are often based around the country (or the world), clients can't afford the investment of putting together a larger testing team when a smaller one could get the job done, etc. Only in special circumstances, typically when the reputation of either the penetration testing firm (e.g. PCI ASV accreditation) or the client (e.g. a new product launch) is at stake, are the conditions met for this kind of effort. A strike team is put together to tear the system apart and testers and target are locked in a room (rules of Thunderdome apply).

The one man test

More often than not, security tests are performed by a single auditor. This is fine, provided the auditor has the right background for the job.

Even in this scenario it is fairly easy for things to slip through the cracks. You are focussed on investigating a promising issue, then you notice a weird behaviour, and if you don't make a note there and then to look into it later, you will forget about it. The weird behaviour doesn't get investigated. The test finishes and you even forget that you noticed it in the first place. Everybody loses.

Note-taking is a crucial skill. Noticing minor issues, adding them to the queue, triaging, and back to square one. Of course all this, while you follow a suitable testing methodology. The devil is in the details, and noticing the stuff that is not covered by the standard methodology is often the key to unlocking some of the more interesting bugs.

I would argue that even one-man teams would benefit from using a collaboration tool that lets them keep the big picture of the engagement (e.g. scope, progress, methodology, notes, findings, attachments, etc.). But in this case, I would even settle for a pen and a notebook. Just make sure nothing slips through the cracks! Of course, you’ll also need a cross-cut shredder to dispose of the paper once you are done 😉

Nevertheless, using a collaboration tool would enable you to share your interim findings with other stakeholders (account manager, client POC, etc.) and possibly reduce your reporting time.

The ever changing team

On the opposite side of the spectrum you have ‘fluid team‘ tests. We've all been there. On day one, you've got 2 testers that are going to be testing for 2 weeks; then in the afternoon, client requirements change, and now you have 3 testers working for 1 week. On day two, they change again and it's back to 2 testers for 2 weeks, but your original team-mate has been pulled to perform some specialist testing only he's capable of delivering. With every change in scope and team, you have to make sure everyone is brought to the same page or you won't be able to make any progress.

If you're keeping track of the project via email, you've got a problem. Every time a new tester joins the team, you've got to forward all the scoping emails, plus all the “I've found X” emails. Every time someone leaves the team, you have to chase them to send more emails with their latest findings. For the team leader, this is a waste of valuable time that he can't spend testing.

The alternative, using a collaboration tool, makes things a lot simpler. You receive scoping information and add it to the project. A new tester joins the team? They check the project information in the tool to learn about the scope. Everyone adds their findings as they go along. Suddenly a team member becomes unavailable? No problem, all their findings are already in the tool. A new team member joins halfway through? They check the project page to get up to speed in no time, go through all the issues covered so far, check the methodology to find out what remains to be done, roll up their sleeves and start working.

The report

I have been arguing for the use of a collaboration tool during the engagement, to ensure everyone is on the same page. If everything has developed according to plan, the scoping information was available in the tool at the beginning of the test and everyone has been feeding in their notes and findings as they went along. Now the test is over and it's time to write the report. Whoever has to write it knows that all the information is in a single place: everything that's needed (issues, evidence, screenshots, tool output) can be found there. If our report writer is savvy enough, he will have been keeping an eye on the project page to ensure that the information for all the issues found by each team member was complete, every i was dotted and every t was crossed. And all this can happen before the reporting time even starts.

Consider for a moment the alternative. There is no collaboration tool, progress is shared via email (e.g. “Hey guys, look what I've found!”) and each member of the team keeps a notes.txt file on their laptop. On the last day of the engagement, the report writer receives the notes from each tester: plain text files, Word documents, .zip files with text and screenshots, etc. A significant amount of time will be wasted collating results. Even if everyone provided all the information, there is still a need to re-format and re-style everything for the final report. If someone missed something, or if further evidence or details are required, it is almost certain that you won't be able to get them: the system is firewalled off again, the test accounts no longer work, the person that found the issue in the first place is now doing a gig for the government in a bunker somewhere with zero connectivity, etc.

The amount of work required to feed a collaboration tool as you go along, with complete information about the issues you uncover, is insignificant compared to the task of manually collating the results of N testers (remember the ‘fluid team‘ problem?), reformatting them and chasing around the missing bits and pieces.

Defining the requirements for a collaboration tool

We’ve covered a lot of ground in this post. Hopefully I’ve managed to highlight some of the merits of using a collaboration tool. The benefits that we would like to see in our solution would be:

  • Effective sharing: keep things organised, provide a big picture overview of the project: scope, coverage/progress, findings, notes, attachments, etc.
  • Flexible: you need to be able to extend the tool, adapting it to your needs and to the other tools and systems in your environment. “Silver bullet” solutions that pretend to do everything for everyone out of the box most likely won't fit your needs.
  • Capture all the data needed for the report. If the solution only lets you capture some of the information you’ll need for the report, you will be adding complexity to the workflow. Get all the information at once, while it’s fresh in the tester’s mind, add it to the solution, and forget about it until the report is due.
  • Ideally it should have report generation capabilities. Customisable report templates and the possibility of editing the report yourself after it is generated (e.g. Word vs. PDF) are also a plus.
  • Easy to adopt. To disrupt the testers' workflow as little as possible, something that is easy to use and cross-platform will go a long way towards adoption.

These are some of the guiding principles that we followed when we created and open sourced the Dradis Framework back in the day. More than 24,000 downloads later, and after Dradis has been included in the BackTrack distro and featured in books like Grey Hat Hacking and Advanced Penetration Testing for Highly-Secure Environments, it looks like we were onto something.

These days, we continue to work hard to help our users collaborate more effectively and our clients to be more competitive.

Every manager or senior team member that tries to push for a collaboration tool in an environment where none is being used is bound to face a degree of push back. This post provides some of the counter-arguments that can be used to fight that push back; however, fighting the push back is a complicated subject that calls for an entire post of its own.

Mapping external tool output to fit your reporting needs

One of the challenges usually faced by information security teams during their day-to-day operations is trying to make sense of the diverse output produced by the different tools they need to use. Mapping external tool output into a format that is useful to you is no small challenge. This is one of the reasons we sometimes hear that well-rounded security professionals should have a bit of sysadmin and coding experience, so they can quickly knock together a parser script or concatenate a few greps and awks to parse tool output and produce information that is relevant to the task at hand.
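
To make that concrete, here is a minimal sketch of that kind of throwaway parser in Python. It assumes the standard Nessus v2 XML export (a .nessus file) and is only an illustration of the idea, not a replacement for a proper import plugin:

```python
#!/usr/bin/env python3
# Quick-and-dirty .nessus parser: emit one line per finding so the output
# can be piped into grep/sort/uniq. Assumes the Nessus v2 XML export format.
import sys
import xml.etree.ElementTree as ET


def findings(path):
    # In a .nessus v2 file, each <ReportItem> under a <ReportHost> is one finding.
    tree = ET.parse(path)
    for host in tree.iter("ReportHost"):
        for item in host.iter("ReportItem"):
            yield {
                "host": host.get("name", ""),
                "port": item.get("port", ""),
                "severity": item.get("severity", ""),
                "title": item.get("pluginName", ""),
            }


if __name__ == "__main__":
    for finding in findings(sys.argv[1]):
        print("\t".join(finding[k] for k in ("severity", "host", "port", "title")))
```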

The number of security tools is growing every year, and this is great. However, each tool provides its output in a slightly different format, using slightly different labels (not so great). Even tools in the same space, like Nessus, Nexpose or Qualys, can’t agree on a common nomenclature for their findings (e.g. Title vs. Name vs. Plugin Name). It makes sense as part of their commercial strategy to maintain a degree of differentiation by structuring the information in different ways. But that doesn’t help you.

There are several sides to this tool diversity problem: on the one hand you need to be able to make sense of all this heterogeneous output; on the other hand you need to collate it, remove duplicates and adjust the information to produce a report that is valuable and accurate. You can do this by hand, and repeat it for every project and tool combination, or you can automate it.
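
As a rough illustration of the collation side, and assuming each finding has already been reduced to a plain dict with at least “title” and “host” keys (whichever tool it came from), removing duplicates can be as simple as grouping by a normalised title:

```python
from collections import defaultdict


def collate(findings):
    """Merge findings that share a title (ignoring case and extra whitespace),
    keeping one entry per issue plus the list of affected hosts.
    Sketch only: assumes each finding is a dict with 'title' and 'host' keys."""
    hosts = defaultdict(set)
    first_seen = {}
    for finding in findings:
        key = " ".join(finding["title"].lower().split())
        hosts[key].add(finding["host"])
        first_seen.setdefault(key, finding)  # keep the first set of details we saw
    for key, affected in hosts.items():
        issue = dict(first_seen[key])
        issue["hosts"] = sorted(affected)
        yield issue
```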

This type of mapping can be useful in a number of different scenarios.

The full fledged pentest or webapp assessment

In a full pentest or webapp assessment, you will be running a bunch of automated scans to complement your manual testing efforts.

Being able to upload tool output and have it converted to the right format for the final report is going to save you a lot of time. On day one you’d kick off a bunch of scans and forget about them. You’d then start manually testing the target system, following your own testing methodology. Ideally you’ll be adding your notes for the different issues and preparing the final report as you go along.

Once the scans have completed you can quickly process the output produced by each tool and map it to the nomenclature that you need in your report. Discard the false positives and add a note to make sure you investigate any real positives in detail.

The vulnerability assessment (VA)

Not every client needs a full pentest every time. Take PCI compliance: there is a requirement to run quarterly VA scans. Most likely the client won’t be able to afford a pentest every quarter, but by reducing the scope of the testing to an automated vulnerability assessment, it is possible to achieve a degree of assurance without blowing up the budget.

In these less sophisticated vulnerability assessment jobs, you often need to run some scanners, triage false positives and produce a branded report. The goal is to reduce the time and effort it takes you to deliver the full assessment (i.e. scanning + reporting) so you can offer the service at a competitive rate to your clients.

If you had an automated tool that let you map between the output produced by the different scanners you use and the format your final report is going to be in, you could save yourself a lot of time. In this case there is no need for any manual testing (apart from what is required to discard false positives):

  1. Get scanner output
  2. Map to your own format / nomenclature
  3. Produce a branded deliverable

The Plugin Manager

It looks like either of the scenarios described above could benefit from some automation. This is why we introduced the Plugin Manager in Dradis Pro, a module that you can use to map external tool output into the format you need.

As usual there are many different ways to tackle the problem. We could define our own “schema” for the information and hard-code mappings between the different tools and our own nomenclature. Unfortunately, as perfectly captured by xkcd 927, nothing guarantees that our schema is going to be useful to everybody.

What we really needed was a way to let our users define their own schemas and map from external tool output to the schema they had defined. Company A needs to use Title, Risk, Description and Recommendation? That’s fine, you can do it. Company B uses Issue, Impact, Probability, Description and Mitigation? Yes, we support that. Once you have agreed on your own representation for the information, you can use the Plugin Manager to map between Nessus output and your own format, or between Qualys or Nexpose output and yours. That is a lot of power right there. This is how it works:

Mapping external tool output with the Plugin Manager: from sample to template to result

First, for each of the tools we provide you with a sample of the output the tool produces. Then you define a template to map between the different fields provided by the tool and those that you need for your deliverables. In the example we are creating a template for Nessus issues: we want to extract the Plugin name, Description and Solution fields from Nessus and map them to Title, Description and Mitigation respectively. The template builder also has a live preview panel that lets you see the end result of your mapping, to make sure everything is working as expected.
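
To illustrate the underlying idea in plain Python (this is not the Plugin Manager’s own template format, and the field names are only an example), the mapping boils down to a small translation table applied to every parsed finding:

```python
# Example translation table: Nessus fields on the left, our nomenclature on the right.
NESSUS_TO_OURS = {
    "pluginName": "Title",          # Nessus "Plugin name" -> our "Title"
    "description": "Description",   # stays the same
    "solution": "Mitigation",       # Nessus "Solution"    -> our "Mitigation"
}


def remap(report_item):
    """Given one finding already parsed into a dict of Nessus fields,
    return the same finding under our own field names."""
    return {ours: report_item.get(theirs, "") for theirs, ours in NESSUS_TO_OURS.items()}
```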

As you can see, this approach requires a bit of upfront investment (i.e. you have to create the mappings for the different tools you are going to use) but in return it gives you a lot of flexibility and saves you a significant amount of time. Mapping external tool output to fit your reporting needs has never been easier.

Conclusion

With the growing number of tools that have made their way into the security tester’s arsenal, making sense of the different output formats in an effective way is becoming a crucial skill. If adding a new tool to your methodology means you have to spend more time manually mapping its output or collating information, you’re limiting yourself. Finding a way to map between the different tool outputs and the format you need for your deliverables is a worthwhile investment.

Using testing methodologies to ensure consistent project delivery

It doesn’t matter if you are a freelancer or the Technical Director of a big team: consistency needs to be one of the pillars of your strategy. You need to follow a set of testing methodologies.

But what does consistency mean in the context of security project management? That all projects are delivered to the same high quality standard. Let me repeat that again:

Consistency means that all projects are delivered to the same high quality standard

Even though that sounds like a simple goal, there are a few parts to it:

  • All projects: this means for all of your clients, all the time. It shouldn’t matter if the project team was composed of less experienced people or if this is the 100th test you’re running this year for the same client. All projects matter, and nothing will reflect worse on your brand than one of your clients spotting inconsistencies in your approach.
  • The same standard: as soon as you have more than one person in the team, they will have different levels of skill, expertise and ability for each type of engagement. Your goal is to ensure that the process of testing is repeatable enough that each person knows the steps that must be taken in each project type. There are plenty of sources that you can base your own testing methodology upon, including the Open Source Security Testing Methodology Manual (OSSTMM) or the OWASP Testing Guide (for webapps).
  • High quality: this is not as obvious as it seems. Nobody would set out to create and use a low-quality methodology, but in order for a methodology to remain useful you need to ensure it is reviewed and updated periodically. You should keep an eye on the security conference calendar (also a CFP list) and a few industry mailing lists throughout the year and update your methodologies accordingly.

So how do you go about accomplishing these goals?

Building the testing methodology

Store your methodology in a file

We’ve seen this time and again. At some point someone decides that it is time to create or update all the testing methodologies in the organization and time is allocated to create a bunch of Word documents containing the methodologies.

Pros:

  • Easy to get the work done
  • Easy to assign the task of building the methodology
  • Backups are managed by your file sharing solution

Cons:

  • Difficult to keep methodologies up to date.
  • Difficult to connect to other tools
  • Where is the latest version of the document?
  • How do you know when a new version is available?
  • How does a new member of the team learn about the location of all the methodologies?
  • How do you prevent different testers/teams from using different versions of the document?

Use a wiki

The next alternative is to store your methodology in a wiki.

Pros:

  • Easy to get started
  • Easy to update content
  • Easy to find the latest version of the methodology
  • Easier to connect to other tools

Cons:

  • Wikis have a tendency to grow uncontrollably and become messy.
  • You need to agree on a template for your methodologies, otherwise all of them will have a slightly different structure.
  • It is somewhat difficult to know everything that’s in the wiki. Keeping it in good shape requires constant care. For instance, adding content requires adding references to it in index pages (sometimes in multiple index pages) and categorizing each page so it is easy to find.
  • There is a small overhead for managing the server / wiki software (updates, backups, maintenance, etc.).

Use a tool to manage your testing methodologies

The third option is to use a testing methodology management tool like VulnDB, or something you create yourself (a word of warning: creating your own tools will not always save you time or money).

Pros:

  • Unlike wikis, these are purpose-built tools with the goal of managing testing methodologies in mind: information is well structured.
  • Easy to update content
  • Easy to find the latest version of the methodology
  • Easiest to connect to other tools
  • There is little overhead involved (if using a 3rd party)

Cons:

  • You don’t have absolute control over them (if using a 3rd party).
  • With any custom / purpose-built system, there is always a learning curve.
  • There is strategic risk involved (if using a 3rd party). Can we trust these guys? Will they be in business tomorrow?

Using the testing methodology

Once you have decided on the best way to store and manage your testing methodologies, the next question to address is: how do you make the process of using them painless enough that you know they will be used every time?

Intellectually we understand that all the steps in our methodology should be performed every time. However, unless there is a convenient way for us to do so, we may end up skipping steps or ignoring the methodology altogether, trusting our good old experience and intuition and just getting on with the job at hand. Along the same lines, in bigger teams it is not enough to say “please, make sure everyone is using the methodologies”. Chances are you won’t have the time to verify that everyone is using them, so you just have to trust that they will.

Freelancers and technical directors alike should focus their attention on removing barriers to adoption. Make the methodologies so easy to use that you’d be wasting time by not using them.

The format in which your methodologies are stored will play a key part in the adoption process. If your methodologies are in Word documents or text files, you need to keep the methodology open while doing your testing and somehow track your progress. This could be easy if your methodologies are structured in a way that lets you start from the top and follow through. However, pentesting is usually not so linear (I like this convergent intelligence vs divergent intelligence post on the subject). As you go along you will notice things and tick off items located in different sections of the methodology.

Even if you store your methodologies in a wiki, the same problem remains. A solution to the progress-tracking problem (provided all your wiki-stored methodologies use a consistent structure) would be to create a tool that extracts the information from the wiki and presents it to the testers in a way they can use (e.g. navigate through the sections, tick off items as progress is made, etc.). Of course, this involves the overhead of creating (and maintaining) the tool. And then again, it depends on how testers are taking their project notes. If they are using something like Notepad or OneNote, they will have to use at least two different windows: one for the notes and one for following the methodology, which isn’t ideal.
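
As a very rough sketch of the kind of helper this could be (assuming a hypothetical plain-text export of the methodology where each check is a line starting with “[ ]” or “[x]”), tracking coverage takes only a few lines:

```python
#!/usr/bin/env python3
# Minimal coverage tracker for a hypothetical plain-text methodology export
# where pending checks start with "[ ]" and completed ones with "[x]".
import sys


def progress(path):
    done = total = 0
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line.startswith("[x]"):
                done += 1
                total += 1
            elif line.startswith("[ ]"):
                total += 1
    return done, total


if __name__ == "__main__":
    done, total = progress(sys.argv[1])
    if total:
        print(f"{done}/{total} checks completed ({done / total:.0%})")
    else:
        print("No checks found")
```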

In an ideal world you want your methodologies to integrate with the tool you are using for taking project notes. However, as mentioned above, if you are taking your notes with off-the-shelf note-taking applications or text editors, you are going to have a hard time integrating the two. If you are using a collaboration tool like Dradis Pro or some other purpose-built system, then things will be a lot easier: chances are these tools can be extended to connect to other external tools.

Now you are onto something.

If you (or your testers) can take notes and follow a testing methodology without having to go back and forth between different tools, it is very likely you will actually follow the testing methodology.

The importance of collaboration during security testing

Team collaboration is crucial to the success of security testing. Of course this is an age-old problem, and not at all constrained to the security industry. In any meaningful task, team members need to draw upon pieces of each other’s vision to create a cohesive idea and achieve a significant result.

You know the feeling: you check your calendar and in a few days you start a new project, but uh oh, this is a four-man team gig. Trouble ahead: a gazillion emails back and forth and no clear picture of where we are, what else we need to cover, or whether we left something out because everyone thought someone else was looking at it.

The first friction point is usually between different business units: does the technical team have everything they need to hit the ground running on the first day of the assessment? Getting your act together as a security services vendor is far from trivial and requires some work. I’ll write about it soon.

Anyway, back to the test proper. In order for the total results to be greater than the sum of their parts, something needs to happen. It is not good enough that each team member is thorough, technically excellent and organised. Information needs to be shared: a glitch in part A of the system noticed by one tester can be exploitable from part F, which is being looked at by a different tester. If each person works in isolation, this magic won’t happen.

As a team member, how are you solving this problem? How are you making sure that everyone else has a clear picture of what you’ve uncovered so far so they can build upon your findings? And conversely, how are you building upon your team mates’ findings to improve the overall results of the team?

If you are the technical director or founder of a project-based organisation, are you enabling your team to collaborate efficiently? Is the way in which they collaborate formalised or is it left to each tester and team to decide? If they are not sharing the information they’ve got effectively, are your clients getting the most out of your excellent team?

You gotta commit

This answer from Bill Murray really hits the mark:

Bill: You gotta commit. You’ve gotta go out there and improvise and you’ve gotta be completely unafraid to die. You’ve got to be able to take a chance to die. And you have to die lots. You have to die all the time. You’re goin’ out there with just a whisper of an idea. The fear will make you clench up. That’s the fear of dying. When you start and the first few lines don’t grab and people are going like, “What’s this? I’m not laughing and I’m not interested,” then you just put your arms out like this and open way up and that allows your stuff to go out. Otherwise it’s just stuck inside you.

Bill Murray interview in Esquire via nate

When building a product and exposing it to the world, especially if you are a small organisation like us, you have to be unafraid to die. See what works and keep improving it, see what doesn’t, remove it completely and start again.