Somewhere Over The Rainbow – A Story About A Global Ubiquitous Record of All Things Incident

One of my posts from earlier in 2012 discussed the idea that CEOs are to blame for all of our problems in security. The idea that we “must have reliable actuarial data on incidents to stay relevant” is another of information security’s sacred cows that rears its head every so often, and this post covers three angles on the incidents database idea. First, the impracticalities of gathering incidents data. Second, even if we have accurate data, exactly how useful is it in formulating risk management decisions? And third, even if the data is accurate and useful, did we ever need it in the first place?

Shostack and Stewart’s New School book devotes a chapter to the absolute necessity of a global, ubiquitous incidents database. Such a thing is proclaimed by many as carrying do-or-die importance, as if we need it to prove the existence of a threat, and moreover to prove our right to exist as security professionals. Do we really need a global database to prove to the C-levels that spending on security is necessary?

There are of course some real blockers with regard to the gathering of incidents data, not least the “what the heck just happened” factor, where post-incident, logging is turned on (it was off until the incident occurred) and $300k is invested in a SIEM solution – one that really doesn’t help the business respond effectively to incidents; it just gives operations a nice vehicle with which to diagnose non-security-related problems. And of course the vendor/box-pusher consultants knew what they were doing when they configured the thing.

Anyway it really isn’t terribly constructive to talk reality with regard to this subject. We need to talk about a theoretical world, somewhere over the rainbow – one where logging is enabled, internal detection controls are well configured, and internal IT ops and security know how to respond to incidents and understand log messages. Okay, most businesses don’t even know they were hacked until a botnet command and control box is owned by some supposed good guys somewhere, but all talk of security is null and void if we acknowledge reality here. So let’s not talk reality.

So incidents data simply won’t be available in many cases – that base is now covered. But what are we trying to achieve here? The proposal is to use past incidents data to help us prove the existence of a threat when formulating decisions on risk. We receive a penetration test report that tells us that vulnerability X was exploited 4 years ago in Timbuctoo. As for the other vulnerabilities mentioned in the report – according to our accurate database, there is no record of them ever having been exploited with resulting financial damage. So we fix vulnerability X and ignore the rest – a wise strategy, especially given that our incidents database hosts accurate information with check-sums and integrity built in (well – I did say let’s not talk reality here).

But wait a minute – vulnerability X was exploited 4 years ago in a company Y in an industry sector Z. What if company Y’s network was wide open? Where is this leading? All networks are different. All businesses face different challenges. Even those in the same industry sector as each other are completely different. See how ludicrous this is getting? Business X can even have the exact same network architecture as another business Y in the same industry sector, and yet the exploitation of the same vulnerability can lead to $100K damages for business X, but $100 damages for business Y. So are we really going to use these records of financial damages to formulate our risk acceptance, transfer, or mitigation decisions? One would hope not.

As if all this isn’t bad enough: even if we ignore the hard reality of the limitations covered here, how long would it take before we have gathered enough information to call the database useful? 10 years? 20 years? One million incidents, or…? Products are generally past their shelf life in less than a decade. I did come across an IBM AIX 4.1 box (we’re talking mid-to-late 90s here) at a customer site not so long ago, but thankfully that was an isolated incident.

After all the deliberation, it emerges that security is too complex – it cannot be reduced to a simple picture à la “vulnerability X was exploited resulting in damages Y in company Z, and therefore we in Botnetz R Us need to address vulnerability X”.

All this is not to say that there is no benefit in knowing what’s going on out there in the wild. We can pick up on ongoing bigger-picture attack trends to help us prepare for what might be down the road for us. But do we really need details of past incidents to convince businesses to part with cash in risk mitigation, or moreover to validate our existence?

What we’re really concerned with here is trust. The proponents of a big data repository of incidents would have it that we need such a thing because the powers that be don’t trust us. When we propose a mitigation of a particular risk, they don’t trust our advice. Unfortunately though, we might get one success story where we consult the oracle of all incidents, magically find an incident related to the problem we are trying to fix, and use this “evidence” to convince the bosses. What about the next problem though? We can’t play the same card trick to address the next risk issue if no such problem was ever encountered (and reported) before by another company somewhere else in the world. By leaning on the history of all incidents we’re setting a dangerous precedent, and rather than enabling trust, we’re making the situation even worse.

We don’t need an incidents history record. What we need is for our customers to trust us. Our customers are C-levels, other business units, home users, in-house reps, and so on. How do we get them to trust us? What we need is a single accreditation path for security professionals, one that ties Analysts to past experience as an IT admin or developer, and ties Security Managers to past Analyst experience. With this, security managers can confidently deliver risk-related advisories to their superiors, safe in the knowledge that their message is backed up by Analysts with solid tech experience, in whose advice they are comfortable placing their trust.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 2)

In the first part of my coverage of firewalls I talked about the usefulness of firewalls, and apart from being one of the few commercial offerings to actually deliver in security, the firewall really does do a great deal for our information security posture when it’s configured well.

Some in the field have advocated that the firewall has seen its day and it’s time for the knacker’s yard, but these opinions are formed at a considerable distance from the coal face in this business. When firewalls are thought of as they are in the movies – something to be “broken through” or “punched through” – they can look useless once bad folk have compromised networks seemingly effortlessly. But one doesn’t “break through” a firewall. Your profile is assessed. If you fit a certain profile you are allowed through. If not, you absolutely shall not pass.

There have been counters to these arguments in support of firewalls, but the extent of the efficacy of well-configured firewalls has only been covered at some distance from the nuts and bolts, and so is not fully appreciated. What about segmentation for example? Are there any other security controls and products that can so indisputably be linked with cost savings? Segmentation allows us to devote more resources to more critical subnets, rather than applying blanket measures across a whole network. As a contractor with a logistics multinational in Prague, I was questioned a few times as to why I was testing all internal Linux resources on a standard-issue UK contract rate. The answer? Because they had a flat, wide-open internal network with only hot-swap redundant firewalls on the perimeter. Regional offices connecting into the data centre had frequent malware problems, yet had routable access to critical infrastructure.

Back in the late 90s and early noughties, some service providers offered a firewall assessment service, but the engagements lacked focus and direction, and then the service disappeared altogether…partly because of the lack of thought that went into preparation, and also because many in the market really did believe they had nailed firewall configuration. These engagements were delivered in a way that went something like “why do you leave these ports open?”, “because this application X needs those ports open”…and that would be the end of that, because the service providers didn’t know application X, or where its IT assets were located, or the business importance of application X. Thirty minutes into the engagement there were already “why are we here?” faces in the room.

As a roaming consultant, I would always ask to see firewall configurations as part of a wider engagement – usually an architecture workshop whiteboard session, or larger scale risk assessment. Under this guise, there is license to use firewall rulebases to tell us a great deal about the organisation, rather than querying each micro-issue.

Firewall rulebases reveal a large part of the true “face” of an organization. Political divisions are revealed, along with the old classic: opening social networks, betting sites (and such-like) only for senior management subnets, and oftentimes some interesting ports are opened only for managers’ secretaries.

Nine times out of ten, when you ask to see firewall rules, faces in the room will change from “this is a nice time-wasting meeting, but maybe I’ll learn something about security” to mild-to-severe discomfort. Discomfort – because there is no hiding place any more. Network and IT ops will often be aware that there are some shortcomings, but if we don’t see their firewall rules, they can hide and deflect the conversation in subtle ways. Firewall rulebases reveal all manner of architectural and application-related issues.

To illustrate some firewall configuration and data flow/architectural issues, here are some examples of common issues:

– Internal private resources 1-to-1 NAT’d to public IP addresses: an internal device with a private RFC 1918 address (something like 10. or 192.168. …) has been allocated a public IP address that is routable from the public Internet and clearly “visible” on the perimeter. Why is this a problem? If this device is compromised, the attacker has compromised an internal device and therefore has access to the internal network. What they “see” (can port scan) from there depends on internal network segmentation, but if they upload and run their own tools and warez on the compromised device, it won’t take long to learn a great deal about the internal network make-up. This NAT’ing problem would be a severe one for most businesses.
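To make the pattern concrete, here is a minimal sketch of what such a 1-to-1 NAT looks like in iptables terms – the addresses are purely hypothetical, and a commercial firewall will express the same thing in its own policy language:

    # Hypothetical 1-to-1 NAT: internal host 10.1.2.50 exposed as public 203.0.113.50
    iptables -t nat -A PREROUTING  -d 203.0.113.50 -j DNAT --to-destination 10.1.2.50
    iptables -t nat -A POSTROUTING -s 10.1.2.50 -j SNAT --to-source 203.0.113.50
    # Result: anything the Internet can reach on 203.0.113.50 lands directly on an
    # internal device sitting in private address space

Spot rules like these on a perimeter firewall and you have found a direct bridge from the outside world to the internal network.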

– A listening service was phased out, but the firewall still considers the port to be open: this is a problem whose severity is usually quite high, but just like everything else in security, it depends on a lot of factors. Usually, even in default configurations, firewalls “silently drop” packets that are denied. So there is no answer to a TCP SYN from a port scanner trying to fire up some small talk on a long winter evening. However, when there is no TCP service listening on a higher port (for example) but the firewall also doesn’t block access to that port, there will be a quick response to the effect of “I don’t want to talk, I don’t know how to answer you, or maybe you’re just too boring” – this is bad, but at least there’s a response. Let’s say port 10000 TCP was left unfiltered. A port scanner like nmap will report other ports as “filtered” but 10000 as “closed”. “Closed” sounds bad, but the attacker’s eyes light up when they see it…because they now have a port on which to bind their shell – a port that will be accessible remotely. If all ports other than listening services are filtered, this presents a problem for the attacker; it slows them down, and that is what we’re trying to achieve ultimately.
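For illustration, this is roughly what the difference looks like from the attacker’s side – a hypothetical nmap run against a made-up perimeter address:

    # Scan a listening service, a properly filtered port, and the forgotten port
    nmap -p 443,8443,10000 203.0.113.10

    # PORT      STATE     SERVICE
    # 443/tcp   open      https             <- intended listening service
    # 8443/tcp  filtered  https-alt         <- firewall silently drops, no response at all
    # 10000/tcp closed    snet-sensor-mgmt  <- no service, but the firewall let the probe
    #                                          through and a RST came back: a remotely
    #                                          accessible port to bind a shell to

The aim is for everything except intended services to show up as “filtered”, not “closed”.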

– Dual-homed issues: sometimes you will see internal firewalls with rules for source addresses that look out of place. For example, most of the rules are defined with 10.30.x.x and then in amidst them you see a 172.16.x.x. Uh oh. Turns out this is a source address for a dual-homed host: one NIC has an address for a subnet on one side of a firewall, plus another NIC on the other side of the firewall. So effectively the dual-homed device is bypassing firewall controls. If this device is compromised, the firewall is rendered ineffective. Nine times out of ten, this dual homing is only set up as a shortcut for admins to make their lives easier. I did see it once for a DMZ, where the internal NIC address was on the same subnet as a critical Oracle database.
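Assuming the suspect host is a Linux box you can get a shell on, confirming the suspicion takes a few seconds – two NICs sitting either side of the firewall tell the whole story (addresses hypothetical):

    # Brief interface listing - one leg on each side of the firewall
    ip -br addr
    # eth0   UP   10.30.1.20/24    <- subnet on the "inside"
    # eth1   UP   172.16.5.20/24   <- subnet on the far side of the firewall

    # The routing table shows traffic to 172.16.0.0/16 leaving via eth1,
    # never passing through the firewall at all
    ip route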

– VPN gateways in inappropriate places: VPN services should usually be listening on a perimeter firewall. This enables firewalls to control what a VPN user can “see” and cannot see once they are authenticated. Generally, the resources made available to remote users should be in a VPN DMZ – at least give it some consideration. It is surprising (or perhaps not) how often you will see VPN services on internal network devices. So on firewalls such as the inner firewall of a DMZ, you will see classic VPN TCP services permitted to pass inbound! So the VPN client authenticates and then has direct access to the internal network – a nice encrypted tunnel for syphoning off sensitive data.

Outbound Rules

Outbound filtering is often ignored, usually because the business is unaware of the nature of attacks and technical risks. Inbound filtering is usually quite decent, but it’s still the case as of 2012 that many businesses do not filter any outbound traffic – as in none whatsoever. There are several major concerns when it comes to egress:

– Good netizen: if there is no outbound filtering, your site can be broadcasting all kinds of traffic to all networks everywhere. Sometimes there is nothing malicious in this…it’s just seen as incompetence by others. But then of course there is the possibility of internal staff hacking other sites, or of your site being used as a base from which to launch other attacks – with a source IP address registered under your organisation’s ownership – and this is no small matter.

– Your own firewall can be DoS’d: border firewalls NAT outgoing traffic, with address translation from private to public space. With some malware outbreaks that involve a lot of traffic generation, the NAT pool can fill quickly and the firewall’s NAT’ing can fail to service legitimate requests. This wouldn’t happen if those packets were simply dropped.

– It will be an essential function of most malware and manual attacks to be able to dial home once “inside” the target – for botnets for example, this is essential. Plus, some publicly available exploits initiate outbound connections rather than fire up listening shells.

Generally, as with ingress, take the standard approach: start with deny-all, then figure out which internal DNS and SMTP servers need to talk to which external devices, and take the same approach with other services. Needless to say, this has to be backed by corporate security standards, and made into a living process.
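As a minimal sketch of that approach in iptables terms – the addresses are hypothetical, and I’m assuming a Linux firewall forwarding traffic for the internal network – the egress side might start out something like this:

    # Default: nothing leaves unless explicitly permitted
    iptables -P FORWARD DROP

    # Allow replies to conversations we have already permitted
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

    # The internal DNS server may query the external resolver it forwards to
    iptables -A FORWARD -s 10.1.1.53 -d 198.51.100.53 -p udp --dport 53 -j ACCEPT
    iptables -A FORWARD -s 10.1.1.53 -d 198.51.100.53 -p tcp --dport 53 -j ACCEPT

    # The internal mail relay may talk SMTP to the outside world
    iptables -A FORWARD -s 10.1.1.25 -p tcp --dport 25 -j ACCEPT

    # Log and drop everything else trying to leave
    iptables -A FORWARD -j LOG --log-prefix "EGRESS-DENIED: "
    iptables -A FORWARD -j DROP

The same shape of policy applies whatever the firewall product: deny all outbound by default, then add narrow exceptions per service and per source host.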

Some specifics on egress:

– NetBIOS broadcasts reveal a great deal about internal resources – block them. In fact, for any type of broadcast – what possible reason can there be for allowing it outside your network? There are other legacy protocols which broadcast nice information for interested parties – Cisco Discovery Protocol for example.

– Related to the previous point: be as specific as possible with subnet masks. Make these as “micro” as possible.

– There is a general principle around proxies for web access and other services: the proxy is the only device that needs access to the Internet; others can be blocked.

– DNS: Usually there will be an internal DNS server in private space which forwards queries to a public Internet DNS service. Make sure the DNS server is the only device “allowed out”. Direct connections from other devices to public Internet services should be blocked.

– SMTP: Access to mail services is important for many malware variants, or there is mail client functionality in the malware. Internal mail servers should be the only devices permitted to connect to external SMTP services.

As a final note, for those wishing to find more detail, the book I mentioned in part 1 of this diatribe, “Building Internet Firewalls” illustrates some different ways to set up services such as FTP and mail, and explains very well the principles of segregated subnets and DMZs.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 1)

In my previous article I covered OS and database security in terms of the neglect shown to this area by the information security industry. In the same vein I now take a look at another blast from the past – firewalls. The buzz topics these days are cloud, big data, APT, “cyber”* and BYOD. Firewall was a buzz topic a very long time ago, but the fact that we moved on from that buzz topic doesn’t mean we nailed it. And guess what? The newer buzz topics all depend heavily on the older ones. There is no cloud security without properly configured firewalls (and moving assets off-campus means even more thought has to be put into this area), and there shouldn’t be any BYOD if there are no firewalls between workstation subnets and critical infrastructure. Good OS/DB security, plus thoughtful firewall configs, sets the stage on which the new short-sighted strategies are played out and retrenched.

We have a lot of bleeding-edge software and hardware products in security, backed by fierce marketing engines which set unrealistic expectations, advertised with 5-gold-star ratings in infosec publications, coincidentally next to a full-page ad for the vendor. Out of all these products, the oldest delivers the most bang for our buck – the firewall. In fact the firewall is one of the few that actually gives us what we expect to get – network access control – and by and large, as a technology it’s mature and it works. At least when we buy a firewall looking for packet filtering, we get packet filtering, unlike another example where we buy a product which allegedly manages vulnerability, but doesn’t even detect vulnerability, let alone “manage” it.

Passwords, crypto, filesystem permissions – these are old concepts. The firewall arrived on the scene some considerable number of years after the aforementioned, but before some of the more recent marketing ideas such as IdM, SIEM, UTM etc. The firewall, along with anti-virus, formed the basis of the earliest corporate information security strategies.

Given the nature of TCP/IP, the step on to network access control was quite an intuitive one to take – not a bad idea! But the fact that firewalls have been around corporate networks for two decades doesn’t mean we have perfected our approach to their configuration and deployment – far from it.

What this article is not..

“I’m a firewall, I decide which packets are dropped or passed based on source and destination addresses and services”.

Let’s be clear, this article is not about which firewall is the best. New firewall, new muesli. How does one muesli differ from another? By the definition of muesli, not much, or it’s not muesli any more.

Some firewalls have exotic features – even going back 10 years, Checkpoint Firewall-1 had application layer trackers such as FTP passive mode trackers, earlier versions of which crashed the firewall if enabled – thereby introducing DoS as an innovative add-on. In most cases firewalls need to be able to track conversations and deny/pass packets based on, for example, TCP flags that don’t correspond to any known conversation – but these days they all do this. Firewalls are not so CPU-intensive, but they can be memory-intensive if conversations are being monitored and we’re being DoS’d – although being a firewall doesn’t make a node uniquely vulnerable to SYN floods and so on. The list of considerations in firewall design goes on and on, but by 2012 we have covered off most of the more important ones, and you will find the must-haves and the most useful features in any modern commercial firewall…although I wouldn’t be so sure that this covers some of the UTM all-in-one matchbox-sized offerings.
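For the sake of illustration, the conversation-tracking behaviour described above boils down to something like the following in iptables terms (a sketch only – commercial firewalls express the same idea through their own rule syntax, and the web server address is hypothetical):

    # Pass packets belonging to conversations we have already approved
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Drop packets whose TCP flags don't fit any known conversation,
    # e.g. a stray ACK or FIN with no SYN ever seen
    iptables -A FORWARD -m state --state INVALID -j DROP

    # New conversations are only allowed where a specific rule says so
    iptables -A FORWARD -p tcp --syn -d 10.1.2.80 --dport 443 -j ACCEPT
    iptables -A FORWARD -j DROP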

Matters such as throughput and bandwidth are matters for network ops in reality. Our concern in security should be more about configuration and placement.

On the matter of which firewall to use, we can go back to the basic tenet of a firewall as in the first paragraph of this subsection – sometimes it is perfectly fine to cobble together an old PC, install Linux on it, and use iptables – but probably not for a perimeter choke-point firewall that has to handle considerable throughput. Likewise, do you want the latest bright-flashing-lights, bridge-of-the-Starship-Enterprise enterprise box for the firewall which separates a 10-node development subnet from the commercial business production subnets? Again, probably not – let’s just keep an open mind. Sometimes cheap does what we need. I didn’t mention the term “open source” here because it does tend to evoke quite emotional responses – ok, well, I did mention it actually, sorry, just couldn’t help myself there. There are the usual issues with open source, such as lack of support, but apart from raw throughput, open source is absolutely fine in many cases.

Are firewalls still important?

All attack efforts will be successful given sufficient resources. What we need to do is slow down these efforts such that the resources required outweigh the potential gains from owning the network. Effective firewall configuration helps a great deal in this respect. I still meet analysts who underestimate the effect of a firewall on the security posture.

Taking the classic segregated subnet as in a DMZ-type configuration: by now most of us are aware at least that a DMZ is in most cases advisable, and most analysts can draw a DMZ network diagram on a whiteboard. But why a DMZ? Chiefly we do this to prevent direct connections from untrusted networks to our most valuable information assets. When an outsider port scans us, we want them to “see” only the services we intend the outside world to see, which usually will be the regular candidates: HTTPS, VPN, etc. So the external firewall blocks access to all services apart from those required, and more importantly, it only allows access to very specific DMZ hosts; certainly no internal addresses should be directly accessible.

Take the classic example of a DMZ web server application that connects to an internal database. Using firewalls and sensible OS and database configuration, we can create a situation where we add considerable time to an attack effort aimed at compromising the database. Having compromised the DMZ web server, port scanning should then reveal only one or two services on the internal database server, and no other IP addresses need to be visible (usually). The internal firewall limits access from the source address of the DMZ web server to only the listening database service and the IP address of the destination database server. This is a considerably more challenging situation for attackers, as compared with a scenario where the internal private IP space is fully accessible…perhaps one where DMZ servers are not segregated at all and their “real” IP addresses are private RFC 1918 addresses, NAT’d to public Internet addresses to make them routable for clients.
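As a sketch of what the inner firewall’s job amounts to in this example – the addresses and the Oracle listener port are hypothetical, and a commercial firewall would express the same rules in its own terms:

    # Allow replies to permitted conversations
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Only the DMZ web server may open connections inward, and only to the
    # database listener on the one internal database host
    iptables -A FORWARD -s 192.0.2.80 -d 10.1.5.10 -p tcp --dport 1521 \
             -m state --state NEW -j ACCEPT

    # Everything else crossing from the DMZ into private space is dropped
    iptables -A FORWARD -j DROP

One source, one destination, one port: the attacker who owns the web server is left staring at a single database listener rather than the whole internal address space.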

Firewalls are not a panacea, especially with so many zero days in circulation, but in an era where even automated attacks can lead to our most financially critical assets disappearing via the upstream link, they can, and regularly do, make all the difference.

We All “Get” Firewalls…right?

There is no judgment being passed here, but it often is the case that security departments don’t have much to offer when it comes to firewall configuration and placement. Network and IT operations teams will try perhaps a couple of times to get some direction with firewalls, but usually what comes back is a check list of “best practices” and “deny all services that are not needed”, some will even take the extraordinary measure of reminding their colleagues about the default-deny, “catch all” rule. But very few security departments will get more involved than this.

IT and network ops teams, by the year 2012 AD, are quite well versed in the wily ways of the firewall, and without any further guidance they will do a reasonable job of firewall configuration – but 9 times out of 10 there will be shortcomings. Ops peeps are rarely schooled in the art of technical risks. It’s not part of their training. If they do understand the tech risk aspects of network access control, it will have been self-taught. Even if they have attended a course by a vendor, the course will cover the usage aspects, as in navigating GUIs and so on, and little of any significance to keeping bad guys out.

Ops teams generally configure fairly robust ingress filtering, but rarely is any attention given to egress (more on that in part 2 of this offering), or to other aspects such as whether services are UDP or TCP (with the result that one or the other is left open).

Generally, up to now, there are still gaps and areas where businesses fall short in their configuration efforts, yet I am convinced that in many cases attention moved away from firewalls many years ago – as if it’s an area that we have aced and so we can move on to other things.

So where next?

I would like to bring this diatribe to a close for now, until part 2. In the interim I would also like to point budding, enthusiastic analysts, SMEs, Senior *, and Evangelists in the direction of some rather nice reads. Try out TCP/IP Illustrated, at least Volume 1. Then O’Reilly’s “Building Internet Firewalls”. The latter covers the ins and outs of network architecture and how to firewall specific commonly used application layer protocols. This is a good starting point. Also, try some hands-on demo work (sorry – this involves using command shells) with iptables – you’ll love it (I swear by this), and pay some attention to packet logging.
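On the packet-logging point, here is a tiny starter example, assuming a Linux box you are happy to experiment on (the log file path varies by distribution):

    # A chain that logs what it is about to drop - reading these log lines is a
    # surprisingly good way to learn what actually hits a network interface
    iptables -N LOGDROP
    iptables -A LOGDROP -m limit --limit 5/min -j LOG --log-prefix "DROPPED: "
    iptables -A LOGDROP -j DROP

    # Keep loopback and SSH reachable while experimenting, allow replies,
    # then log-and-drop the rest of the unsolicited inbound traffic
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -j LOGDROP

    # Watch the results
    tail -f /var/log/kern.log | grep DROPPED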

In Part 2 I will go over some of my experiences as a consultant with a roaming disposition, related to firewall configuration analysis, and I will cover some pointers related to classic misconfigurations – some of which may not be so obvious to the reader.

The Place of Pen Testing In The Infosec Strategy

The subject of network penetration testing, as distinct from application security testing, has been given petabytes of coverage since the late 90s, but in the way businesses approach network penetration testing there are still severe shortcomings in terms of return on investment.

A pre-qualifier to save the reader some time: if your concern in security is purely compliance, you need not read on.

Going back to the birth of network penetration testing as a monetized service, we have gone through a transition from good ground-level skills but poor management in the mid-to-late 90s, to just, well…poor everything. This is in no way a reflection on the individuals involved. For the analysts doing the testing, modern testing methodology and conditions are not conducive to learning deep analytical skills. For managers – the industry as a whole hasn’t identified the need to acquire managers who have “graduated” from tech-centric infosec backgrounds. The industry is still young and still making mistakes.

One thing has remained constant through the juvenile years of the industry, and that has been poor management. The erosion of decent analytical skills from network penetration testing is ubiquitous apart from a few niche areas (and the dark side) – but this is by and large a consequence of bad management, and again is more a reflection of the herd mentality and downward momentum of the industry in general than of the individuals. Managers need something like a good balance of business acumen and knowledge of technical risks, but in security, most of us still think it’s OK to have managers who are heavily weighted towards the business end of the scale.

Several changes took place in service delivery around the early 2000s. One was the imposition of testing restrictions which reduced the effectiveness of testing. When the analysts explained the negative impact of the restrictions (such as limited testing IP ranges, limited use of exploits on production systems, and fixed source IP ranges), the message was misunderstood by their managers and/or mis-communicated to the clients who were imposing the restrictions. So the restrictions took hold. A penetration test that is so heavily restricted can in no way come even close to a simulated attack, or even a base-level test.

The other factor was improved firewall configurations – one major aspect of network security that genuinely did improve from the mid 90s until today. With improved firewall configurations came fewer attack channels, but the testing restrictions had a larger impact on the perceived value of the remote testing service. Improved firewalls may have partly been a result of the earlier penetration tests, but the restrictions turned the testing engagements into an unfair fight.

There were wider forces at work in the security world in the early 2000s which also contributed to the loss of quality from penetration testing delivery, but these are beyond the scope of this article. For all intents and purposes, penetration testing became such a low-quality affair that clients stopped paying for it unless they were driven by regulations to perform periodic tests of their perimeter “by an independent third party” – and the situation that arose was one where clients cared not a jot about quality. This lack of interest was passed on to service providers, who in some cases actually reprimanded analysts for trying to be, well…analytical. Reason? To be analytical is to slow things down, and to slow things down is to reduce profits. Service providers were now a production line for poor-quality penetration tests.

So I have explained enough about the problems and how they came to be. To be fair, I am not the only one who has identified these issues; it’s just that there aren’t so many of us around these days who were pen testers back in the 90s, and who are willing to put pen to paper on these issues.

I think it should be clear by now that a penetration test with major restrictions applied has only the value that comes from passing the audit. Apart from that? It’s a port scan. Anything else? Not in most cases. Automated tools are used heavily, and vulnerability scanners never were more than glorified port scanners anyway. This is not because the vendors have done a poor job (although in some cases they have); it comes from the nature of remote unauthenticated vulnerability assessment – it’s almost impossible to deduce anything about the target aside from port scanning and grabbing a few service banners.

But…perhaps with the spate of incidents reported as being a 2010-on phenomenon (which has really been prevalent all through the 2000s), there might be some interest in passing the audit PLUS getting something else in return for the investment. For this discussion we need to assume utopian conditions. Anything other than unrestricted testing (which also includes the use of zero days – another long topic which I’ll side-step for now), delivered by highly skilled testers (with hacker-like skills but not necessarily Hackers), will always, without fail, be a waste of resources.

The key here is really in the level of knowledge of internal IT and security staff at the target network under testing. They realistically have to know everything about their network – every nook and cranny, every router, firewall, application, OS, and how they’re all connected. A penetration test should never be used to substitute this knowledge. Typical testing engagements from my experience are carried out with 3 to 4 analysts over a period of a maximum of 2 weeks. This isn’t enough time to have some outside party teach target staff all they need to know about their own private network. Indeed in most cases, one thousand such tests would be insufficient.

In the scenario where both client staff and testers are sufficiently skilled up, a penetration test has at least the potential of delivering good value on top of base compliance. A test under these conditions can then perhaps find the slightest cracks in the armor – areas where the client’s IT and security staff may have missed something – a misconfiguration, an unauthorized change, signs of a previous incident that went undetected, a previously unknown local privilege escalation vector – and the important point is that in most cases there won’t be the white noise of findings that comes from a network riddled with huge holes. Where the holes are huge, the test also delivers value, but the results are deceptive: huge holes are uncovered, with huge holes probably still remaining.

The perimeter has now shifted. User workstation subnets are rightly seen by many as having been owned by the bad guys, with the result that the perimeter has moved into RFC 1918 private address space. So now there can also be an emphasis on penetration testing of critical infrastructure from user workstation subnets. But again, lack of knowledge of internal configurations and controls just won’t do. Whatever resources are devoted to having external third parties do penetration testing will have been wasted if there is little awareness of internal networks on the part of the testing subject. There at least has to be detailed awareness of available OS and database security controls and the degree to which they have been applied. Application security – that’s a story for another day, and one which doesn’t massively affect any of my conclusions here.

Of course there’s a gaping hole in this story. I have spoken of skill levels on the part of the penetration testing analysts and the analysts on the side of the testing subject. But how do we know who is qualified and who is not? Well, this is the root of all of our problems today. Without this there is no trust. Without a workable accreditation structure, testers can fail to find any reportable findings and accordingly be labelled clowns. Believe it or not, there is a simple solution, but that also is too wide a subject to cover now….more on that one later!