The Place of Pen Testing In The Infosec Strategy

The subject of network penetration testing, as distinct from application security testing, has been given petabytes of coverage since the late 90s, but in terms of how businesses approach network penetration testing, there are still severe shortcomings in the return on investment.

A pre-qualifier to save the reader some time: if your concern in security is purely compliance, you need not read on.

Going back to the birth of network penetration testing as a monetized service, the industry has gone through a transition: good ground-level skills but poor management skills in the mid to late 90s, to just, well…poor everything. This is in no way a reflection on the individuals involved. For the analysts doing the testing, modern testing methodology and conditions are simply not conducive to learning deep analytical skills. For managers, the industry as a whole hasn’t identified the need to acquire people who have “graduated” from tech-centric infosec backgrounds. The industry is still young and still making mistakes.

One thing has remained constant through the juvenile years of the industry, and that has been poor management. The erosion of decent analytical skills from network penetration testing is ubiquitous apart from a few niche areas (and the dark side) – but this is by and large a consequence of bad management, and again is more a reflection of the herd mentality and downward momentum of the industry in general than of the individuals. Managers need a good balance of business acumen and knowledge of technical risks, but in security, most of us still think it’s OK to have managers who are heavily weighted towards the business end of the scale.

Several changes took place in service delivery around the early 2000s. One was the imposition of testing restrictions which reduced the effectiveness of testing. When the analysts explained the negative impact of the restrictions (such as limited testing IP ranges, limited use of exploits on production systems, and fixed source IP ranges), the message was either misunderstood by their managers or mis-communicated to the clients who were imposing the restrictions. So the restrictions took hold. A penetration test that is so heavily restricted cannot come even close to a simulated attack, or even a base-level test.

The other factor was improved firewall configurations – the one major aspect of network security that genuinely did improve from the mid 90s until today. Better firewall configurations meant fewer attack channels, but the testing restrictions had a larger impact on the perceived value of the remote testing service. Improved firewalls may have partly been a result of the earlier penetration tests, but the restrictions turned the testing engagements into an unfair fight.

There were wider forces at work in the security world in the early 2000s which also contributed to the loss of quality from penetration testing delivery, but these are beyond the scope of this article. For all intents and purposes, penetration testing became such a low-quality affair that clients stopped paying for it unless they were driven by regulations to perform periodic tests of their perimeter “by an independent third party” – and the situation that arose was one where clients cared not a jot about quality. This lack of interest was passed on to service providers, who in some cases actually reprimanded analysts for trying to be, well…analytical. Reason? To be analytical is to slow down delivery, and slower delivery means lower profits. Service providers were now a production line for poor-quality penetration tests.

So I have explained enough about the problems and how they came to be. To be fair, I am not the only one who has identified these issues; it’s just that there aren’t many of us around these days who were pen testers back in the 90s and who are willing to put pen to paper on these issues.

I think it should be clear by now that a penetration test with major restrictions applied has only the value that comes from passing the audit. Apart from that? It’s a port scan. Anything else? Not in most cases. Automated tools are used heavily, and tools such as vulnerability scanners never were more than glorified port scanners anyway. This is not because the vendors have done a poor job (although in some cases they have); it comes from the nature of remote unauthenticated vulnerability assessment – it’s almost impossible to deduce anything about the target beyond what port scanning and a few grabbed service banners reveal.
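To make that limitation concrete, below is a minimal Python sketch of roughly what a remote unauthenticated assessment has to work with: open ports and whatever banner a service volunteers. The target address and port list are illustrative placeholders, not anything taken from a real engagement.

    # Minimal sketch: roughly what remote unauthenticated assessment yields --
    # open ports plus whatever banner the service volunteers. Target and ports
    # are illustrative placeholders only.
    import socket

    TARGET = "192.0.2.10"          # hypothetical target (TEST-NET-1 address)
    PORTS = [21, 22, 25, 80, 443]  # a handful of common service ports

    for port in PORTS:
        try:
            with socket.create_connection((TARGET, port), timeout=3) as s:
                s.settimeout(3)
                try:
                    banner = s.recv(256).decode(errors="replace").strip()
                except socket.timeout:
                    banner = "(no banner volunteered)"
                print(f"{port}/tcp open  {banner or '(empty banner)'}")
        except OSError:
            print(f"{port}/tcp closed or filtered")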

But…perhaps with the spate of incidents that has been reported as a 2010-onwards phenomenon (though it has really been prevalent all through the 2000s), there might be some interest in passing the audit PLUS getting something else in return for the investment. For this discussion we need to assume utopian conditions. Anything other than unrestricted testing (which also includes the use of zero-days – another long topic which I’ll side-step for now), delivered by highly skilled testers (with hacker-like skills but not necessarily Hackers), will always, without fail, be a waste of resources.

The key here is really the level of knowledge of internal IT and security staff at the target network under testing. They realistically have to know everything about their network – every nook and cranny, every router, firewall, application, OS, and how they’re all connected. A penetration test should never be used as a substitute for this knowledge. Typical testing engagements, in my experience, are carried out by 3 to 4 analysts over a maximum of two weeks. This isn’t enough time for an outside party to teach target staff all they need to know about their own private network. Indeed, in most cases, one thousand such tests would be insufficient.

In the scenario where both client staff and testers are sufficiently skilled up, a penetration test has at least the potential of delivering good value on top of base compliance. A test under these conditions can perhaps find the slightest cracks in the armor – areas where the client’s IT and security staff may have missed something: a misconfiguration, an unauthorized change, signs of a previous incident that went undetected, a previously unknown local privilege escalation vector – and the important point is that in most cases there won’t be the white noise of findings that comes from a network riddled with huge holes. In the latter case the test also delivers value, but the results are deceptive: huge holes are uncovered, with other huge holes probably still remaining.

The perimeter has now shifted. User workstation subnets are rightly seen by many as having been owned by the bad guys, with the result that the perimeter has moved into RFC 1918 private address space. So now there can also be an emphasis on penetration testing of critical infrastructure from user workstation subnets. But again, a lack of knowledge of internal configurations and controls just won’t do. Whatever resources are devoted to having external third parties do penetration testing will have been wasted if there is little awareness of internal networks on the part of the testing subject. There at least has to be detailed awareness of the available OS and database security controls and the degree to which they have been applied. Application security is a story for another day, and one which doesn’t massively affect any of my conclusions here.

Of course there’s a gaping hole in this story. I have spoken of skill levels on the part of the penetration testing analysts and of the analysts on the side of the testing subject. But how do we know who is qualified and who is not? Well, this is the root of all of our problems today. Without this there is no trust. Without a workable accreditation structure, skilled testers can come away with no reportable findings and accordingly be labelled clowns. Believe it or not, there is a simple solution here, but it is also too wide a subject to cover here…later on that one!

Out With The New, In With The Old – OS Security Re-visited

Operating system security is radically under-appreciated, and this has been the case since the big bang of business security practices in the mid-90s. OS security, along with application security, is now the front line in the battle against hackers, but this point has not been widely realised…

Terminology

We have a lot of terms in security that have no commonly understood definition, and it seems everybody’s version of the definition is the correct one. First up, what is meant by “Operating System”? The usual meaning in most businesses’ operational sense is a bare install – in other words, the box in its shipped state, with no in-house or any other applications installed – there will [hopefully!] be nothing on the computer other than what is made available from the vendor’s install media. This is different from most university Computer Science syllabus definitions of the phrase, but we will go with the former definition.
So what of “operating system security” then? Computer operating systems are designed with configuration options, such as file system permissions, that allow administrators to apply some level of protection to information (files, databases, etc) hosted by the computer. Operating system security relates to the degree to which available controls have been applied, in the face of remote and local exploit risks.
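As a trivial but concrete illustration of what “available controls” means in practice, the Python sketch below checks and, if necessary, tightens the file system permissions on a sensitive file. The path is a hypothetical example, not a reference to any particular product.

    # Sketch of applying one OS control: restricting file system permissions on
    # a sensitive file so only its owner can read or write it. The path is a
    # hypothetical example.
    import os
    import stat

    SENSITIVE_FILE = "/etc/myapp/db_credentials.conf"  # hypothetical config file

    mode = stat.S_IMODE(os.stat(SENSITIVE_FILE).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        # Group or world bits are set: reduce to owner read/write only (0600).
        os.chmod(SENSITIVE_FILE, stat.S_IRUSR | stat.S_IWUSR)
        print(f"tightened {SENSITIVE_FILE} from {oct(mode)} to 0o600")
    else:
        print(f"{SENSITIVE_FILE} already restricted ({oct(mode)})")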

Vulnerability Assessment: Current Perceptions and Misconceptions

Mostly, when people think of vulnerability assessment, the first thing that comes to mind is network penetration testing or the use of automated scanning tools. However, to cut a long story short, neither of these two approaches gives us a useful or efficient way of assessing vulnerability in our critical infrastructure.
Penetration testing these days is mostly performed only as a requirement of auditors, and to be too analytical means to be slower – something most businesses will not tolerate. The quality of delivery is poor, and furthermore the tests are so restricted as to make them close to useless. Usually the only useful item of information to come out of these engagements is the port scan results.
The whole story on automated scanners is a long one (for a longer discussion on this matter refer to Chapter 5 of Security De-engineering) but just to summarize: unauthenticated scanning of critical infrastructure with no further analysis is a recipe for disaster. The marketing engine behind such products claims they can replace manual efforts. Such a claim suits the agenda of many in the security industry but overall it will be the security industry’s customers who will suffer – and are suffering. The expectations were set way too high with these tools. Again, the most valuable output from these tools will be the port scan results.
Further developments have been made in the way of products misleadingly labelled as “Vulnerability Management” (“management”? – vulnerability is not managed, it is only enumerated), and some of these offer authenticated scanning. While there have been some recent improvements in this area, the most important items of OS security remain unchecked by these tools. Furthermore, databases such as Oracle are given scant coverage.

Why Analyze OS Security Controls?

With regard to the question in the subsection heading, there are two categories of answer. The first goes something like “because we have perfectly fine security standards, signed off by our CEO, that tell us we need to analyze security controls”. The second type of answer is related to technical risk and the efficiency of our vulnerability management programs – and this is the subject of the remainder of this article.
I mentioned the limitations of penetration testing previously; it should not be seen as a panacea. In a scenario where internal IT staff, including security personnel, do not have intimate knowledge of the IT landscape, the two-week penetration test costing 40K USD will barely scratch the surface. A penetration test will fill the knowledge gap only slightly; the one scenario where it can be valuable is where both target staff and penetration testers are highly experienced in their field. In that case the 40K USD, 40 man-day test is used to try and spot misconfigurations that the internal staff may have missed, and this is a good use of funds in most cases. Any other scenario is unlikely to provide much value for businesses. In summary, penetration testing alone is not the answer for businesses.
We have heard a great deal about APT in recent months. APT has been credited with many of the recent high-profile attacks. Malware is released at rates faster than the anti-virus vendors can release pattern updates. Ever more ingenious malware is being created by the bucket-load. Then there is the problem of unaware users. Regardless of whatever awareness program or punishment businesses deal out, there will always be some user somewhere who clicks on some link they shouldn’t follow, and corporate policies that disable JavaScript or blacklist URLs are doomed to failure.
Overall…the perimeter has shifted. The perimeter is no longer the perimeter, if you see what I mean – it is no longer the external border firewalls or choke points. Business workstation subnets are owned by Botnet R Us.
Many of the attacks are carried out with undisclosed vulnerabilities and because they are undisclosed to the public, there is no patch available to mitigate the software vulnerability. This is the point where many analysts raise a white flag. But this is also the point where OS controls can save the day in many cases. At least we can say that thoughtful use of OS security controls can prevent a business from becoming a “low hanging fruit”.
Taking a Unix system as an example: an attacker may have remotely compromised a listening service, but they only gain the privileges of the process owner of that service. In most cases this is not a root compromise, so the attacker will need to elevate their local privileges. How do they do this? They look for bad file system permissions and anything running with root privileges. They look for anything related to the root account, such as scripts owned or writable by lower-privileged users that are configured to run from root’s cron.
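As a rough sketch of the kind of enumeration described above, the Python below looks for three classic escalation vectors from a low-privileged foothold: world-writable root-owned files, setuid-root binaries, and system crontab entries run as root whose target a non-root user could modify. The directories searched are illustrative, not an exhaustive hardening checklist.

    # Rough sketch of post-foothold enumeration on a Unix host: world-writable
    # root-owned files, setuid-root binaries, and root crontab entries whose
    # target a non-root user could tamper with. Search paths are illustrative.
    import os
    import stat

    def walk_files(top):
        for dirpath, _dirs, files in os.walk(top):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    yield path, os.lstat(path)
                except OSError:
                    continue

    def world_writable_root_files(top="/etc"):
        for path, st in walk_files(top):
            if stat.S_ISREG(st.st_mode) and st.st_uid == 0 and st.st_mode & stat.S_IWOTH:
                yield path

    def setuid_root_binaries(top="/usr/bin"):
        for path, st in walk_files(top):
            if st.st_uid == 0 and st.st_mode & stat.S_ISUID:
                yield path

    def weak_root_cron_targets(crontab="/etc/crontab"):
        # System crontab format: minute hour dom month dow user command...
        try:
            lines = open(crontab).read().splitlines()
        except OSError:
            return
        for line in lines:
            fields = line.split()
            if len(fields) >= 7 and fields[5] == "root" and fields[6].startswith("/"):
                try:
                    st = os.stat(fields[6])
                except OSError:
                    continue
                if st.st_uid != 0 or st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
                    yield fields[6]

    if __name__ == "__main__":
        for label, findings in [
            ("world-writable root-owned files under /etc", world_writable_root_files()),
            ("setuid-root binaries under /usr/bin", setuid_root_binaries()),
            ("root cron targets writable by non-root", weak_root_cron_targets()),
        ]:
            print(label + ":")
            for item in findings:
                print("  " + item)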
With effective controls on server operating systems we have the possibility to severely restrict privilege escalation opportunities, even in cases where zero-day/undisclosed vulnerabilities are used by attackers. Some services can even be “chroot jail” configured, but this requires knowledge of “under the hood” operating system controls.
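For illustration, a minimal Python sketch of the chroot jail idea follows: confine a service process to a prepared directory tree and then drop root privileges. The jail path and the unprivileged UID/GID are hypothetical, and a real jail also needs the service’s libraries, configuration and device nodes copied into the tree.

    # Minimal sketch of the "chroot jail" idea: confine the process to a prepared
    # directory tree, then drop root privileges permanently. The jail path and the
    # unprivileged UID/GID are hypothetical values.
    import os

    JAIL = "/srv/jail/myservice"   # hypothetical, pre-populated jail directory
    UNPRIV_UID = 10001             # hypothetical unprivileged service account
    UNPRIV_GID = 10001

    def enter_jail():
        os.chroot(JAIL)            # requires root privileges
        os.chdir("/")              # ensure the working directory is inside the jail
        os.setgroups([])           # drop supplementary groups
        os.setgid(UNPRIV_GID)      # drop group privileges first...
        os.setuid(UNPRIV_UID)      # ...then user privileges (irreversible)

    if __name__ == "__main__":
        enter_jail()
        # From here on, the process sees only files below JAIL and has no root rights.
        print("confined to:", os.listdir("/"))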
So think of the analysis of operating system controls as a twist on a penetration test – sort of a penetration test inside out. The approach is: “I have compromised the target; now, how would I have compromised the target?” Using a root/administrative account is vastly more efficient – a great deal more can be learned about a system in a short period of time compared with a remote penetration test.
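By way of illustration, here is a sketch of that inside-out approach: with an administrative account on the host, a handful of example OS controls are checked directly rather than probed blindly from outside. The three checks shown (shadow file permissions, empty password hashes, sshd root login) are only examples of the kind of thing an authenticated review covers; they are not a hardening standard.

    # Sketch of an "inside-out" authenticated review: run as root on the host and
    # check a few example OS controls directly. The checks are illustrative only.
    import os
    import stat

    findings = []

    # 1. /etc/shadow should not be readable by group or other.
    st = os.stat("/etc/shadow")
    if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        findings.append(f"/etc/shadow permissions too open: {oct(stat.S_IMODE(st.st_mode))}")

    # 2. No account should have an empty password hash (reading this needs root).
    with open("/etc/shadow") as shadow:
        for line in shadow:
            if ":" not in line:
                continue
            user, pwhash = line.split(":")[:2]
            if pwhash == "":
                findings.append(f"account '{user}' has an empty password")

    # 3. sshd should not permit direct root logins.
    try:
        with open("/etc/ssh/sshd_config") as cfg:
            for line in cfg:
                parts = line.split()
                if parts and parts[0].lower() == "permitrootlogin" and "yes" in parts[1:]:
                    findings.append("sshd_config permits direct root login")
    except OSError:
        pass  # no OpenSSH server configuration present

    print("\n".join(findings) if findings else "no findings from these example checks")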
I mentioned previously the new perimeter in corporate networks – the perimeter is now the critical infrastructure. Many businesses do not have internal segmentation; there is only a DMZ subnet and then a flat internal private network with no further network access control. But assuming that they do have internal network access control, we can imagine that the new perimeter is the firewall between the workstation subnets and critical servers such as database servers and so on. The internal firewall(s) make up the perimeter along with…you got it…the operating systems of the database, LDAP, AD, and other critical application hosts [penny drops].
Businesses can apportion the resources deployed in operating system security control assessment according to the criticality of the device. The criticality will depend on a number of factors, not least network architecture. In the case of a flat private network with no internal segmentation, effectively every device is a critical device, and the budget required for security is going to be much higher than in the case where segments exist at differing levels of business criticality.
Hopefully, then, the importance of operating system and database security controls has been made clearer. Thoughtful deployment of automated and manual analysis in this area is an efficient use of corporate resources with huge returns in risk mitigation. The longer-term costs of information risk management will be lower where resources are better targeted – and certainly, some of the suggestions for vulnerability assessment I have outlined in this article can help a great deal in reducing the costs of vulnerability management for businesses.