Security in Virtual Machine Environments. And the planet.

This post is based on a recent article on the CIO.com site.

I have to say, when I read the title of the article, the cynic in me once again prevailed. And indeed there will be some cynicism and sarcasm in this article, so if that offends the reader, I would like to suggest other sources of information: those which do not accurately reflect the state of the information security industry. Unfortunately the truth is often accompanied by at least a degree of cynicism. Indeed, if I meet an IT professional who isn’t cynical and sarcastic, I do find it hard to trust them.

Near the end of the article there will be a quiz with a scammed prize offering, just to take the edge off the punishment of the endless “negativity” and abject non-MBA’edness.

“While organizations have been hot to virtualize their machine operations, that zeal hasn’t been transferred to their adoption of good security practices”. Well you see they’re two different things. Using VMs reduces power and physical space requirements. Note the word “physical” here – being physical, the benefits are easier to understand.

Physical implies something which takes physical form – a matter energy field. Decision makers are familiar with such energy fields. There are other examples in their lives such as tables, chairs, other people, walls, cars. Then there is information in electronic form – that’s a similar thing (also an energy field) but the hunter/gatherer in some of us doesn’t see it that way, and still as of 2013, the concept eludes many IT decision makers who have fought their way up through the ranks as a result of excellent performance in their IT careers (no – it’s not just because they have an MBA, or know the right people).

There is a concept at board level of insuring a building (another matter energy field) against damages from natural causes. But even when 80% of information assets are in electronic form, there is still a disconnect from the information. Come on chaps, we’ve been doing this for 20 years now!

Josh Corman recently tweeted “We depend on software just as much as steel and concrete, its just that software is infinitely more attack-able!”. Mr Corman felt the need to make this statement. Ok, like most other wise men in security, it was intended to boost his Klout score, but one does not achieve that by tweeting stuff that everybody already knows. I would trust someone like Mr Corman to know where the gaps are in the mental portfolios of IT decision makers.

Ok, so moving on…”Nearly half (42 percent) of the 346 administrators participating in the security vendor BeyondTrust‘s survey said they don’t use any security tools regularly as part of operating their virtual systems…”

What tools? You mean anti-virus and firewalls, or the latest heuristic HIDS box of shite? Call me business-friendly but I don’t want to see endless tools on end points, regardless of their function. So if they’re not using tools, is it not at this point good journalism to comment on what tools exactly? Personally I want to see a local firewall and the obligatory and increasingly less beneficial anti-virus (and I do not care as to where, who, whenceforth, or which one…preferably the one where the word “heuristic” is not used in the marketing drivel on the box). Now if you’re talking system hardening and utilizing built-in logging capability – great, that’s a different story, and worthy of a cuddly toy as a prize.

“Insecure practices when creating new virtual images is a systemic problem” – it is, but how many security problems can you really eradicate at build-time and be sure that the change won’t break an application or introduce some other problem? When practical IT-oriented security folk actually try to do this with skilled and experienced ops and devs, they realise that less than 50% of their policies can be implemented safely in a corporate build image. Other security changes need to be assessed on a per-application basis.

Forget VMs and clouds for a moment – 90%+ of firms are not rolling out effectively hardened build images for any platform. The information security world is still some way off with practices in the other VM field (Vulnerability Management).

“If an administrator clones a machine or rolls back a snapshot,”… “the security risks that those machines represent are bubbled up to the administrator, and they can make decisions as to whether they should be powered on, off or left in state.”

Ok, so “the security risks that those machines represent are bubbled up to the administrator”!!?? [Double-take] Really? Ok, this whole security thing really can be automated then? In that case, every platform should be installed as a VM managed under VMware vCenter with the BeyondTrust plugin. A tab that can show us our risks? There has to be a distinction between vulnerability and risk here, because they are two quite different things. No but seriously, I would want to know how those vulnerabilities are detected because to date the information security industry still doesn’t have an accurate way to do this for some platforms.

Another quote: “It’s pretty clear that virtualization has ripped up operational practices and that security lags woefully behind the operational practice of managing the virtual infrastructure,”. I would edit that down to just two words: “security” and “lags”. What with virtualized stuff being a subset of the full spectrum of play things and all.

“Making matters worse is that traditional security tools don’t work very well in virtual environments”. In this case I would keep just five of the words. A Kenwood Food Mixer goes to the person who can guess which ones those are. See? Who said security isn’t fun?

“System operators believe that somehow virtualization provides their environments with security not found in the world of physical machines”. Now we’re going Twilight Zone. We’ve been discussing the inter-cluster sized gap between the physical world and electronic information in this article, and now we have this? Segmentation fault, core dumped.

Anyway – virtualization does increase security in some cases. It depends how the VM has been configured and what type of networking config is used, but if we’re talking virtualised servers that advertise services to port scanners, and / or SMB shares with their hosts, then clearly the virtualised aspect is suddenly very real. VM guests used in a NAT’ing setup are a decent way to hide information on a laptop/mobile device or anything that hooks into an untrusted network (read: “corporate private network”).

The vendor who was being interviewed finished up with “Every product sounds the same,” …”They all make you secure. And none of them deliver.” Probably if I were a vendor I might not say that.

Sorry, I just find discussions of security with “radical new infrastructure” to be something of a waste of bandwidth. We have some very fundamental, ground level problems in information security that are actually not so hard to understand or even solve, at least until it comes to self-reflection and the thought of looking for a new line of work.

All of these “VM” and “cloud” and “BYOD” discussions would suddenly disappear with the introduction of integrity in our little world because with that, the bigger picture of skills, accreditation, and therefore trust would be solved (note the lack of a CISSP/CEH dig there).

I covered the problems and solutions in detail in Security De-engineering, but you know what? The solution (chapter 11) is no big secret. It comes from the gift of intuition with which many humans are endowed. Anyway – someone had to say it, now it’s in black and white.

Hardening is Hard If You’re Doing it Right

Yes, ladies and gentlemen, hardening is hard. If it’s not hard, then there are two possibilities. One is that the maturity of information security in the organization is at such a level that security happens both effectively and transparently – it’s fully integrated into the fabric of BAU processes and many of said processes are fully automated with accurate results. The second (far more likely given the reality of security in 2013) is that the hardening is not well implemented.

For the purpose of this diatribe, let us first define “hardening” so that we can all be reading from the same hymn sheet. When I’m talking about hardening here, the theme is one of first assessing vulnerability, then addressing the business risk presented by the vulnerability. This can apply to applications, or operating systems, or any aspect of risk assessment on corporate infrastructure.

In assessing vulnerability, if you’re following a check list, hardening is not hard – in fact a parrot can repeat pearls of wisdom from a check list. But the result of merely following a check list will be either wide open critical hosts or over-spending on security – usually the former. For sure, critical production systems will be impacted, and I don’t mean in a positive way.

You see, like most things in security, some thinking is involved. It does suit the agenda of many in this field to pretend that security analysis can be reduced down to parrot-fashion recital of a check list. Unfortunately though, some neural activity is required, at least if gaining the trust of our customers (C-levels, other business units, home users, etc) is important to us.

The actual contents of the check list should be the easy part, although unfortunately as of 2013, we all seem to be using different versions of the check list, and some versions are appallingly lacking. The worst offenders here deliver with a quality that is inversely proportional to the prices they charge – and these are usually external auditors from big 4 consultancies, all of whom have very decent check lists, but who also fail to ensure that Consultants use said check list. There are plenty of cases where the auditor knocks up their own garage’y style shell script for testing. In one case I witnessed not so long ago, the script for testing RedHat Enterprise Linux consisted of 6 tests (!) and one of the tests showed a misunderstanding of the purpose of the /etc/ftpusers file.
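For the curious, the ftpusers point is easy to illustrate. On most Unix systems /etc/ftpusers is a deny list (accounts named in it are refused ftp login), and the classic mistake is to read it as an allow list. Below is a minimal sketch in Python of what a sensible check for that one item might look like; the file path and the account names are illustrative assumptions, not anything taken from the auditor's script in question.

    #!/usr/bin/env python3
    # Minimal sketch: /etc/ftpusers names accounts that are DENIED ftp login,
    # so the check verifies that privileged accounts appear in it.
    # Path and account names below are illustrative assumptions.
    import os

    FTPUSERS = "/etc/ftpusers"                      # location varies by distro
    PRIVILEGED = {"root", "bin", "daemon", "adm"}   # accounts we expect to see denied

    def denied_accounts(path=FTPUSERS):
        if not os.path.exists(path):
            return set()    # no deny list at all -- a finding in its own right
        with open(path) as f:
            return {line.strip() for line in f
                    if line.strip() and not line.startswith("#")}

    if __name__ == "__main__":
        for account in sorted(PRIVILEGED - denied_accounts()):
            print(f"WARN: {account} not listed in {FTPUSERS}; ftp login not explicitly denied")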

But the focus here is not on the methods deployed by auditors, it’s more general than that. Vulnerability testing in general is not a small subject. I have posted previously on the subject of “manual” network penetration testing. In summary: there will be a need for some businesses to show auditors that their perimeter has been assessed by a “trusted third party”, but in terms of pure value, very few businesses should be paying for the standard two week delivery with a four person team. For businesses to see any real value in a network penetration test, their security has to be at a certain level of maturity. Most businesses are nowhere near that level.

Then there is the subject of automated, unauthenticated “scanning” techniques which I have also written about extensively, both in an earlier post and in Chapter Five of Security De-engineering. In summary, the methodology used in unauthenticated vulnerability scanning results in inaccuracy, large numbers of false positives, wasted resources, and annoyed operations and development teams. This is not a problem with any particular tool, although some of them are especially bad. It is a limitation of the concept of unauthenticated testing, which amounts to little more than pure guesswork in vulnerability assessment.

How about the growing numbers of “vulnerability management” products out there (which do not “manage” vulnerability, they make an attempt at assessing vulnerability)? Well, most of them are either purely an expensive graphical interface to [insert free/open source scanner name], or if the tool was designed to make a serious attempt at accurate vulnerability assessment (most of them were not), then the tests will be lacking or over-done, inaccurate, and / or doing the scanning in an insecure way (e.g. the application is run over a public URL, with the result that all of your configuration data, including admin passwords, are held by an untrusted third party).

In one case, a very expensive VM product literally does nothing other than port scan. It is configured with hundreds of “test” patterns for different types of target (MS Windows, *nix, etc) but if you’re familiar with your OS configurations, you will look at the tool output and be immediately suspicious. I ran the tool against a Linux and Windows test target and “packet sniffed” the scanning engine’s probe attempts. In summary, the tool does nothing. It just produces a long list of configuration items (so effectively a kind of Security Standard for the target) without actually testing for the existence of vulnerability.
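If you want to repeat that sort of test yourself, it doesn't take much. Below is a minimal sketch in Python, assuming you control the test target: bind a listener on a port the product claims to "test", and log what actually arrives during the scan. A genuine check sends service-specific probes; a port scanner in fancy clothes just completes the TCP connection and moves on. The port number and timeout here are arbitrary choices.

    #!/usr/bin/env python3
    # Minimal sketch: log whatever a "vulnerability scanner" actually sends to one
    # port on a test host you control. Port and timeout are arbitrary assumptions.
    import socket

    PORT = 2222     # point the scanner's target definition at this port
    TIMEOUT = 30    # seconds to wait for probe data after the connect

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", PORT))
    listener.listen(5)
    print(f"listening on tcp/{PORT}; run the scan now")

    while True:
        conn, addr = listener.accept()
        conn.settimeout(TIMEOUT)
        try:
            data = conn.recv(4096)
            # An empty read means the tool connected and sent nothing:
            # a port scan dressed up as a vulnerability test.
            print(f"{addr[0]}: {data!r}" if data else f"{addr[0]}: connect only, no probe data")
        except socket.timeout:
            print(f"{addr[0]}: connect only, no probe data (timed out)")
        finally:
            conn.close()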

So the overall principle: the company [hopefully] has a security standard for each major operating system and database on their network and each item in the standard needs to be tested for all, or some of the information asset hosts in the organization, depending on the overall strategy and network architecture. As of the time of writing, there will need to be some manual / scripted augmentation of automatic vulnerability assessment processes.
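To make the "manual / scripted augmentation" point concrete, the sketch below (Python) tests a couple of illustrative standard items locally and authenticated, rather than guessing from the outside. The items, file locations, and thresholds are assumptions for the sake of the example, not anyone's actual standard.

    #!/usr/bin/env python3
    # Minimal sketch: test a few illustrative items from a Linux security standard
    # locally. Item choice, paths and thresholds are assumptions for illustration.
    import re

    def sshd_permits_root_login(config="/etc/ssh/sshd_config"):
        try:
            with open(config) as f:
                for line in f:
                    m = re.match(r"^\s*PermitRootLogin\s+(\S+)", line)
                    if m:
                        return m.group(1).lower() not in ("no", "prohibit-password")
        except FileNotFoundError:
            return None       # no sshd config on this host -- not applicable
        return True           # no explicit setting: flag for review (defaults vary by version)

    def password_max_days(defs="/etc/login.defs"):
        try:
            with open(defs) as f:
                for line in f:
                    m = re.match(r"^\s*PASS_MAX_DAYS\s+(\d+)", line)
                    if m:
                        return int(m.group(1))
        except FileNotFoundError:
            return None
        return None

    if __name__ == "__main__":
        findings = []
        if sshd_permits_root_login():
            findings.append("sshd allows (or does not explicitly deny) direct root login")
        max_days = password_max_days()
        if max_days is not None and max_days > 90:   # 90 is an illustrative threshold
            findings.append(f"PASS_MAX_DAYS is {max_days}")
        print("\n".join(findings) if findings else "no findings for these items")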

So once armed with a list of vulnerabilities, what does one do with it? The vulnerability assessment is the first step. What has to happen after that? Can Security just toss the report over to ops and hope for the best? Yes, they can, but this wouldn’t make them very popular, and there also needs to be some input from security regarding the actual risk to the business. Taking the typical function of operations teams (I commented on the functions and relationships between security and operations in an earlier post), if there is no input from security, then every risk mitigation that carries any kind of impact will be blocked.

There are some security service providers/consultancies who offer a testing AND a subsequent hardening service. They want to offer both detection AND a solution, and this is very innovative and noble of them. However, how many security vulnerabilities can be addressed blindly without impacting critical production processes? Rhetorical question: can applications be broken by applying security fixes? If I remove the setuid bit from a root owned X Window related binary, it probably has no effect on business processes. Right? What if operations teams can no longer authenticate via their usual graphical interface? This is at least a little bit disruptive.
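To illustrate why the blanket approach bites, here is a minimal sketch (Python; the search paths are assumptions) that inventories setuid-root binaries for review rather than stripping the bit wholesale. Which of them can safely lose the bit is exactly the per-application conversation with ops and devs described above; the X wrapper is the classic example of one that something operational quietly depends on.

    #!/usr/bin/env python3
    # Minimal sketch: list setuid-root binaries for review. Deliberately read-only --
    # deciding which bits can safely be removed is a per-application assessment,
    # not something to automate blindly. Search paths are illustrative assumptions.
    import os
    import stat

    SEARCH_ROOTS = ["/usr/bin", "/usr/sbin", "/bin", "/sbin"]

    def setuid_root_binaries(roots=SEARCH_ROOTS):
        hits = []
        for root in roots:
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        st = os.lstat(path)
                    except OSError:
                        continue
                    if stat.S_ISREG(st.st_mode) and (st.st_mode & stat.S_ISUID) and st.st_uid == 0:
                        hits.append(path)
        return hits

    if __name__ == "__main__":
        for path in setuid_root_binaries():
            print(path)   # each entry needs an owner: ops/devs confirm whether it is actually needed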

In practice, as it turns out, if you look at a Security Standard for a core technology, let’s take Oracle 11g as an example: how many of the numerous elements of a Security Standard can we say can be implemented without fear of breaking applications, limiting access for users or administrators, or generally just making trouble-shooting of critical applications a lot less efficient? The answer is: not many. Dependencies and other problems can come from surprising sources.

Who in the organization knows about dependencies and the complexities of production systems? Usually that would be IT / Network Operations. And how about application-related dependencies? That would be application architects, or just generally we’ll say “dev teams” as they’re so affectionately referred to these days. So the point: even if security does have admin access to IT resources (rare), is the risk mitigation/hardening a job purely for security? Of course the answer is a resounding no, and the same goes for IT Operations.

So, operations and applications architects bring knowledge of the complexities of apps and infrastructure to the table. Security brings knowledge of the network architecture (data flows, firewall configurations, network device configurations), the risk of each vulnerability (how hard is to exploit and what is the impact?), and the importance to the business of information assets/applications. Armed with the aforementioned knowledge, informed and sensible decisions on what to do with the risk (accept, mitigate, work around, or transfer) can be made by the organization, not by security, or operations.

The early days of deciding what to do with the risk will be slow and difficult and there might even be some feisty exchanges, but eventually, addressing the risk becomes a mature, documented process that almost melts into the background hum of the machinery of a business.

How Much Of A CASE Are You?

This piece is adapted from Chapter 3 of Security De-engineering, titled “Checklists and Standards Evangelists”.

My travels in information security have taken me to 3 different continents and 15 different countries. I have had the pleasure and pain to deal with information security problems in every industry sector that ever existed since the start of the Industrial Revolution (but mostly finance’y/bank’y of course), and I’ve had the misfortune and pleasure to meet a whole variety of species and sub-species of the genus Information Security Professional.

In the good old days of the 90s, it was clear there were some distinctive features that were hard-wired into the modus operandi of the Information Security Professional. This earlier form of life, for want of a better name, I call the “Hacker”, and I will talk about them in my next post.

In the pre-holocene mid to late 90s, the information security professional was still plausibly human, in that they weren’t afraid to display distinguishing characteristics. There was no great drive to “fit in”, to look the same, talk the same, and act the same as all the other information security professionals. There was a class that was information security professional, and at the time, there was only one instance of that class.

Then during the next few years, going into the 2000s, things started to change in response to the needs of ego and other head problems, mostly variants of behaviour born out of insecurity. The need to defend territory, without possession of the necessary intellectual capital to do so, gave birth to a new instance of the class Information Security Professional – the CASE (Checklists and Standards Evangelist). The origin of the name will become clear.

My first engagement in the security world was with a small testing team in the late 90s, staffed mostly from ex-Eastern-bloc countries (former Yugoslavia and the Soviet Union). Responding incorrectly to the perceived needs of the market, around 2001/2 there were a couple of rounds of Hacker lay-offs – a common global story at the time. A few weeks after the second batch of lay-offs, there was a regional team event, wherein our Operations Manager (with a strong background in hotel management) opened the event with “security is no longer about people with green hair and piercings”. Well, ok, but what was it about then? The post 2000s version of “It” is the focus of this post, and I will cover a very scientific methodology for self-diagnosing the level of CASE for the reader.

Ok, so here are some of the elements of CASE’dom that are more commonly witnessed. Feel free to run a self-diagnosis, scoring from 0 to 5 for each point, based either on what you actually believe (how closely you agree with the points), or how closely you see yourself, or how closely you can relate to these points based on your experiences in infosec:

  • “Technical” is a four letter word.
  • Anything “technical”, to do with security (firewall configuration, SIEM, VM, IDS/IPS, IDM etc) comes under the remit of IT/Network Operations.
  • Security is not a technical field – it’s nothing to do with IT, it’s purely a business function. Engineers have no place at the table. If a candidate is interviewed for a security position and they use a tech term such as “computer” or “network”, then they clearly have no security experience and at best they should apply for a lowly ops position.
  • You were once a hacker, but you “grew out of it”.
  • Any type of risk assessment methodology can be reduced down to a CHECKLIST, and recited parrot fashion, thereby replacing the need for actual expertise and thinking. Cost of safeguard versus risk issues are never very complex and can be nailed just by consultation of a check list.
  • Automated vulnerability scanners are a good replacement for manual testing, and therefore manual testers, and by entering target addresses and hitting an enter button, there is no need for any other type of vulnerability assessment, and no need for tech staff in security.
  • There is a standard, universally recognised vocabulary to be used in security which is based on whichever CISSP study guide you read.
  • Are you familiar with this situation: you find yourself in a room with people who talk about the same subject as you, but they use different terms and phrases, and you get angry at them in the belief that your terminology is the correct version?
  • CISSP is everything that was, is, and ever will be. CISSP is the darkness and the light, and the only thing that matters, the alpha and omega. There is a principle: “I am a CISSP, therefore I am”, and if a person does not have CISSP (or it “expired”), then they are not an effective security professional.
  • You are a CEH and therefore a skilled penetration tester.
  • “Best Practice” is a phrase which is ok to use on a regular basis, despite the fact that there is no universally accepted body of knowledge to corroborate the theory that the prescribed practice is the best practice, and business/risk challenges are all very simple to the extent that a fixed solution can be re-used and applied repeatedly to good effect.
  • Ethics is a magnificent weapon to use when one feels the need to defend one’s territory from a person who speaks at, or attends “hacker conferences”. If an analyst has ever used a “hacking tool” in any capacity, then they are not ethical, and subject to negative judgment outside of the law. They are in fact a criminal, regardless of evidence.
  • You look in the mirror and notice that you have a square head and a fixed, stern grimace. At least during work hours, you have no sense of humour and are unable to smile.
  • For a security professional in an in-house situation: it is their job to inform other business units of security standard and policy directives, without assessment of risk on a case-by-case basis, and also no offering of guidance as to how the directives might be realised. As an example: a dev team must be informed that they MUST use two-factor authentication regardless of the risk or the additional cost of implementation. Furthermore, it is imperative to remind the dev team that the standards were signed off by the CEO, and generally to spread terror whilst offering no further guidance.
  • You are a security analyst, but your job function is one of “management” – not analysis or assessment or [insert nasty security term]. Your main function is facilitating external audits and/or processing risk exemption forms.
  • Again for in-house situations: silence is golden. The standard response to any inter-department query is defiance. The key element of any security professional’s arsenal is that of silence, neutrality, and generally not contributing anything. This is a standard defence against ignorance. If a security professional can maintain a false air of confidence while ignoring any form of communique, and generally just not contributing anything, then a bright future awaits. The mask that is worn is one of not actually needing to answer, because you’re too important, and time is too valuable.
  • You fill the gap left by the modern security world by adding in words like “Evangelist” in your job title, or “thought leader”. Subject Matter Expert (SME) also is quite an attractive title. “Senior” can also be used if you have 1 second of experience in the field, or an MBA warrants such a prefix.
  • Your favourite term is “non-repudiation”, because it has that lovely counter-intuitive twist in its meaning. The term has a decent shelf-life, and can be used in any meeting where management staff are present, regardless of applicability to the subject under discussion.
  • “Security incident” and “security department” both have the word “security” in them. Management notices this common word, so when there’s an incident and ops refuse to get involved, the baton falls to the security department which has no tools, either mental or otherwise, for dealing with incidents. So, security analysts live in fear of incidents. This is all easily fixed by hiring folk who both need to “fit in” with the rest of the team and also who use words like “forensics” and “incident” on their CV (and they are CISSPs).
  • “Cloud Security” is a new field of security, that only came into existence recently, and is an area of huge intellectual capital. If one has a cloud-related professional accreditation, it means they are very, very special and possess powers other mortals can only dream of. No, really. Cloud adoption is not merely a change in architecture, or a shift of emphasis onto crypto and legal coverage! It’s way more than that!
  • Unlike Hackers, you have unique “access” to C-level management, because you are mature, and can “communicate effectively”.
  • You applied for a job which was advertised as highly technical as per the agent’s (bless ’em eh) job description that was passed on by HR. On day one you realise a problem. You may never see a command shell prompt ever again.

A maximum score of 110 points will be seen as very good or very bad by your management team, hopefully the former for your sake, hopefully the latter for the business’s sake.

Somewhere in the upper area of 73 to 110 points is max’xed-out CASE. This is as CASE as it gets. I wouldn’t want to advocate a new line of work to anyone really, but it might just be the case that an alternative career would lead to a greater sense of fulfilment and happiness.

There is hope for anyone falling in the less than 73 area. For example, it’s not too late to go through that [insert core technology] Security Standard, try and understand the technical risks, talk to operations about it, and see it all anew. If “tech” really is something that is against your nature, then you will probably be in the 73 to 110 class. Less than 73 is manageable. Of course by getting more tech, you could be alienating yourself or upsetting the apple cart. It’s your choice ultimately…

The statement that information security is not actually anything to do with information technology is of course nothing more than pretense, and more and more of our customers are starting to realise this.

What’s Next For BYOD – 2013 And Beyond

There are security and business case arguments about BYOD. They cover different aspects, and there are petabytes of valid points out there.

The security argument? Microsoft Windows is still the corporate OS of choice and still therefore the main target for malware writers. As a pre-qualifier – there is no bias towards one Operating System or another here.

Even allowing that in most cases, when the business asks for something, security considerations are secondary, there is also the point that Windows is by its nature very hard to make malware-resistant. Plenty of malware problems are not introduced through a lack of user awareness (such as unknowingly installing malware disguised as fake anti-virus or browser plug-ins), and plenty of services are required to run with SYSTEM privileges. These factors make Windows platforms hard to defend in a cost-effective, manageable way.

Certainly we have never been able to manage user OS rights/privileges and that isn’t going to change any time soon. There is no 3rd party product that can help. Does security actually make an effective argument in cases where users are asking for control over printers and Wifi management? Should such functions be locked anyway? Not necessarily. And once we start talking fine-grained admin rights control we’re already down a dark alley – at least security needs to justify to operations as to why they are making their jobs more difficult, the environment more complex and therefore less reliable. And with privilege controls, security also must justify to users (including C-levels) as to why their corporate device is less usable and convenient.

For the aforementioned reasons, the security argument is null and void. I don’t see BYOD as a security argument at all, mainly because the place where security is at these days isn’t a place where we can effectively manage user device security – that doesn’t change with or without BYOD, and this is likely to be the case for some years to come yet. We lost that battle, and the security strategy has to be planned around the assumption that user subnets are compromised. I would agree that in a theoretical case where user devices are wandering freely, not at all subject to corporate controls, then the scope is there for a greater frequency of malware issues, but regardless, the stance has to be based on an assumption that one or more devices in corporate subnets has been compromised, and the malware is designed to make both ingress and egress connections.

How about other OS flavors, such as Apple OS X for example? With other OS flavors, it is possible to manage privileges and lock them down to a much larger degree than we can with Windows, but as has been mentioned plenty of times now, once another OS goes mainstream and grows in corporate popularity, then it also shows up on the radars of malware writers. Reports of malware designed to exploit vulnerabilities in OS X software started surfacing earlier in 2012, with “The Flashback Trojan” given the widest coverage.

I would venture that at least the possibility exists to use technical controls to lock down Unix-based devices to a much larger degree, as compared with MS Windows variants, but of course the usability experience has to match the needs of business. Basically, regardless of whether our view is utopic or realistic, there will be holes, and quite sizable holes too.

For the business case? Having standard build user workstations and laptops does make life easier for admins, and it allows for manageability and efficiency, but there is a wider picture of course. The business case for BYOD is a harder case to make than we might have at first thought. There are no obvious answers here. One of the more interesting con articles was from CIO Magazine earlier in 2012: “BYOD: If You Think You’re Saving Money, Think Again”, and then Cisco objectively report that there are plenty in the pro corner too: “Cisco Study: IT Saying Yes To BYOD”.

So what does all this bode for the future? The manageability aspect and therefore the business aspect is centered around the IT costs and efficiency analysis. This is more of an operational discussion than an information risk management discussion.

The business case is inconclusive, with plenty in the “say no to BYOD” camp. The security picture is without foundation – we have a security nightmare with user devices, regardless of who owns the things.

Overall the answer naturally lies in management philosophy, if we can call it that. There is what we should do, and what we will do….and of course these are often out by 180 degrees from each other. The lure of BYOD will be strong at the higher levels who usually only have the balance sheet as evidence, along with the sales pitches of vendors. Accountant-driven organisations will love the idea and there will be variable levels of bravery, confidence, and technical backing in the IT rationalization positions. Similar discussions will have taken place with regard to cloud’ing and outsourcing.

The overall conclusion: BYOD will continue to grow in 2013 and probably beyond. Whether that should be the case or not? That’s a question for operations to answer, but there will be plenty of operations departments that will not support the idea after having analyzed the costs versus benefits picture.

Migrating South: The Devolution Of Security From Security

Devolution might seem a strong word to use. In this article I will be discussing the pros and cons of the migration of some of the more technical elements of information security to IT operations teams.

By the dictionary definition of the word, “devolution” implies a downgrade of security – but suffice it to say my point does not even remotely imply that operations teams are subordinate to security. In fact in many cases, security has been marginalized such that a security manager (if such a function even exists) reports to a CIO, or some other managerial entity within IT operations. Whether this is right or wrong…this is subjective and also not the subject here.

Of course there are other department names that have metamorphosed out of the primordial soup …”Security Operations” or SecOps, DevOps, SecDev, SecOpsDev, SecOpsOps, DevSecOps, SecSecOps and so on. The discussion here is really about security knowledge, and the intellectual capital that needs to exist in a large-sized organisation. Where this intellectual capital resides doesn’t bother me at all – the name on the sign is irrelevant. Terms such as Security and Operations are the more traditional labels on the boxes and no, this is not something “from the 90s”. These two names are probably the more common names in business usage these days, and so these are the references I will use.

Examples of functions that have already, and continue to be, pharmed out to Ops are functions such as Vulnerability Management, SIEM, firewalls, IDS/IPS, and Identity Management. In detail…which aspects of these functions are teflonned (non-stick) off? How about all of them? All aspects of the implementation project, including management, are handled by ops teams. And then in production, ops will handle all aspects of monitoring, problem resolution, incident handling…ad infinitum.

A further pre-qualification is about ideal and actual security skills that are commonly present. Make no mistake…in some cases a shift of tech functions to ops teams will actually result in improvements, but this is only because the self-constructed mandate of the security department is completely non-tech, and some tech at a moderate cost will usually be better than zero tech, checklists, and so on.

We need to talk about typical ops skills. Of course there will be occasional operations team members who are well versed in security matters, and also have a handle on the business aspects, but this is extra-curricular and rare. Ops team members are usually system administrators. If we take Unix security as an example, they will be familiar with at least filesystem permissions and umask settings, so there is a level of security knowledge. Cisco engineers will have a concept of SNMP community strings and ACLs. Oracle DBAs will know about profiles and roles.

But is the typical security portfolio of system administrators wide enough to form the foundations of an effective information security program? Not really. In fact it’s some way short. Security Analysts need a grasp not only of, for example, file system permissions; they need to know how attackers actually elevate privileges and compromise, for example, a critical database host. They need to know attack vectors and how to defend against them. This kind of knowledge isn’t a typical component of a system administrator’s training schedule. It’s one thing to know the effect of a world-write permission bit on a directory, but what is the actual security impact? With some directories this can be relatively harmless, with others, it can present considerable business risk.
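To put the world-write example in more concrete terms: a world-writable directory with the sticky bit set (the /tmp pattern) is usually tolerable, whereas one without it, sitting in an application or library path, is the kind of finding that warrants a proper impact discussion. Below is a minimal sketch in Python that makes the distinction; the starting directory is an arbitrary assumption.

    #!/usr/bin/env python3
    # Minimal sketch: separate world-writable directories into "sticky bit set"
    # (usually tolerable, /tmp style) and "no sticky bit" (assess per directory).
    # The starting point is an illustrative assumption.
    import os
    import stat

    START = "/var"

    tolerable, suspect = [], []
    for dirpath, _dirnames, _filenames in os.walk(START):
        try:
            mode = os.lstat(dirpath).st_mode
        except OSError:
            continue
        if stat.S_ISDIR(mode) and (mode & stat.S_IWOTH):
            (tolerable if mode & stat.S_ISVTX else suspect).append(dirpath)

    print("world-writable, sticky bit set (lower concern):")
    print("\n".join(tolerable) if tolerable else "  none")
    print("world-writable, NO sticky bit (needs a risk discussion):")
    print("\n".join(suspect) if suspect else "  none")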

The stance from ops will be to patch and protect. While this is [sometimes] better than nothing, there are other ways to compromise servers, other than exploiting known vulnerabilities. There are zero days (i.e. undeclared vulnerabilities for which no patch has been released), and also means of installing back doors and trojans that do not involve exploiting local bugs.

So without the kind of knowledge I have been discussing, how would ops handle a case where a project team blocks the install of a patch because it breaks some aspect of their business-critical application? In most cases they will just agree to not install the patch. In consideration of the business risk several variables come into play. Network architecture, the qualitative technical risk to the host, value of information assets…and also is there a work-around? Is a work-around or compromise even worth the time and effort? Do the developers need to re-work their app at a cost of $15000?

A lack of security input in network operations leads to cases where over-redundancy is deployed. Literally every switch and router will have a hot swap. So take the budget for a core network infrastructure and just double it – in most cases this is excessive expenditure.

With firewall rules, ops teams have a concept of blocking incoming connections, but it’s not unusual that egress will be overlooked, with all the “bad netizen”, malware / private data harvest, reverse telnet implications. Do we really want our corporate domain name being blacklisted?
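A quick way to demonstrate the egress point to an ops team is to test it from the inside out. Below is a minimal sketch in Python: if arbitrary outbound connections on IRC/SMTP/reverse-shell type ports succeed from a server subnet, the "bad netizen" scenario is live. The destination host and the port list are placeholder assumptions; point it at a listener you control, not at someone else's infrastructure.

    #!/usr/bin/env python3
    # Minimal sketch: check whether outbound connections that egress filtering
    # should block actually succeed. Destination and ports are placeholders --
    # substitute a host you control.
    import socket

    DESTINATION = "egress-test.example.com"
    PORTS = [25, 6667, 4444]    # illustrative: SMTP, IRC, a common reverse-shell port

    for port in PORTS:
        try:
            with socket.create_connection((DESTINATION, port), timeout=5):
                print(f"outbound tcp/{port}: ALLOWED (egress filtering gap)")
        except OSError:
            print(f"outbound tcp/{port}: blocked or unreachable")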

Another common symptom of a devolved security model is the excessive usage of automated scanners in vulnerability assessment, without having any idea that there are shortcomings with this family of product. The result of this is to “just run a scanner against it” for critical production servers and miss the kind of LHF (Low Hanging Fruit) false negatives that bad guys and malware writers just love to see.

The results of devolution will be many and varied in nature. What I have described here is only a small sampling. Whatever department is responsible for security analysis is irrelevant, but the knowledge has to be there. I cover this topic more thoroughly in Chapter 5 of Security De-engineering, with more details on the utopic skills in Chapter 11.

The Search For Infosec Minds

Since the early 2000s, and in some of my other posts, I have commented in different forms on the state of play, with a large degree of cynicism, which was greeted with cold reservation, smirks, grunts, and various other types of un-voiced displeasure, up to around 2009 or so. But since at least 2010, how things have changed.

If we fast forward from 2000 to 2005 or so, most businesses’ security function was reduced down to base parrot-fashion checklists, analysis and thinking were four letter words, and some businesses went as far as outsourcing security functions.

Many businesses who turned their backs on hackers just after the turn of the millennium have since found a need to review their strategies on security hiring. However 10 years is a long time. The personnel who were originally tasked with forming a security function in the late 90s, have since risen like phoenixes from the primordial chasm, and assisted by thermals, they have swooped up to graze on higher plains. Fast forward again to 2012, and the distance between security and IT is in the order of light years in most cases. The idea that security is purely a compliance game hasn’t changed, but unlike the previous decade, it is in many cases seen as no longer sufficient to crawl sloth-like over the compliance finishing line every year.

Businesses were getting hacked all through the 2000s but they weren’t aware of it. Things have changed now. For starters the attacks do seem to be more frequent and now there is SIEM, and audit requirements to aggregate logs. In the past, even default log settings were annulled with the result that there wasn’t even local logging, let alone network aggregation! Mind you, even after having been duped into buying every well-marketed detection product, businesses are still being hacked without knowing it. Quite often the incident comes to light after a botnet command and control system has been owned by the good guys.

Generally there is more nefarious activity now, as a result of many factors, and information security programs are under more “real” focus now (compliance-only is not real focus, in fact it’s not real anything, apart from a real pain in the backside).

The problem is that with such a vast distance between IT and security for so long, there is utter confusion about how to get tech’d up. Some businesses are doing it by moving folk out of operations into security. This doesn’t work, and in my next post I will explain why it doesn’t work.

As an example of the sort of confusion that reigns, there was one case I came across earlier in 2012 where a company in the movies business was hacked and they were having their trailers, and in some cases actual movies, put up on various torrent sites for download. Their response was to re-trench their outsourced security function and attempt to hire in-house analysts (one or two!). But what did they go looking for? Because they had suffered from malware problems, they went looking for, and I quote, “Malware Reverse Engineers”. Malware Reverse Engineers? What did they mean by this? After some investigation, it turns out they really were looking for malware reverse engineers; there was no misnomer – malware reverse engineers as in those who help to develop new patterns for anti-virus engines!! They had acquired a spanking new SIEM, but there was no focus on incident response capability, or prevention/protection at all.

As it turns out “reverse engineer[ing]” is now a buzzword. Whereas in the mid-2000s, buzzwords were “governance” and “identity management” (on the back of…”identity theft” – neat marketing scam), and so on. Now there are more tech-sounding buzzwords which have different connotations depending on who you ask. And these tech sounding buzzwords find their way into skills requirements sent out by HR, and therefore also on CVs as a response. And the tech-sounding buzzwords are born from…yes, you got it…Black Hat conferences, and the multitude of other conferences, B-sides, C-sides, F-sides and so on, that are now as numerous as the stars in the sky.

The segue into Black Hat was quite deliberate. A fairly predictable development is the on-going appearance of Infosec managers at Black Hats, who previously wouldn’t touch these events with a barge pole. They are popping up at these events looking to recruit speakers primarily, because presumably the speakers are among the sharper of the crayons in the box, even if nobody has any clue what they’re talking about.

Before I go on, I need to qualify that I am not going to cover ethics here, mainly because it’s not worth covering. I find the whole ethics brush to be somewhat judgmental and divisive. I prefer to let the law do the judgment.

Any attempt to recruit tech enthusiasts, or “hackers”, can’t be dismissed completely because it’s better than anything that could have been witnessed in 2005. But do businesses necessarily need to go looking for hackers? I think the answer is no. Hackers have a tendency to take security analysis under their patronage, but it has never been their show, and their show alone. Far from it.

In 2012 we can make a clear distinction between protection skills and breaking-in skills. This is because as of 2012, 99.99…[recurring to infinity]% of business networks are poorly defended. Therefore, what are “breaking-in skills”? So a “hacker” breaks into networks, compromises stuff, and posts it on pastebin.com. The hacker finds pride and confidence in such achievements. Next, she’s up on the stage at the next conference bleating about “reverse engineering”, “fuzzing”, or “anti forensics tool kits”…nobody is sure which language is used, but she’s been offered 10 jobs after only 5 minutes into her speech.

However, what is actually required to break into networks? Of the 20000+ paths which were wide open into the network, the hacker chose one of the many paths of least resistance. In most cases, there is no great genius involved here. The term “script kiddy” used to refer to those who port scan, then hunt for public declared exploits for services they find. There is IT literacy required for sure (often the exploits won’t run out-of-the-box, they need to be compiled for different OSs or de-bugged), but no creativity or cunning or …whatever other mythical qualities are associated with hacking in 2012.

The thought process behind hiring a hacker is typically one of “she knows how to break into my network, therefore she can defend against others trying to break in”, but it’s quite possible that nothing could be further from the truth. In 2012, being a hacker, or possessing “breaking-in skills”, doesn’t actually mean a great deal. Protection is a whole different game. Businesses should be more interested in protection as of 2012, and for at least the next decade.

But what does it take to protect? Protection is a more disciplined, comprehensive IT subject. Collectively, the in-house security team needs to know all the nooks and crannies, all the routers, databases, applications, clouds, and operating systems, and how they all interact and how they’re all connected. They also collectively need to know the business importance of information assets and applications.

The key pillars of focus for new-hire Security Analysts should be Operating Systems and Applications. When we talk about operating systems and security, the image that comes to mind is of auditors going through a checklist in some tedious box-ticking exercise. But OS security is more than that, and it’s the front line in the protection battle. The checklists are important (I mean checklists as in standards and policies) but there are two sides to each item on the checklist: one is the detail of how to practically exploit the vulnerability and the potential tech impact, the other is the operational/business impact involved with the associated safeguard. In other words, OS security is far more than a check-list, box-ticking activity.
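One way to capture that two-sided view is to record both sides alongside the check itself, so the standard carries its own justification and its own operational cost. Here is a minimal sketch of the idea in Python; the item and its wording are purely illustrative and not lifted from any real corporate standard.

    # Minimal sketch: a standard item carrying both sides of the argument --
    # how the weakness gets exploited, and what the safeguard costs operationally.
    # The content of this example item is illustrative only.
    standard_item = {
        "id": "SSH-001",
        "requirement": "Direct root login over SSH is disabled (PermitRootLogin no)",
        "exploit_detail": (
            "With root login enabled, a password-guessing bot that lands one credential "
            "owns the host outright; there is no unprivileged account to pivot from."
        ),
        "operational_impact": (
            "Admins log in as themselves and escalate via sudo; any automation that "
            "connects as root has to be re-worked before the change is applied."
        ),
        "decision": None,   # accept / mitigate / work around / transfer -- made with ops and dev input
    }

    print(f"{standard_item['id']}: {standard_item['requirement']}")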

In 12 years I never met a “hacker” who could name more than 3 or 4 local privilege elevation vectors for any popular Operating System. They will know the details of the vulnerability they used to root a server last month, but perhaps not the other 100 or so that are covered off by the corporate security standard for that Operating System. So the protection skills don’t come by default just because someone has taken to the stand at a conference.

Skills such as “reverse engineering” and “fuzzing” – these are hard to attain and can be used to compromise systems that are well defended. But the reality is that very few systems are so well defended that such niche skills are ever needed. In 70+ tests in which I have either taken part or been a witness, even if the tests were quite unrestricted, “fuzzing” wouldn’t be required to compromise targets – not even close.

A theoretical security team for a 10000+ node business, could be made of a half dozen or so Analysts, plus a Security Manager. Analysts can come from a background of 5 years in admin/ops or devs. To “break into” security, they already have their experience in a core technology (Unix, Windows, Oracle, Cisco etc), then they can demonstrate competence in one or more other core technologies (to demonstrate flexibility), programming/scripting, and security testing with those platforms.

Once qualified as a Security Analyst, the Analyst has a specialization in at least two core technologies. At least 2 analysts can cover application security, then there are other areas such as incident handling and forensics. As for Security Managers, once in possession of 5 years “time served” as an Analyst, they qualify for a manager’s exam, which when passed qualifies them for a role as a Security Manager. The Security Manager is the interface, or agent, between the technical artist Analysts and the business.

Overall then, it is far from the case that Hackers are not well-suited for vocational in-house security roles (moreover I always like to see “spare time” programming experience on a resume because it demonstrates enthusiasm and creativity). But it is also not the case that Analyst positions are under the sole patronage of Black Hat speakers. Hackers still need to demonstrate their capabilities in protection, and doing “grown-up” or “boring” things before being hired. There is no great compelling need for businesses to hire a hacker, although as of today, it could be that a hooligan who throws security stones through security windows is as close as they can get to effective network protection.

Somewhere Over The Rainbow – A Story About A Global Ubiquitous Record of All Things Incident

One of my posts from earlier in 2012 discussed the idea that CEOs are to blame for all of our problems in security. The idea that we “must have reliable actuarial data on incidents to stay relevant” is another of the information security holy sacred cows that rears its head every so often, and this post covers three angles on the incidents database idea. First I look at the impracticalities of gathering incidents data. Second, even if we have accurate data, exactly how useful is the data to us in the formulation of risk management decisions, and third, even if the data is accurate and useful, did we even need it in the first place?

Shostack and Stewart’s New School book was from the mid-2000s and a chapter is devoted to the absolute necessity of a global, ubiquitous incidents database. Such a thing is proclaimed by many as carrying do or die importance, as if we need it to prove the existence of a threat, and moreover to prove our right to exist as security professionals. Do we really need a global database to prove to the C-levels that spending on security is necessary?

There are of course some real blockers with regard to the gathering of incidents data, not least the “what the heck just happened” factor, where post-incident, logging is turned on (it was off until the incident occurred) and $300k is invested in a SIEM solution – one that really doesn’t help the business to respond effectively to incidents, it just gives operations a nice vehicle they can use to diagnose non-security related problems, and of course the vendor/box pusher consultants knew what they were doing when they configured the thing.

Anyway it really isn’t terribly constructive to talk reality with regard to this subject. We need to talk about a theoretical world, somewhere over the rainbow – one where logging is enabled, internal detection controls are well configured, and internal IT ops and sec know how to respond to incidents and they understand log messages. Okay, most businesses don’t even know they were hacked until a botnet command and control box is owned by some supposed good guys somewhere, but all talk of security is null and void if we acknowledge reality here. So let’s not talk reality.

So incidents data simply won’t be available in many cases – that base is now covered. But what are we trying to achieve here? The proposal is to use past incidents data to help us prove the existence of a threat in formulating decisions on risk. We receive a penetration test report that tells us that vulnerability X was compromised 4 years ago in Timbuctoo. The other vulnerabilities mentioned in the report – according to our accurate database there is no mention of them ever having been compromised with resulting financial damage. So we fix vulnerability X and ignore the rest – a wise strategy, especially given that our incidents database hosts accurate information with check-sums and integrity built-in (well – I did say let’s not talk reality here).

But wait a minute – vulnerability X was exploited 4 years ago in a company Y in an industry sector Z. What if company Y’s network was wide open? Where is this leading? All networks are different. All businesses face different challenges. Even those in the same industry sector as each other are completely different. How ludicrous is this getting? Business A can even have the exact same network architecture as another business B in the same industry sector, and yet the exploitation of the same vulnerability X in business A can lead to $100K damages for business A, but $100 damages in business B. So are we really going to use these records of financial damages to formulate our risk acceptance, transferal, or mitigation decision? One would hope not.

As if this all isn’t bad enough: even if we ignore the hard reality of the limitations covered here, how long would it take before we have gathered enough information to call the database useful? 10 years? 20 years? One million incidents or…? Products are past their shelf life in less than a decade generally. I did come across an IBM AIX 4.1 (we’re talking mid-to-late 90s here) box at a customer site not so long ago, but thankfully that was an isolated incident.

After all the deliberation, it emerges that security is too complex – it cannot be reduced down to a simple picture a la “vulnerability X was exploited resulting in damages Y in company Z, and therefore we in Botnetz R Us need to address vulnerability X”.

All this is not to say that there is no benefit in knowing what’s going off out there in the wild. We can pick up on on-going bigger picture attack trends to enable us to prepare for what might be down the road for us. But do we really need details of past incidents to convince businesses to part with cash in risk mitigation, or moreover validate our existence?

What we’re really concerned with here is trust. The proponents of a big data repository of incidents would have it that we need such a thing because the powers that be don’t trust us. When we propose a mitigation of a particular risk, they don’t trust our advice. Unfortunately though, we might get one success story where we consult the oracle of all incidents and we do magically find an incident related to the problem we are trying to fix, and we use this “evidence” to convince the bosses. What about the next problem though? We can’t use the same card trick next time to address the next risk issue if no such problem was ever encountered (and reported) before by another company somewhere else in the world. By looking into the history of all incidents we’re setting a dangerous precedent, and rather than enabling trust, we’re making the situation even worse.

We don’t need an incidents history record. What we need is for our customers to trust us. Our customers are C-levels, other business units, home users, in-house reps, and so on. How do we get them to trust us? What we need is a single accreditation path for security professionals, one that ties Analysts to past experience as an IT admin or developer, and one that ties Security Managers with past Analyst experience. With this, security managers can confidently deliver risk-related advisories to their superiors, safe in the knowledge that their message is backed up by Analysts with solid tech experience, and in whose advice they are comfortable in placing their trust.

How To Break Into Security – Planet Earth Edition

The venerable Brian Krebs has recently been running some stories from various demigods of the infosec world, aimed at those wishing to enter the information security field – aspiring graduate ninjas, and others seeking the mythical pot of gold at the end of the rainbow.

First up there was Thomas Ptacek’s edition, then we had some pearls of wisdom from Bruce Schneier, Jeremiah Grossman, Richard Bejtlich, and then Charlie Miller.

Thomas Ptacek claimed about the security field: “It’s one of the few technology jobs where the most fun roles are well compensated”, and “if you watched “Sneakers” and ideated a life spent breaking or defending software, great news: infosec can be more fun in real life, and it’s fairly lucrative.” Well…I am not refuting any of this, but it is certainly quite unusual for jobs in information security to be fun.

Thomas talks of the benefits of having extensive programming experience – and this is something I advocate myself quite strenuously (more on that later). Thomas’s viewpoint was centered around appsec, which is fine. I think for myself, in terms of defending networks, we need to look at two main areas: appsec and operating systems / databases. There is some good advice in Thomas’s article about breaking into application security, although I wouldn’t say that this area is everything. There’s a little too much religious fervor about appsec in the article for my liking, just as one often sees a lack of balance in other areas, such as CISSP-worship, and malware “reverse engineering” – basically – “my area is the alpha and omega – all that was, is, and ever will be”.

There are areas in security that matter other than application security. I wouldn’t say it’s all about appsec, but I would say though that the two main areas are appsec and operating system and database configuration. “But operating systems and databases are also applications” I hear you say. Yes, but when we’re talking appsec in infosec, we’re usually talking about web applications. There are few times when we suffer web attacks where one single exploit leads to something really bad happening, apart from perhaps a SQLi that in itself reveals sensitive database-hosted information.

With regard to web applications, I think Thomas is spot-on with his comments about learning web application security assessment and how to get clued up in this area. Also the comments about Nessus and getting into penetration testing – sad but true.

With regard to Bruce Schneier’s “breaking into” edition – nothing he says is factually incorrect (most of what we talk about is subjective, neither black nor white, but grey), but the comments are not at all close to the coal-face realities of most businesses’ in-house or service providers’ practices. Wannabe security pros reading this will be sure to get grandiose visions of their future lives as security pros – but in 90%-plus of cases the vocational activities of security professionals do not match the picture painted by Bruce Schneier. I’ll explain more on this later.

Richard Bejtlich’s response was centered around getting into penetration testing with Metasploit, and Jeremiah Grossman’s was the most all-encompassing and, in my opinion, the response that had the most value for security pro wannabes – although Charlie Miller’s wasn’t far behind. In particular Mr Grossman plays down the effectiveness of accreditation programs in favor of practical experience – wise words indeed. Charlie Miller had similar opinions.

In Charlie Miller’s response there was a lot of talk of really specialized niche areas like reverse engineering and so on, but he does temper this with “I really do a lot of reverse engineering and binary analysis, which is unusual.” and “for those starting out, it probably makes more sense to learn some languages more useful for web applications, like PHP or Java or something. The majority of jobs I come across in application security are web applications, so unless you’re a dinosaur like me, you probably want to become a web app expert. Web application security is a lot easier to get started in as well.”

Charlie Miller’s response segues into the wider scope here, and that is the realities “out there”. The articles are based on responses from folk who’ve rightly become esteemed professionals in their field, and there is some really valuable insight there. The thing is – there is a lot of talk of the security field being a place for artists and magicians, and of being technically demanding, but there are very few places where technical acrobatics in security are seen as having any value to businesses, or even to security line managers for that matter – and therefore such intellectual capital just does not appear on the balance books of these businesses.

We’ve been through a vicious 70 to 80% of a sine wave of pain in security since the late 90s. The security world painted by the fellowship of five assembled by Brian Krebs (speaking of whom, it would be nice to hear his version of the “breaking into” story) seems much more like the world of the late 90s than today. Security was heavily de-engineered through the 2000s. Things started to change around 2010, but we’re still very much in non-tech territory, and many of the security line managers who will interview prospective Security Analysts will not have an IT background, and their security practice will be hands-off, non-tech, check-list based. Anything “tech” to do with information risk management will be handled by an ops team, but there won’t be any “reverse engineering” or “fuzzing” over there…far from it. More like – firewall configuration, running bad vulnerability management suites, monitoring IDS/SIEM logs.

Picking up on some of Bruce Schneier’s comments: “You can be an expert in viruses, or policies, or cryptography.” Policies? OK, this part is true – if you want to be an expert in policies, whatever that is, you can certainly find this in 90%+ of businesses – but is that an economically viable and sustainable position, or for that matter anything any homo sapiens would ever really want to do? Probably accountancy would be a better bet.

“Viruses?” Hmm. There are increasing numbers of openings for “malware reverse engineers”, where really what they’re looking for is incident response – they want to know what happened after they discovered that some of their laptops were connecting out to various addresses in places they hadn’t heard of, prior to the click of doom. If you get interviewed for one of these positions, be prepared to answer questions about SIEM technologies and incident response. These openings are not usually associated with reverse engineering to the level of detail of the pattern-makers in the anti-virus software market – and if they are, they needn’t be, and the line manager will get to realize this after a while.

And “cryptography”? We have Bruce’s comments, and then we have a chapter title from a book by Shostack and Stewart: “Amateurs Study Cryptography; Professionals Study Economics”. So who do you believe here? To be fair, the book was published at the height of the de-engineering phase of security and it sort of fitted with the agenda of the times. I would go with Bruce Schneier again here, but with some qualification (his final paragraph also talks about math): most security departments won’t ever go anywhere near anything mathematical or even crypto-related, and when they do, it will be with a checklist approach that goes something like “is a strong key used?”, “yes”, “ok, good, tick in a box”, or “is DES used?”, “yes”, “ok, that’s bad, I think, anyway use triple DES please” – with no further assistance for the dev team.

With regard to cryptography, right or wrong, it’s really only on tiny islands where the math is seen as relevant – places where they code the apps and people are assigned to review the security of the app. And these territories are keenly disputed. Most of the concerns in the rest of the business world will never get more techy than discussions about key management – which is the more common challenge with crypto anyway (mostly we’re using publicly known crypto algorithms in security, so the challenge is in protection of the key).

There were comments from all respondents about testing applications and breaking into networks. Again, the places where skills like reverse engineering are actually relevant are few. Bruce Schneier painted a grand picture of thinking like a hacker, not just a mere engineer, in order to be able to create systems that are difficult to compromise by the most advanced hackers. But most humans who design systems are not even thinking about security, or they’re on such a tight deadline (with related KPIs and bonuses) that they side-step security. So as a pen test guru wannabe, you may possess extremely high levels of fuzzing, exploit coding, and reversing skill, but you will never get to use them. In fact you will intimidate most interviewers, and you’ll be over-qualified. There will be easier ways to break into systems in most cases. In fact in general, as I commented in an earlier post, security is insufficiently mature in most organizations to warrant any manual penetration testing whatsoever.

So really, what I have had to say here may sound harsh or “negative”, but I would hate for anyone to get into a field that they thought was challenging, only to discover that it’s anything but. I believe things are changing, but it’s at rather a slow pace, and the field of security has been broken for so long, that there are very few around who know how to fix it. Security is getting more challenging, that’s for sure, but for the security pro who goes looking for a job in this field because of the tech challenge aspect: just be very careful about what you’re getting into. Many jobs sound great from the job descriptions posted to recruitment agents, but this is only a show. The reality inside the team is that you may be sent to Siberia if you so much as use a tech-sounding word like “computer” or “IP address” – while this sounds unreal I can assure the reader that such a scenario is most certainly real, although it is of course more often the case that the job would just not be offered to a “techie”.

It’s impossible to cover the jobs aspect of information security in just this article. I had a more comprehensive stab at it in Chapter Six of Security De-engineering. I would say to the prospective security pro, though, that the advice given by the five mentioned in this article is not bad advice at all – it’s just that you may push yourself to higher levels and not see significant benefit from it in your career any time soon. There will always be some benefit, just not as much as you might expect. Certainly you will have more confidence, but you will also probably over-qualify yourself for your current position.

For a would-be security pro with a tech inclination, getting into security might not be as hard as you thought. Thomas Ptacek mentioned: “A good way to move into penetration testing: grab some industry standard tools and use an Amazon EC2 account to set up a “shooting range” to attack. Some of the best-known tools are available for free: the Nessus scanner, for instance, while not an application security tool, is free and can land you a network penetration testing role that you can use as a springboard to breaking applications.” Believe me, this is not a difficult target, and because of the way the security industry is, you could very well land a penetration testing job with just the preparation Mr Ptacek describes.
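For the sake of illustration, here is the sort of toy exercise a newcomer might run against their own “shooting range” before graduating to Nessus and friends – a minimal sketch in Python, standard library only, and the target hostname is purely hypothetical (point it only at machines you own):

import socket

TARGET = "my-ec2-lab.example.com"   # hypothetical lab host that you own - never point this at anything else
COMMON_PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 8080]

def probe(host, port, timeout=2.0):
    """Return a service banner for an open TCP port, '' if open but silent, None if closed/filtered."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            try:
                return s.recv(256).decode(errors="replace").strip()
            except socket.timeout:
                return ""
    except OSError:
        return None

for port in COMMON_PORTS:
    banner = probe(TARGET, port)
    if banner is not None:
        print("%d/tcp open  %s" % (port, banner[:60]))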

All I have had to say here is aimed at managing expectations. You may well find that you have to market yourself down a bit just to get a foothold in the industry. Once there, by pushing yourself to learn more and acquire more advanced skills, it could be that you eventually osmosize towards the ideal job of your dreams. However, these positions are rare in reality. Many of the folk I worked with in the earlier days of my career had the ninja skills discussed in the five articles mentioned here. Once we got to the early 2000s they realized that security was no longer a place for them. Has the demand for these advanced skills returned? It has, to some extent, but the demand is still minuscule compared to the skills actually required by the vast majority of businesses.

Blame The CEO?

I would like to start by issuing a warning about the content in this article. I will be taking cynicism to the next level, so the baby-eyed and “positive” among us should avert their gaze after this first paragraph. For those in tune with their higher consciousness, I will summarise: Can we blame the C-levels for our problems? Answer: no. Ok, pass on through now. More positive vibes may be found in the department of delusion down the hall.

The word “salt” was for the first time ever inserted into the hall of fame of Information Security buzzwords after the Linkedin hack infamy, and then Yahoo came along and spoiled the ridicule-fest by showing to the world that they could do even better than Linkedin by not actually using any password hashing at all.

There is a tendency among the masses to latch onto little islands of intellectual property in the security world. Just as we see with “cloud”, the “salt” element of the Linkedin affair was given plenty of focus, because as a result of the incident, many security professionals had learned something new – a rare occurrence in the usual agenda of tick-in-box-marking that most analysts are mandated to follow.

With Linkedin, little coverage was given to the tedious old nebulous “compromise” element, or “how were the passwords compromised?”. No – the “salt” part was much more exciting to hose into blogs and twitter – but with hundreds of analysts talking about the value of “salting”, the value of this pearl of wisdom was falling exponentially with time – there was a limited amount of time in which to become famous. If you were tardy in showing the world that you understood what “salting” means, your tweet wouldn’t be favourited or re-tweeted, and the analyst would have to step back off the stage and return to their usual humdrum existence of entering ticks in boxes, telling devs to use two-factor authentication as a matter of “best practices”, advising them to “run a vulnerability scanner against it”, and other such tick-related matters.

Infosec was down and flailing around helplessly, and then came the Linkedin case. The inevitable fall-out from the “salting” incident (I don’t call it the Linkedin incident any more) was a kick of sand in the face of the already writhing information security industry. Although I don’t know of any specific cases, based on twelve happy years of marriage with infosec, I’m sure they’re as abundant as the stars and occurring as I write this. I am sure that nine times out of ten, whenever devs need to store a password, they are told by CISSP-toting self-righteous analysts (and blindly backed up by their managers) that it is “best practice” and “mandatory” to use salting with passwords – regardless of all the other factors that go into making up the full picture of risk, the operational costs, and other needless overheads. There will be times when salting is a good idea. Other times not. There should never be a zero-value proposition here – but blanket, parrot-fashion advisories are exactly that.
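For anyone still wondering what all the fuss was about, this is roughly all that “salting” amounts to – a minimal sketch in Python (standard library only, illustrative parameters, and emphatically not a blanket recommendation for every application):

import hashlib, hmac, os

def hash_password(password, salt=None):
    """Derive a salted hash; the per-password random salt defeats precomputed look-up tables."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
    return salt, digest

def verify_password(password, salt, expected):
    """Constant-time comparison of a freshly derived digest against the stored one."""
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))   # True
print(verify_password("Tr0ub4dor&3", salt, stored))                    # False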

The subject matter of the previous few paragraphs serves as a recent illustration of our plight in security. My book covers a much larger piece of the circus-o-sphere and it’s certainly too much to even try to summarise here, but we are epic-failing on a daily basis. One of the subjects I cover in Security De-engineering is the role of C-level executives in security, and I ask the question “can we blame the C-levels” for the broken state of infosec?

Let’s take a trip down memory lane. The heady days of the late 90s were owned by technical wizards, sometimes known as Hackers. They had green hair and piercings. If a CEO ran some variant of a Windows OS on her laptop, she was greeted with a stream of expletives. OK, “best practices” was nowhere to be seen in the response, and it is a much more offensive swear-phrase than any swear word I can think of, but the point is that the Hacker’s riposte could have been better.

Hackers have little or no business acumen. They have the tech talent that the complexities of information security demand, but back when they worked in infosec in the late 90s, they were poorly managed. Artists need an agent to represent them, and there were no agents.

Hackers could theoretically be locked in a room with a cat-flap for food and drink, no email, and no phone. The only person they should be allowed to communicate with is their immediate security line manager. They could be used as a vault of intellectual capital, or a Swiss army knife in the organisation. Problem was – the right kind of management was always lacking. Organisations need an interface between themselves and the Hackers. No such interface ever existed, unfortunately.

The upper levels of management gave up working with Hackers for various reasons, not just for scaring the living daylights out of their normal earthling colleagues. Then came the early noughties. Hackers were replaced by respectable analysts with suits and ties, who sounded nice, used the words “governance” and “non-repudiation” a lot, and didn’t swear at their managers regardless of ineptitude levels. The problems with the latter – the CASE (Checklist and Standards Evangelist) – were illustrated by the “salting” debacle and Linkedin.

There is a link between information and information security (did you notice the play on words there – “information” was used in… “information”… and also in… “information security” – thereby implying that there might just be a connection). The CASE successors to the realm actually managed to convince themselves (but few others in the business world) that security actually has nothing to do with information technology. It is apparently all about “management” and “processes”. So – every analyst is now a “manager”?! So who in the organisation is going to actually talk to ops and devs and solve the risk versus cost-of-safeguard puzzles? There are no foot soldiers, only a security department composed entirely of managers.

Another side of our woes is the security products space. Products have been pushed by fierce marketing engines and given ten-out-of-ten ratings by objective information security publications. The products supposedly automate areas of information risk management and tell us things we didn’t already know about our networks. The problem is that when you automate processes, you’re looking for accurate results. Right? Well, in certain areas such as vulnerability assessment, we don’t even get close to accurate results – and vulnerability assessment is one area where accuracy is sorely needed, especially if we are using automation to assess vulnerability in critical situations.

Some product classes do actually make sense to deploy in some business cases, but the number of cases where something like SIEM (for example) actually makes sense as an investment is a small fraction of the whole.

Security line managers feel the pressure of compliance as the main part of their function. In-house advice is pretty much of the out-house variety in most cases, and service providers aren’t always so objective when it comes to technology acquisition. Products are purchased as a show of diligence for clueless auditors and a short cut to a tick-in-a-box.

So the current security landscape is one of a lack of appropriate skills, especially at security line-management level, which in turn leads to market support for whatever bone-headed product idea can be dreamed up next. The problems come in two boxes then – skills and products.

Is it the case that security analysts and line managers are all of the belief that everything is fine in their corner? The slew of incidents, outgoing connections to strange addresses in eastern Europe, and the loss of ownership of workstation subnets – none of it is through any fault of information security professionals? I have heard some use the excuse “we can never keep out the bad guys all the time” – which actually is true, but there is little real confidence in the delivery of this message. Even among the most confidence-projecting of us, there is an inward sense of disharmony with things. We all know, just from intuition, that security is about IT (not just business) and that the value we offer to businesses is extremely limited in most cases.

CEOs and other silver-heads read non-IT publications, and oftentimes incidents will be reported there, even in publications such as the Financial Times. Many of them are genuinely concerned about their information assets, and they will ask for updates from someone like a CISO. It is unlikely, as some suggest, that they don’t care about information security, and it is also unlikely, as is often claimed, that security budgets are rejected without any consideration.

CEOs will make decisions on security spending based on available information. Have they ever been in a position where they could trust our line reporting? Back in the 90s they were sworn at with business-averse rhetoric. Later they were bombarded with IT-averse rhetoric and green pie charts from expensive vulnerability management suites, delivered with a perceptible lack of confidence in analyst skills and available tools.

So can we blame CEOs? Of course not, and our priority now should be the re-engineering of skills, with a better system of “graduation” through the “ranks” in security, and an associated single body of accreditation (Chapter 11 of Security De-engineering covers this in more detail). With better skills, the products market would also follow suit and change radically. All of this would enable CISOs to report on security postures with confidence, which in turn enables trust at the next level up the ladder.

The idea that CEOs are responsible for all our problems is one of the sacred holy cows of the security industry (along with some others that I will be covering). Ladies and gentlemen: security analysts, managers, self-proclaimed “Evangelists”, “Subject Matter Experts”, and other ego-packing gurus of our time are responsible for the problems.

The Perils Of Automation In Vulnerability Assessment

Those who have read my book will be familiar with this topic, but really, even if literally everyone had read the book already, I would still be covering this matter, because the magnitude of the problem demands coverage, and more coverage. Even when we reach the point of “we the 99% do understand that we really shouldn’t be doing this stuff any more”, the severity of the issue demands that, for the sake of any lingering one per cent, yet further coverage is warranted.

The specific area of information security in which automation fails completely (yet we still persist in engaging with such technology) is vulnerability scanning – in particular unauthenticated vulnerability scanning, as in black box scanning of web applications and networks. “Run a scanner by it” still appears in so many articles and sound bites in security – it’s still very much part of the furniture. Very expensive software suites are built on the use of automated unauthenticated scanning – in some cases taking an open source scanning engine, wrapping a nice GUI with pie charts around it, and slapping a 25K USD price tag on it.

As of 2012 there are still numerous supporters of vulnerability scanning. The majority still seem to really believe the premise that it is possible (or worse…”best practices”), by use of unauthenticated vulnerability scanning, to automatically deduce a picture of vulnerability on a target – a picture that does not come with a bucket load of condiments in the way of significant false negatives.

False positives are a drain on resources – and yes, there’s a bucket load of those too – but false negatives, in critical situations, are not what the doctor ordered.

Even some of the more senior folk around (note: I did not use the word “Evangelist”) support the use of these tools. Whereas none of them would ever advocate replacing manual penetration testing with an auto-scan, there does seem to be a great deal of “positivity” around the scanning scene. I think this is all just the zen talking, to be honest, but really, when we engage with zen, we often disengage from reality and objectivity. It’s OK to say bad stuff occasionally; who knows, it might even be in line with the direction given to one’s life by one’s higher consciousness.

Way back in the day, when we started off on our path of self-destruction, I ran a pressie on auto-scanning and false expectations, and I duly suffered the ignominy of being accused of Luddite tendencies. But… thing is, see: we had already outsourced our penetration testing to some other firm somewhere – so what was it that I was afraid of losing? Yes, I was a manual tester person, but it was more than 12 months since we had outsourced all that jazz – and I wasn’t about to start fighting to get it back. Furthermore, there were no actual logical objections put forward. The feedback was little more than primordial groans and remote virtual eye-rolling – especially when I displayed a chart that showed unauthenticated scanning carrying similar value to port scanning. Yes – it is almost that bad.

It could be because of my exposure to automated scanners that I was able to see the picture as clearly as I did. Actually in the first few runs of a scanning tool (it was the now retired Cybercop Scanner – it actually displayed a 3D rotating map of a network – well, one subnet anyway) I wasn’t aware myself of the lack of usefulness of these tools. I also used other tools to check results, but most of the time they all returned similar results.

Over the course of two years I conducted more than one hundred scans of client perimeters and internal subnets, all with similar results. During this time I was sifting through the endless detritus of false positives, with the realization that in some cases I was spending literally hours dissecting findings. In many cases it was first necessary to figure out what the tool was actually doing to deduce its findings, and for this I used a test Linux box and Ethereal (now Wireshark).

I’m not sure that “testing”, as in the usage of the verb, is appropriate, because it was clear that the tool wasn’t actually doing any testing. In most cases, especially with listening services such as Apache and other webservers, the tool just grabs a banner, finds a version string, and then does a correlation look-up against its database of publicly declared vulnerabilities. What is produced is a list of publicly declared vulnerabilities for the detected version. No actual “probing” is conducted, or testing as such.
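To make the point concrete, the logic amounts to roughly the sketch below (Python; the target hostname and the version-to-advisory mapping are invented for illustration) – note that nothing about the target’s actual behaviour is ever exercised:

import re
import socket

# Invented sample data - a real scanner ships a large database of (product, version) -> advisories.
KNOWN_ISSUES = {
    ("Apache", "2.2.3"): ["some publicly declared vulnerability", "another one"],
}

def grab_http_server_header(host, port=80, timeout=3.0):
    """Read the Server: header from an HTTP response - a banner grab, nothing more."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        response = s.recv(4096).decode(errors="replace")
    match = re.search(r"^Server:\s*(\S+)/([\d.]+)", response, re.MULTILINE)
    return match.groups() if match else (None, None)

product, version = grab_http_server_header("scan-target.example.com")   # hypothetical target
findings = KNOWN_ISSUES.get((product, version), [])
print("%s %s: %d 'findings', none of them actually verified" % (product, version, len(findings)))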

The few tests that produce reasonably reliable returns are those such as SNMP community string tests (or as reliable as UDP allows), or another Blast From The Past – the finger service “intelligence” vulnerability (no comment). The tools now have four-figure numbers of testing patterns, less than 10% of which constitute acceptably accurate tests. These tools should be able to conduct some FTP configuration tests, because it can all be done with a politically correct “I talk to you, you talk to me, I ask some questions, you give me answers” type of testing. But no. Something like a test for anonymous FTP being enabled works for a few FTP servers, but not for some of the other more popular FTP packages. They all return different responses to the same probe, you see…
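An anonymous FTP check really can be done in that polite question-and-answer fashion – a minimal sketch (Python standard library, hypothetical hostname) is below; the trouble, as noted, is that different FTP packages word their welcomes and refusals differently, and scanners that pattern-match on the response text trip over exactly that:

from ftplib import FTP, error_perm

def anonymous_ftp_allowed(host, timeout=5.0):
    """Ask the server directly whether the 'anonymous' account is accepted."""
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login("anonymous", "probe@example.com")
            return True
    except error_perm:        # 5xx reply - login refused
        return False
    except OSError:           # unreachable, connection refused, timed out...
        return None

print(anonymous_ftp_allowed("ftp-lab.example.com"))   # hypothetical lab host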

I mentioned Cybercop Scanner before, but it’s important not to get hung up on product names. The key is the nature of the scanning itself and its practical limitations. Many of our beloved security software packages are not coded by devs who have any inkling whatsoever of anything to do with security. But really, we could have a tool designed and produced with all the miracles that human ingenuity affords, and at some point we would still hit a very low and very hard ceiling in terms of what we can achieve with unauthenticated vulnerability assessment.

With automated vulnerability assessment we’re not doing anything that can destabilize a service (there are some DoS tests and “potentially disruptive tests” but these are fairly useless). We do not do something like running an exploit and making shell connection attempts, or anything of the sort. So what we can really achieve will always be extremely limited. Anyway, why would we want to do any of this when we have a perfectly fine root account to use? Or is that not something we really do in security (get on boxes and poke around as uid=0)? Is that ops ninja territory specifically (See my earlier article on OS Security, and as was said recently by a famous commentator in our field: “Platforms bitches!”)?

The possibility exists to check everything we ever needed to check with authenticated scanning, but here, as of 2012, we are still some way short – and that is largely because of a lack of client demand (crikey)! Some spend a cajillion on a software package that does authenticated testing of most popular OSs, plus unauthenticated false positive generation, and _only_ use the sophisticated, resource-intensive false positive generation engine – “that fixes APTs”.
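The contrast with authenticated assessment is the whole point: with legitimate credentials you can simply ask the box what it is running instead of guessing from banners. A rough sketch of the idea (assumes key-based SSH access to a Debian-style host; the hostname, package, and version values are illustrative only):

import subprocess

HOST = "internal-web01.example.com"   # illustrative host
PACKAGE = "openssl"                   # illustrative package
FIXED_IN = "1.0.1g"                   # illustrative 'fixed in' version from an advisory

def installed_version(host, package):
    """Query the host directly over SSH for the installed package version - no banner guessing."""
    result = subprocess.run(
        ["ssh", host, "dpkg-query -W -f='${Version}' " + package],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip().strip("'")

version = installed_version(HOST, PACKAGE)
print("%s on %s: installed %s, advisory fixed in %s" % (PACKAGE, HOST, version, FIXED_IN))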

The masses seem to be more aware of the shortcomings with automated web application vulnerability scanners, but anyway, yes, the picture here is similarly harsh on the eye. Spend a few thousand dollars on these tools? I can’t see why anyone would do that. Perhaps because the tool was given 5 star ratings by unbiased infosec publications? Meanwhile many firms continue to bet their crown jewels on the use of automated vulnerability assessment.

The automobile industry gradually phased in automation over a few decades, but even today there are still plenty of actual homo sapiens working in car factories. We should only ever be automating processes when we can get results that are accurate within the bounds of acceptable risks. Is it acceptable to use unauthenticated automated scanning as the sole means of vulnerability assessment for the top 20% of our most critical devices? It is true that we can never detect every problem, and what is safe today may not be safe tomorrow. But we don’t want to miss the most glaring critical vulnerabilities either – and yet this is exactly the current practice of the majority of businesses.