How To Break Into Information Security

I’ve been asked a few times recently, usually by operations folk, to give some advice about how to break into the security sector, so under much pain I decided to commit my thoughts on the subject to this web log post. I’ve commented on this subject before and more extensively in chapter 6 of Security De-engineering, but this version is more in line with the times (up to 2012 I was advising a wide pass-by trajectory of planet infosec) and it will be shorter – you have my word(s).

First, be wary of trying to get into security purely for financial reasons (David Froud has an excellent blog and one of his posts covered this point well). At the time of writing it is possible to get into the field just by having an IT background and a CISSP. But don’t do that unless you have what’s REALLY required (and do not judge what is REALLY required based on job descriptions – at the time of writing, organisations are still making plenty of mistakes there). Summarising very briefly:

  • You feel like you have grown out of pure IT-based roles and sort of excelled in whatever IT field you were involved in. You’re the IT professional who doesn’t just clear their problem tickets and switch off. You are, for example, looking for ways to automate things, and self-teach around the subject.
  • Don’t think about getting into security straight from higher education. While it is possible, don’t do it. Just…don’t. Operational Security (or opsec/devopssec) is an option, but have some awareness of what this is (scroll down to the end for an explanation).
  • Flexibility: can jump freely from a Cisco switch to an Oracle Database on any Operating System. Taking an example: some IT folk are religious about Unix and experience a mental block when it comes to Windows – this doesn’t work for security. Others have some kind of aversion to Cloud, whereas a better mindset for the field is one that embraces the challenge. Security pros in the “engineer” box should be enthusiastic about the new opportunities for learning offered by extended use of YAML, choosing the ideal federated identity management solution, Puppet, Azure Powershell, and so on. [In theory] projects where on-premise applications are being migrated to Cloud are not [in theory] such a bad place to be in security [in theory].
  • You like coding. Maybe you did some Python or some other scripting. What I’ve noticed is that coding skills are more frequently being seen as requirements. In fact I heard that one organisation went as far as putting candidates through a programming test for a security role. Python, Ruby, Shell ([Li,U]nix) and Powershell are common requirements these days. But even if role descriptions don’t mention coding as a requirement – having these skills demonstrates the kind of flexibility and enthusiasm that go well with infosec. “Regex” comes up a lot, but if you’ve done lots of Python/Ruby and/or Unix sed/awk you will be more than familiar with regular expressions (see the short sketch after this list).
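
As an aside, the level of scripting being talked about here is nothing exotic. The sketch below (Python) pulls failed SSH logins out of an auth log and counts them per source address – the log path and sshd message format are assumptions based on a typical Linux box, so adjust for your own environment:

    # Minimal sketch: count failed SSH logins per source IP from an auth log.
    # The log path and message format are assumptions (typical Linux sshd via
    # syslog), not a universal standard - adjust the regex for your own logs.
    import re
    from collections import Counter

    PATTERN = re.compile(
        r"Failed password for (?:invalid user )?\S+ "
        r"from (\d{1,3}(?:\.\d{1,3}){3})"
    )

    counts = Counter()
    with open("/var/log/auth.log") as logfile:
        for line in logfile:
            match = PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1

    for ip, hits in counts.most_common(10):
        print(f"{hits:5d} failed logins from {ip}")

If a candidate can knock that together without reaching for Stack Overflow, that is the kind of flexibility being described.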

There is a non-tech element to security (sometimes referred to as “GRC”) but this is something you can get into later. Being aware of international standards and checking to see what’s in a typical corporate security policy is a good idea, but don’t be under the impression that you need to be able to recite verses from these. Generally speaking “writing stuff” and communication is more of a requirement in security than other fields, but you don’t need to be polished at day zero. There are some who see the progression path as Security Analyst –> Security Consultant (Analyst who can communicate effectively).

Another common motivator is hacker conferences or Mr Robot. Infosec isn’t like that. Even on the dark side – you see Elliot in a hoody writing code to electronic techno-beats, but hackers rarely write their own code to compromise networks; mostly the code is written for them by others. And as with the femtocell and Raspberry Pi incidents, they usually have to assume a physical presence on the inside, or they are an internal employee themselves, or they dupe someone on the inside of the organisation under attack. Even if you’re in a testing role on the light side, the tests are vastly restricted and there’s a very canned approach to the whole thing, with performance KPIs based on reports or something else that doesn’t link to actual intellectual value. It’s far from glamorous. There’s an awful lot of misunderstanding out there. What is spoken about at hacker confz is interesting, but it’s not usually stuff that is required to prove the existence of vulnerability in a commercial penetration test – most networks are not particularly well defended, and very little attention is given to results, more so because in most cases the only concern is getting ticks in boxes for an audit – and the auditors are often 12 years old and have never seen a command shell. Quality is rarely a concern.

It’s good practice to build up a list of the more influential bloggers, build a decent Twitter feed, and check what’s happening daily. Also, here are the books that I found most useful in terms of starting out in the field:

  • TCP/IP Illustrated – there are 3 volumes. 1 and 2 are the most useful. Then…
  • Building Internet Firewalls – really a very good way to understand some of the bigger picture ideas behind network architecture design and data flows. I hear rolling of eyes from some sectors, but the same principles apply to Cloud and other “modern” ideas that are from the 90s. With Cloud you have less control over network aspects but network access control and trust relationships are still very much a concern.
  • Network Security Assessment – the earlier versions are also still pertinent unless you will never see a Secure Shell or SMB port (hint: you will).
  • Security Engineering – there’s a very good chapter on Cryptography and Key Management.
  • The Art of Software Security Assessment – whether or not you will be doing appsec for a living, you should look at OWASP’s site and check out WebGoat. They are reportedly looking to bolster their API security coverage, which is nice (a lot of APIs are full of the same holes that were plugged in public apps by the same orgs some years ago). But if you are planning on network penetration testing or application security as a day job, then read this book – it’s priceless and still very applicable today.
  • The Phoenix Project – a good illustrative background for gaining a better understanding of the devops landscape.

Also – take a look at perhaps a Windows security standard from the range of CIS benchmarks.

Finally – as I alluded to earlier – opsec is not security. Why do I say this? Because I have come across many who believe they made it as a security pro once they joined a SOC/NOC team, and then switched off. Security is a holistic function that covers the entire organisation – not just its IT estate, but its people, management, availability and resilience concerns, and processes. As an example – you could be part of a SOC team analysing the alerts generated by a SIEM (BTW some of the best SIEM material online is that written by Dr Anton Chuvakin). This is a very product-centric role. So what knowledge is required to architect a SIEM and design its correlation rules? This is security. The same applies to IDS. Responding to alerts and working with the product is opsec; security is designing the rulebase on an internal node that feeds off a strategically placed network tap. You need to know how hackers work, among other areas (see above). A further example: opsec takes the alerts generated by vulnerability management enterprise suites and maybe does some basic false positive testing. But how does the organisation respond effectively to a discovered vulnerability? This is security.
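
To make the SIEM point concrete, below is a hypothetical correlation rule sketched in Python. Real SIEMs express this in their own rule languages, and the threshold and window values here are invented for illustration – deciding that this rule should exist, and tuning those numbers for the estate, is the security part; watching the alerts it generates is the opsec part.

    # Hypothetical correlation rule sketch: alert when one source IP produces
    # too many failed logins within a sliding time window. The numbers are
    # invented for illustration and would need tuning per environment.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 300   # 5-minute sliding window (assumption)
    THRESHOLD = 20         # failed logins per source before alerting (assumption)

    events_by_source = defaultdict(deque)

    def on_failed_login(source_ip: str, timestamp: float) -> None:
        window = events_by_source[source_ip]
        window.append(timestamp)
        # Drop events that have fallen out of the time window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= THRESHOLD:
            print(f"ALERT: {len(window)} failed logins from {source_ip} "
                  f"in the last {WINDOW_SECONDS}s")
            window.clear()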

Addressing The Information Security Skills Gap

We are told there is a skills gap in information security. I agree – there is, but recent suggestions to address the gap take us to dangerous places that are great for recruitment agencies, but not so great for the business world.

I want to steer away from use of the word ‘skills’ in this article because it’s too micro, and the word has been violated by modern hiring practices. We are not looking for ‘Websense’ skills, ‘DLP’ skills, or, as I saw recently, ‘HSM’ skills. These requirements are silly unless the plan is for organisations to spend 10 to 50 times more than they need on human resource and have a security team of 300. It’s healthier for organisations to look at ‘habits’ or ‘backgrounds’, and along those lines, in information security we’re looking for the following:

  • At least 5 years in an IT discipline: sys admin, DBA, devops bod, programmer for example
  • Evidence of having excelled in those positions and sort of grown out of them
  • Flexibility: for example, the crusty Radagast BSD-derivative disciple who has no fundamentalist views of other operating systems (think ‘Windows’) and not only can happily work with something like Active Directory, but they actually love working with Active Directory
  • A good-to-have-but-not-critical is past evidence of breaking or making things, but this should be seen as a nice bonus. In its own right it is insufficient – recruiting from hacker confz is far from guaranteed to work – too much to cover here

So really a career in infosec should be seen as a sort of ‘graduation’ on from other IT vocations. There should be an entrance exam based on core technologies and penetration testing. The career progression path goes something like: Analyst (5 years) –> Consultant –> Architect/Manager. Managers and architects cannot be effective if they do not have a solid IT background. An architect who doesn’t know her way around a Cisco router, cannot implement a new SIEM correlation rule, or cannot run or interpret the output from a packet sniffer is not an architect.

Analysts and Consultants should be skilled with the core building blocks to the level of being confidently handed administrative access to production systems. As it is, security pros find it hard to even get read-only access to firewall management suites. And fast access to information on firewall rules can be critical.

Some may believe that individuals fitting the above profile are hard to find, and they’d be right. However, with the aforementioned model, the workforce will change from lots of people with micro-skills or product-based pseudo skills, to fewer people who are just fast learners and whose core areas complement each other. If you consider that a team of 300 could be reduced to 6 – the game has changed beyond recognition.

Quoting a recent article: “The most in demand cyber security certifications were Security+, Ethical Hacking, Network+, CISSP, and A+. The most in demand skills were Ethical Hacking, Computer Forensics, CISSP, Malware Analysis, and Advanced Penetration Testing”. There are more problems with this than can be covered in a reasonable time frame, but none of these should ever be called ‘skills’. Of these, Penetration Testing (leave out the ‘ethics’ qualifier because it adds a distasteful layer of judgment on top of the law) is the only one that should be called a specialisation in its own right.

And yes, Governance, Risk and Compliance (GRC) is an area that needs addressing, but this must be the role of the Information Security Manager. There is a connection between Information Security Manage-ment and Information Security Manage-er. Some organisations have separate GRC functions, the UK public sector usually has dedicated “assurance” functions, and, as I’ve seen with some law firms, they are separated from the rest of security and IT. Decision making on risk acceptance or mitigation, and areas such as Information Classification, MUST have an IT input, and this is the role of the Information Security Manager. There must be one holistic security team consisting of a few individuals and one Information Security Manager.

In security we should not be leaving the impression that one can leave higher education, take a course in forensics, get accreditation, and then go and get a job in forensics. This is not bridging the security skills gap – it’s adding security costs with scant return. If you know something about forensics (usually this will be seen as ‘Encase‘ by the uninitiated) but don’t even have the IT background, let alone the security background, you will not know where to look in an investigation, or have a picture of risk. You will not have an inkling of how systems are compromised or the macro-techniques used by malware authors. So you may know how to use Encase and take a full disk image, for example, but that will be the limit of your contribution. Doesn’t sound like a particularly rewarding way to spend 200 business days per year? You’d be right.

Sticking with the forensics theme: an Analyst with the right mindset can contribute effectively in an incident investigation from day one. There are some brief aspects of incident response for them to consider, but it is not advisable to view forensics/incident response as a deep area. We can call it a specialisation, just as an involuntary action such as breathing is a specialisation, but if we do, we are saying that it takes more than one person to change a light bulb.

Incident response from the organisational / Incident Response Plan (IRP) formation point of view is a one-day training course or a few hours of reading. The tech aspects are 99% not distinct from the core areas of IT and network security. This is not a specialisation.

Other areas such as DLP, Threat Intelligence, SIEM, Cryptography and Key Management – these can easily be adopted by the right security minds. And with regard to security products – it should be expected that security professionals pick up new tools on the fly and don’t need 2-week training courses that cost $4000. Some of the tools in the VM and proxy space are GUIs for older open source efforts such as Nessus, OpenVAS, and Squid, with which they will be well versed; and if they’re not, it will take an hour to pick up the essentials.

There’s been a lot of talk of Operating Systems (OS) thus far. Operating Systems are not ‘a thing from 1998’. Take an old idea that has been labelled ‘modern’ as an example: OK, let’s go with ‘Cloud’. Clouds have operating systems. VMs deployed to clouds have operating systems. When we deploy a critical service to a cloud, we cannot ignore the OS, even if it’s a PaaS deployment. So in security we need people who can view an OS in the same way that a hacker views an OS – we need to think about kill chains and local privilege elevations. The Threat and Vulnerability Management (TVM) challenge does not disappear just because you have PaaS’d everything. Moreover, if you have PaaS’d everything, you have immediately lost the TVM battle. As Beaker famously said in his cloud presentation – “Platforms Bitches”. Popular OSs like Windows and the *nix family (Linux included), and popular applications such as Oracle Database, are going to be around for some time yet, and it’s at the OS where the front lines are drawn.

Also, here is a common misconception that does not work: a secops/network engineer going straight into security with no evidence of interest in other areas. ‘Secops’ is not good preparation for a security career, mainly because secops is sort of purgatory. Just as “there is no Dana, only Zuul“, so “there is no secops, only ops”. There is only a security element to these roles because they cover operational processes with security products. That is anti-security.

All Analyst roles should have an element of penetration testing and appsec, and when I say penetration testing, I do mean unrestricted testing, as in an actual simulation. That means no restrictions on exploit usage or source address – because attackers do not have such restrictions. Why spend on this type of testing if it’s not an actual simulation?

Usage of Cisco Discovery Protocol (CDP) offers a good example of how a lack of penetration testing experience can impede a security team. If security is being done even marginally professionally in an organisation, there will exist a security standard for Cisco network devices that mandates the disabling of CDP. But once asked to disable CDP, network ops teams will want justification. Any experienced penetration tester knows the value of intelligence in expediting the attack effort, and CDP is a relative gold mine of intelligence that is blasted multicast around networks. It can, and often does, reveal the identity and IP address of a core switch. But without the testing experience, or knowledge of how attacks actually go down, the point will be lost, and the advisory will lack confidence.
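
For reference, the change itself is trivial – the hard part is the justification above. On Cisco IOS it looks something like this (the interface name is an example):

    ! Disable CDP globally on the device...
    no cdp run
    ! ...or leave it running but disable it on untrusted interfaces only
    interface GigabitEthernet0/1
     no cdp enable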

The points I’ve just covered are not actually ground-breaking at all. Analysts with a good core background in IT and network security can easily move into any new area that marketeers can dream up.

There is an intuition that Information Security has a connection with Information Technology, if only for the common word in them both (that was ‘Information’ by the way, in case you didn’t get it). However, as Upton Sinclair said “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”.

And please don’t create specialisations for Big Data or Internet of Things…woops, too late.

So, consider a small team of enthusiastic, flexible, fast learners, rather than a large team of people who must be trained at high cost to understand the UI of an application that was designed in the international language, and designed to be intuitive and easy to learn.

Consider using one person to change a light bulb, and don’t be the butt of future jokes.

Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with certain core skills groups whose abilities complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill”, a product, or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP TippingPoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 a skill, whereas the previous version is another skill? So if a person has experience with 7.5, are they not qualified to apply to a shop where the previous version is used? OK, this is taking it to the extreme, but I dare say there have been cases where this analogy is applicable.

How long does it take a person to get “skilled up” with HP ArcSight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again: is Symantec CCS a skill? No – it’s a product, not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment, I need to supply the tool with the IP addresses of targets, and I need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And I don’t want to configure the same test every time I run it – maybe this “templates” tab might be able to help me? Do I need a $4000, 2-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A product such as a Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI – the thin layer on top of the solution, which is itself much bigger. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help much with that bigger picture.

Vulnerability management itself has been around for decades. Then along came commercial products, which basically just slapped a GUI on processes and practices that had existed for 20 years or more, after which the jobs market decided to call the product the solution. The product is the skill now, whereas it’s really vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself, but it is also one that should be built in by default: a skill inherent in all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas for large volumes of intellectual capital. These are core technologies. Say a person has 5 years+ experience of working in Unix environments as a system administrator and has shown interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic ideas. HP uses “flex connectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders”, and so on. But guess what? I understand English, and if I don’t know what a word means, I can check an online dictionary. I know what a SIEM is designed to do, and I get the data flows and the architecture concept. Network aggregation of syslog and Windows Events is not an alien concept to me, and neither are the layers of the TCP/IP stack (a really basic requirement for all Analysts – or it should be). Therefore I can adapt very easily to new tools in this space.
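
The plumbing really is generic. For instance, pointing a Unix host’s syslog at a central collector is a one-liner whichever SIEM sits behind it – in rsyslog, something like the following (the hostname is a placeholder; “@@” means forward over TCP, a single “@” means UDP):

    # /etc/rsyslog.d/90-siem.conf - forward all facilities/priorities
    # to a central collector (hostname is a placeholder)
    *.*  @@siem-collector.example.com:514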

IPS/IDS and firewalls? Well, they’re not even very functional devices. If you have ever set up Snort or iptables you’ll be fine with whatever product is out there. Recently another Consultant and I were asked to configure a TippingPoint device. We were up and running in 10 minutes, with only a few small items to check against the product documentation. I have 15 years+ experience in the field but the other guy is new; nonetheless, he had configured another IPS product before and was immediately up and running – no problem. Of course, what to configure in the rule base is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security. It’s not specific to a product.
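
To illustrate where the generic knowledge sits, here is a simple Snort rule (the SID and message text are invented for this sketch). Typing it into any product’s equivalent syntax takes seconds; knowing that inbound SMB connection attempts from outside are worth alerting on is the part that transfers between products:

    # Illustrative Snort rule - sid and msg are invented for this sketch.
    # Alert on inbound SMB (TCP/445) connection attempts from outside.
    alert tcp $EXTERNAL_NET any -> $HOME_NET 445 (msg:"Inbound SMB connection attempt"; flags:S; sid:1000001; rev:1;)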

I’ve seen some truly crazy job specifications. One I saw was “Websense Specialist”!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would probably be filled by a Websense “Olympian”. But what sort of job is that? Carpe Diem, my friends, Carpe Diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, I don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resource and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire human resource able to cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.

Information Security Careers: The Merits Of Going In-house

Job hunting in information security can be a confusing game. The lack of any standard nomenclature across the sector doesn’t help in this regard. Some of the terms used to describe open positions can be interpreted in wildly different ways. “Architect” is a good example. This term can have a non-technical connotation with some, and a technical connotation with others.

There are plenty of pros who came into security, perhaps via the network penetration testing route, who only ever worked for consultancies that provide services, mainly for businesses such as banks and telcos. The majority of such “external” services are centered around network penetration testing and application testing.

I myself started out in infosec on the consultancy path. My colleagues were whiz kids and some were well known in the field. Some would call them “hackers”, others “ethical” or “white hat” network penetration testers. This article does not cover ethics or pander to some of the verdicts that tend to be passed outside of the law.

Many Analysts and Consultants will face the decision to go in-house at some point in their careers, or remain in a service provider capacity. Others may be in-house and considering the switch to a consultancy. This post hopefully can help the decision making process.

The idea of going in-house and, for example, taking up an Analyst position with a big bank usually doesn’t hold much appeal for external consultants. The idea prevails that this type of position is boring or unchallenging. I also held this viewpoint, and it was largely derived from the few visions and sound bites I had witnessed behind the veil. However, what I discovered when I took up an Analyst position with a large logistics firm was that nothing could be further from the truth. Going in-house can benefit one’s career immensely and open the eyes to the real challenges in security.

Of course my experiences do not apply across the whole spectrum of in-house security positions. Some actually are boring for technically oriented folk. Different organisations do things in different ways. Some just use their security department for compliance purposes with no attention to detail. However there are also plenty that engage effectively with other teams such as IT operations and development project teams.

As an Analyst in a large, complex environment, the opportunity exists to learn a great deal more about security than one could as an external consultant. An external consultant’s exposure to an organisation’s security challenges will usually only come in the form of a network or application assessment, and even if the testing is conducted thoroughly and over a period of weeks, the view will be extremely limited. The test report is sent to the client, and it’s a common assumption that all of the problems described in the report can be easily addressed. In the vast majority of cases, nothing could be further from the truth. What becomes apparent at a very early stage in one’s life as an in-house Analyst is that very few vulnerabilities can be mitigated easily.

One of the main pillars of a security strategy is Vulnerability Management. The basis of any vulnerability management program is the security standard – the document that depicts how, from a security perspective, computer operating systems, DBMS, network devices, and so on, should be configured. So an Analyst will put together a list of configuration items and compose a security standard. Next they will meet with another team, usually IT operations, in an attempt to actually implement the standard in existing and future platforms. For many, this will be the point where they realize the real nature of the challenges.

Taking an example: the security department at a bank is attempting to introduce a Red Hat Enterprise Linux security standard as a live document. How many of the configuration directives can be implemented across the board with an acceptable level of risk in terms of breaking applications or otherwise impacting the business? The answer is “not many”, and this will come as a surprise to many external consultants. Limiting factors can come from surprising sources. Enlightened IT ops and dev teams can open security’s eyes in this regard and help them to understand how the business really functions.
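
To make that concrete, here is a tiny, illustrative sample of the kind of entries such a standard contains – these are genuine sshd_config directives, but the selection is mine and is nowhere near a complete standard. Each looks harmless until someone discovers a cron job or support process that depends on the old behaviour:

    # Illustrative entries from the SSH section of a hypothetical RHEL
    # standard (a tiny sample, not a complete or prescriptive list).
    # Breaks anything that logs in directly as root:
    PermitRootLogin no
    # Breaks legacy SSHv1-only clients:
    Protocol 2
    PermitEmptyPasswords no
    # Breaks admins who rely on X11-forwarded GUI tools:
    X11Forwarding no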

The whole process of vulnerability management, minus VM product marketeers’ diatribe, is basically detection, then deduce the risk, then take decisions on how to address the risk (i.e. don’t address the vulnerability and accept the risk, or address / work around the vulnerability and mitigate the risk). But as an external consultant, one will only usually get to hand a client a list of vulnerabilities and that will be the end of the story. As an in-house Security Analyst, one gets to take the process from start to finish and learn a great deal more in the process.

For a security consultant passing beyond the iron curtain, the best thing that can possibly happen to their career is to land in a situation where they get to interface with the enlightened ones in IT operations, network operations (usually there are a few in net ops who really know their security quite well), and application architects (and that’s where it gets to be really fun).

For the Security Consultant who has just metamorphosed into an in-house Analyst, it may well be the first time in their career that they encounter real business concerns. IT operations teams live in fear of disrupting applications that generate huge revenues per minute. The fear will be palpable, and it encourages a kind of professionalism that one may never have a direct need for as an external consultant. Generally, the in-house Analyst gets to experience in detail how the business translates into applications, and then into servers, databases, and data flows. Then the risks to information assets seem much more real.

The internal challenge versus the external challenge in security is of course one of protection versus breaking in. Security is full of rock stars who break into badly defended customer networks and then advertise the feat from the rooftops. In between commercial tests and twittering schoolyard insults, the rock stars are preparing their next Black Hat speech, with research into the latest exotic sploit technique that will never be used in a live test, because the target can easily be compromised with simple methods.

However, the rock stars get all the attention, and security is all about reversing and fuzzing – or so we hear. The bigger challenge is not breaking in, it’s protection; but then protection is a lot less exotic and sexy than breaking in. So there lies the main disadvantage of going in-house: it could mean less attention for the gifted Analyst. But for many this won’t be such an issue, because the internal job is much more challenging and interesting, and it also lights up a CV, especially if the names are those in banking and telecoms.

How about going full circle? How about 3 years with a service provider, then 5 years in-house, then going back to consulting? Such a consultant is indeed a powerful weapon for consultancies and adds a whole new dimension for service providers (and their portfolio of services can be expanded). In fact such a security professional would be well positioned to start their own consultancy at this stage.

So in conclusion: going in-house can be the best thing that a Security Consultant can do with their careers. Is going in-house less interesting? Not at all. Does it mean you will get less attention? You can still speak at conferences probably.

Scangate Re-visited: Vulnerability Scanners Uncovered

I have covered VA tools before but I feel that one year later, the same misconceptions prevail. The notion that VA tools really can be used to give a decent picture of vulnerability is still heavily embedded, and that notion in itself presents a serious vulnerability for businesses.

A more concise run-down on the functionality of VA warez may be worth a try. At least let’s give it one last shot. On second thoughts, no, don’t shoot anything.

Actually forget “positive” or “negative” views on VAs before reading this. I am just going to present the facts based on what I know myself and of course I’m open to logical, objective discussion. I may have missed something.

Why the focus on VA? Well, the tools are still so commonplace and heavily used and I don’t believe that’s in our best interests.

What I discovered many years ago (it was actually 2002 at first) was that discussions around these tools can evoke some quite emotional responses. “Emotional”, you quiz? Yes. I mean, when you think about it, whole empires have been built using these tools. The tools are so widespread in security and used as the basis of corporate VM programs. VM market revenues run at around 1 billion USD annually. Songs and poems have been written about VAs – OK, I can’t back that up, but careers have been built, and whole enterprise-level security software suites have been built around a nasty open source VA engine.

I presented on the subject of automation in VA all those years ago, and put forward the notion that running VA tools doesn’t carry much more value than something like this: nmap -v -sS -sV <targets>. Any Security Analyst worth their weight in spam would see open ports and service banners, and quickly deduce vulnerability from this limited perspective. “Limited”, maybe, but is a typical VA tool in a better position to interrogate a target autotragically?

One pre-qualifier I need to throw out is that the type of scanners I will discuss here are Nessus-like scanners, the modus operandi of which is to use unauthenticated means to scan a target. Nessus itself isn’t the main focus but it’s the tool that’s most well known and widely used. The others do not present any major advantages over Nessus. In fact Nessus is really as good as it gets. There’s a highly limited potential with these tools and Nessus reaches that limit.

Over the course of my infosec career I have had the privilege to be in a position where I have been coerced into using VAs extensively, and spent many long hours investigating false positives. In many cases I set up a dummy Linux target and used a packet sniffer to deduce what the tool was doing. As a summary, the findings were approximately:

  • Out of the 1000s of tests, or “patterns”, configured in the tools, only a few have the potential to result in accurate/useful findings. Some examples of these are SNMP community string tests, and tests for plain text services (e.g. telnet, FTP).
  • The vast majority of the other tests merely grab a service “banner”. For example, the tool port scans, finds an open port 80 TCP, then runs a test to grab a service banner (e.g. Apache 2.2.22, mickey mouse plug-in, bla bla). I was sort of expecting the tool to do some more probing having found a specific service and version, but in most cases it does not (the “banner grab” amounts to little more than the sketch following this list).
  • The tool, having found what it thinks is a certain application layer service and version, then correlates its finding with its database of publicly disclosed vulnerabilities for the detected service.
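
The “banner grab” in question amounts to little more than the following sketch (the target hostname is a placeholder):

    # Minimal sketch of the banner grab described above: connect to a web
    # port and read back the Server header. Host/port are placeholders.
    import socket

    def grab_http_banner(host: str, port: int = 80, timeout: float = 5.0) -> str:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            response = sock.recv(4096).decode(errors="replace")
        for line in response.splitlines():
            if line.lower().startswith("server:"):
                return line  # e.g. "Server: Apache/2.2.22 (Unix)"
        return "no Server header disclosed"

    print(grab_http_banner("target.example.com"))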

Even for some of the plain text services, some of the tests which have the potential to reveal useful findings have been botched by the developers. For example, tests for anonymous FTP only work with a very specific flavour of FTP daemon. Other FTP daemons return different messages for successful anonymous logins, and the tool does not accommodate this.
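
A more robust test keys off the FTP protocol response rather than one daemon’s welcome text. In Python it can be as simple as this sketch (the hostname is a placeholder):

    # Sketch: test for anonymous FTP by relying on the protocol response
    # rather than matching a specific daemon's banner text.
    from ftplib import FTP, all_errors

    def allows_anonymous(host: str, timeout: float = 10.0) -> bool:
        try:
            with FTP(host, timeout=timeout) as ftp:
                ftp.login()  # defaults to user "anonymous"; raises on refusal
                return True
        except all_errors:
            return False

    print(allows_anonymous("ftp.example.com"))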

Also, what happens if a service is moved from its default port? I had some spectacular failures running Nessus against an FTP service on port 1980 TCP (usually it listens on port 21). Different timing options were tested. Nessus uses an nmap engine for port scanning, but nmap by itself is usually able to find non-default port services using default settings.
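
By way of comparison, something like the following will usually identify the service correctly wherever it listens, because nmap fingerprints the service itself rather than trusting the port number:

    nmap -sV -p 1980 <target>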

So in summary, what the VA tools do is mostly just report that you are running ridiculous unencrypted blast-from-the-past services or old, down-level services – maybe. Really I would hope security teams wouldn’t need to spend 25K USD on an enterprise solution to tell them this.

False positives are one thing, but false negatives are quite another. Popular magazines always report something like a 50% success rate in finding vulnerabilities in staged tests. Why is it always 50%? Remember also that the product under testing is usually one from a vendor who pays for a full-spread ad in that magazine.

Putting numbers to false negatives makes little sense with huge, complex software packages of millions of lines of source code. However, it occurred to me not so long ago whilst doing some white box testing on a client’s critical infrastructure: how many of the vulnerabilities under testing could possibly be discovered by use of a VA tool? In the case of Oracle Database the answer was less than 5%. And when we’re talking Oracle, we’re usually talking critical, as in crown jewels critical.

If nothing else, the main aspect I would hope the reader would take out of this discussion is about expectation. The expectation that is set by marketing people with VA tools is that the tools really can be used to accurately detect a wide range of vulnerability, and you can bet your business on the tools by using them to test critical infrastructure. Ladies and gentlemen: please don’t be deceived by this!

Can you safely replace manual testing with use of these tools? Yes, but only if the target has zero value to the business.

Security in Virtual Machine Environments. And the planet.

This post is based on a recent article on the CIO.com site.

I have to say, when I read the title of the article, the cynic in me once again prevailed. And indeed there will be some cynicism and sarcasm in this article, so if that offends the reader, I would like to suggest other sources of information: those which do not accurately reflect the state of the information security industry. Unfortunately the truth is often accompanied by at least cynicism. Indeed, if I meet an IT professional who isn’t cynical and sarcastic, I do find it hard to trust them.

Near the end of the article there will be a quiz with a scammed prize offering, just to take the edge off the punishment of the endless “negativity” and abject non-MBA’edness.

“While organizations have been hot to virtualize their machine operations, that zeal hasn’t been transferred to their adoption of good security practices”. Well you see they’re two different things. Using VMs reduces power and physical space requirements. Note the word “physical” here and being physical, the benefits are easier to understand.

Physical implies something which takes physical form – a matter energy field. Decision makers are familiar with such energy fields. There are other examples in their lives, such as tables, chairs, other people, walls, cars. Then there is information in electronic form – a similar thing (also an energy field), but the hunter/gatherer in some of us doesn’t see it that way, and still as of 2013 the concept eludes many IT decision makers who have fought their way up through the ranks as a result of excellent performance in their IT careers (no – it’s not just because they have an MBA, or know the right people).

There is a concept at board level of insuring a building (another matter energy field) against damages from natural causes. But even when 80% of information assets are in electronic form, there is still a disconnect from the information. Come on chaps, we’ve been doing this for 20 years now!

Josh Corman recently tweeted “We depend on software just as much as steel and concrete, its just that software is infinitely more attack-able!”. Mr Corman felt the need to make this statement. OK, like most tweets from wise men in security, it was intended to boost his Klout score, but one does not achieve that by tweeting stuff that everybody already knows. I would trust someone like Mr Corman to know where the gaps are in the mental portfolios of IT decision makers.

OK, so moving on…”Nearly half (42 percent) of the 346 administrators participating in the security vendor BeyondTrust‘s survey said they don’t use any security tools regularly as part of operating their virtual systems…”

What tools? You mean anti-virus and firewalls, or the latest heuristic HIDS box of shite? Call me business-friendly, but I don’t want to see endless tools on end points, regardless of their function. So if they’re not using tools, is it not at this point good journalism to comment on which tools exactly? Personally I want to see a local firewall and the obligatory and increasingly less beneficial anti-virus (and I do not care as to where, who, whenceforth, or which one…preferably the one where the word “heuristic” is not used in the marketing drivel on the box). Now if you’re talking system hardening and utilizing built-in logging capability – great, that’s a different story, and worthy of a cuddly toy as a prize.

“Insecure practices when creating new virtual images is a systemic problem” – it is, but how many security problems can you really eradicate at build time while being sure that the change won’t break an application or introduce some other problem? When practical IT-oriented security folk actually try to do this with skilled and experienced ops and devs, they realise that fewer than 50% of their policies can be implemented safely in a corporate build image. Other security changes need to be assessed on a per-application basis.

Forget VMs and clouds for a moment – 90%+ of firms are not rolling out effectively hardened build images for any platform. The information security world is still some way off with practices in the other VM field (Vulnerability Management).

“If an administrator clones a machine or rolls back a snapshot,”… “the security risks that those machines represent are bubbled up to the administrator, and they can make decisions as to whether they should be powered on, off or left in state.”

OK, so “the security risks that those machines represent are bubbled up to the administrator”!!?? [Double-take] Really? OK, this whole security thing really can be automated then? In that case, every platform should be installed as a VM managed under VMware vCenter with the BeyondTrust plugin. A tab that can show us our risks? There has to be a distinction between vulnerability and risk here, because they are two quite different things. No, but seriously, I would want to know how those vulnerabilities are detected, because to date the information security industry still doesn’t have an accurate way to do this for some platforms.

Another quote: “It’s pretty clear that virtualization has ripped up operational practices and that security lags woefully behind the operational practice of managing the virtual infrastructure,”. I would edit that down to just the two words “security” and “lags”. What with virtualised stuff being a subset of the full spectrum of play things and all.

“Making matters worse is that traditional security tools don’t work very well in virtual environments”. In this case I would leave five words remaining. A Kenwood Food Mixer goes to the person who can guess which ones those are. See? Who said security isn’t fun?

“System operators believe that somehow virtualization provides their environments with security not found in the world of physical machines”. Now we’re going Twilight Zone. We’ve been discussing the inter-cluster sized gap between the physical world and electronic information in this article, and now we have this? Segmentation fault, core dumped.

Anyway – virtualization does increase security in some cases. It depends how the VM has been configured and what type of networking config is used, but if we’re talking virtualised servers that advertise services to port scanners, and/or share SMB with their hosts, then clearly the virtualised aspect is suddenly very real. VM guests used in a NAT’ing setup are a decent way to hide information on a laptop/mobile device or anything that hooks into an untrusted network (read: “corporate private network”).

The vendor who was being interviewed finished up with “Every product sounds the same,” …”They all make you secure. And none of them deliver.” Probably if I were a vendor I might not say that.

Sorry, I just find discussions of security with “radical new infrastructure” to be something of a waste of bandwidth. We have some very fundamental, ground level problems in information security that are actually not so hard to understand or even solve, at least until it comes to self-reflection and the thought of looking for a new line of work.

All of these “VM” and “cloud” and “BYOD” discussions would suddenly disappear with the introduction of integrity in our little world because with that, the bigger picture of skills, accreditation, and therefore trust would be solved (note the lack of a CISSP/CEH dig there).

I covered the problems and solutions in detail in Security De-engineering, but you know what? The solution (chapter 11) is no big secret. It comes from the gift of intuition with which many humans are endowed. Anyway – someone had to say it; now it’s in black and white.

Hardening is Hard If You’re Doing it Right

Yes, ladies and gentlemen, hardening is hard. If it’s not hard, then there are two possibilities. One is that the maturity of information security in the organization is at such a level that security happens both effectively and transparently – it’s fully integrated into the fabric of BAU processes, and many of said processes are fully automated with accurate results. The second (far more likely given the reality of security in 2013) is that the hardening is not well implemented.

For the purpose of this diatribe, let us first define “hardening” so that we can all be reading from the same hymn sheet. When I’m talking about hardening here, the theme is one of first assessing vulnerability, then addressing the business risk presented by the vulnerability. This can apply to applications, or operating systems, or any aspect of risk assessment on corporate infrastructure.

In assessing vulnerability, if you’re following a check list, hardening is not hard – in fact a parrot can repeat pearls of wisdom from a check list. But the result of merely following a check list will be either wide open critical hosts or over-spending on security – usually the former. For sure, critical production systems will be impacted, and I don’t mean in a positive way.

You see, like most things in security, some thinking is involved. It does suit the agenda of many in this field to pretend that security analysis can be reduced down to parrot-fashion recital of a check list. Unfortunately though, some neural activity is required, at least if gaining the trust of our customers (C-levels, other business units, home users, etc) is important to us.

The actual contents of the check list should be the easy part, although unfortunately, as of 2013, we all seem to be using different versions of the check list, and some versions are appallingly lacking. The worst offenders here deliver with a quality that is inversely proportional to the prices they charge – and these are usually external auditors from big 4 consultancies, all of whom have very decent check lists, but who also fail to ensure that Consultants use said check lists. There are plenty of cases where the auditor knocks up their own garage’y style shell script for testing. In one case I witnessed not so long ago, the script for testing Red Hat Enterprise Linux consisted of 6 tests (!), and one of the tests showed a misunderstanding of the purpose of the /etc/ftpusers file (it lists users who are denied FTP access, not those who are permitted).

But the focus here is not on the methods deployed by auditors; it’s more general than that. Vulnerability testing in general is not a small subject. I have posted previously on the subject of “manual” network penetration testing. In summary: there will be a need for some businesses to show auditors that their perimeter has been assessed by a “trusted third party”, but in terms of pure value, very few businesses should be paying for the standard two-week delivery with a four-person team. For businesses to see any real value in a network penetration test, their security has to be at a certain level of maturity. Most businesses are nowhere near that level.

Then there is the subject of automated, unauthenticated “scanning” techniques which I have also written about extensively, both in an earlier post and in Chapter Five of Security De-engineering. In summary, the methodology used in unauthenticated vulnerability scanning results in inaccuracy, large numbers of false positives, wasted resources, and annoyed operations and development teams. This is not a problem with any particular tool, although some of them are especially bad. It is a limitation of the concept of unauthenticated testing, which amounts to little more than pure guesswork in vulnerability assessment.

How about the growing numbers of “vulnerability management” products out there (which do not “manage” vulnerability, they make an attempt at assessing it)? Well, most of them are either purely an expensive graphical interface to [insert free/open source scanner name], or, if the tool was designed to make a serious attempt at accurate vulnerability assessment (most were not), the tests will be lacking or over-done, inaccurate, and/or doing the scanning in an insecure way (e.g. the application is run over a public URL, with the result that all of your configuration data, including admin passwords, are held by an untrusted third party).

In one case, a very expensive VM product literally does nothing other than port scan. It is configured with hundreds of “test” patterns for different types of target (MS Windows, *nix, etc.) but if you’re familiar with your OS configurations, you will look at the tool output and be immediately suspicious. I ran the tool against a Linux and a Windows test target and “packet sniffed” the scanning engine’s probe attempts. In summary, the tool does nothing. It just produces a long list of configuration items (so, effectively, a kind of Security Standard for the target) without actually testing for the existence of vulnerability.
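
The packet sniffing itself is nothing special – something along these lines, with the interface name and scanner address as placeholders:

    # Capture everything the scanning engine sends at the test target,
    # for offline inspection (interface and address are placeholders).
    tcpdump -i eth0 -n host 192.0.2.10 -w scanner-probes.pcap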

So the overall principle: the company [hopefully] has a security standard for each major operating system and database on their network, and each item in the standard needs to be tested for all, or some, of the information asset hosts in the organization, depending on the overall strategy and network architecture. As of the time of writing, there will need to be some manual/scripted augmentation of automatic vulnerability assessment processes.

So once armed with a list of vulnerabilities, what does one do with it? The vulnerability assessment is only the first step. What has to happen after that? Can Security just toss the report over to ops and hope for the best? Yes, they can, but this wouldn’t make them very popular, and there also needs to be some input from security regarding the actual risk to the business. Taking the typical function of operations teams (I commented on the functions and relationships between security and operations in an earlier post), if there is no input from security, then every risk mitigation that carries any kind of impact will be blocked.

There are some security service providers/consultancies who offer a testing AND a subsequent hardening service. They want to offer both detection AND a solution, and this is very innovative and noble of them. However, how many security vulnerabilities can be addressed blindly without impacting critical production processes? Rhetorical question: can applications be broken by applying security fixes? If I remove the setuid bit from a root-owned X Window-related binary, it probably has no effect on business processes. Right? What if operations teams can no longer authenticate via their usual graphical interface? That is at least a little bit disruptive.
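
For the curious, the sort of change being discussed looks like this (the binary path is an example – the point of this whole section is that you check what depends on it first):

    # List setuid-root binaries as candidates for review...
    find / -user root -perm -4000 -type f 2>/dev/null
    # ...then strip the setuid bit from a reviewed candidate (example path).
    chmod u-s /usr/bin/example-binary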

In practice, as it turns out, if you look at a Security Standard for a core technology – let’s take Oracle 11g as an example – how many of the numerous elements of the standard can be implemented without fear of breaking applications, limiting access for users or administrators, or generally just making troubleshooting of critical applications a lot less efficient? The answer is: not many. Dependencies and other problems can come from surprising sources.

Who in the organization knows about dependencies and the complexities of production systems? Usually that would be IT/Network Operations. And how about application-related dependencies? That would be application architects, or just generally we’ll say “dev teams”, as they’re so affectionately referred to these days. So the point: even if security does have admin access to IT resources (rare), is the risk mitigation/hardening a job purely for security? Of course the answer is a resounding no – and the same goes for IT Operations.

So, operations and application architects bring knowledge of the complexities of apps and infrastructure to the table. Security brings knowledge of the network architecture (data flows, firewall configurations, network device configurations), the risk of each vulnerability (how hard is it to exploit, and what is the impact?), and the importance to the business of information assets/applications. Armed with the aforementioned knowledge, informed and sensible decisions on what to do with the risk (accept, mitigate, work around, or transfer) can be made by the organization – not by security or operations alone.

The early days of deciding what to do with the risk will be slow and difficult and there might even be some feisty exchanges, but eventually, addressing the risk becomes a mature, documented process that almost melts into the background hum of the machinery of a business.

Migrating South: The Devolution Of Security From Security

Devolution might seem a strong word to use. In this article I will be discussing the pros and cons of the migration of some of the more technical elements of information security to IT operations teams.

By the dictionary definition of the word, “devolution” implies a downgrade of security – but suffice it to say, my point does not even remotely imply that operations teams are subordinate to security. In fact, in many cases security has been marginalized such that a security manager (if such a function even exists) reports to a CIO, or some other managerial entity within IT operations. Whether this is right or wrong is subjective, and also not the subject here.

Of course there are other department names that have metamorphosed out of the primordial soup… “Security Operations” or SecOps, DevOps, SecDev, SecOpsDev, SecOpsOps, DevSecOps, SecSecOps and so on. The discussion here is really about security knowledge, and the intellectual capital that needs to exist in a large-sized organisation. Where this intellectual capital resides doesn’t bother me at all – the name on the sign is irrelevant. Terms such as Security and Operations are the more traditional labels on the boxes, and no, this is not something “from the 90s”. These two names are probably the more common in business usage these days, and so these are the references I will use.

Examples of functions that have already been, and continue to be, farmed out to Ops are Vulnerability Management, SIEM, firewalls, IDS/IPS, and Identity Management. In detail, which aspects of these functions are teflonned (non-stick) off? How about all of them? All aspects of the implementation project, including management, are handled by ops teams. And then in production, ops will handle all aspects of monitoring, problem resolution, incident handling… ad infinitum.

A further pre-qualification is about ideal and actual security skills that are commonly present. Make no mistake…in some cases a shift of tech functions to ops teams will actually result in improvements, but this is only because the self-constructed mandate of the security department is completely non-tech, and some tech at a moderate cost will usually be better than zero tech, checklists, and so on.

We need to talk about typical ops skills. Of course there will be the occasional operations team member who is well versed in security matters, and also has a handle on the business aspects, but this is extra-curricular and rare. Ops team members are usually system administrators. If we take Unix security as an example, they will be familiar with at least filesystem permissions and umask settings, so there is a level of security knowledge. Cisco engineers will have a concept of SNMP community strings and ACLs. Oracle DBAs will know about profiles and roles.

But is the typical security portfolio of system administrators wide enough to form the foundations of an effective information security program? Not really. In fact it’s some way short. Security Analysts need a grasp not only of, for example, file system permissions; they also need to know how attackers actually elevate privileges and compromise, say, a critical database host. They need to know attack vectors and how to defend against them. This kind of knowledge isn’t a typical component of a system administrator’s training schedule. It’s one thing to know the effect of a world-write permission bit on a directory, but what is the actual security impact? With some directories this can be relatively harmless; with others, it can present considerable business risk.
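To make the world-write example concrete, here is a minimal sketch (my own illustration, not from any corporate standard) that walks a filesystem and flags world-writable directories, distinguishing those with the sticky bit set (like /tmp, usually acceptable) from those without:

```python
#!/usr/bin/env python3
"""Sketch: flag world-writable directories on a Unix host.

World-writable plus the sticky bit (e.g. /tmp) is usually acceptable;
world-writable without it means any local user can delete or replace
files in that directory - a classic aid to privilege elevation.
"""
import os
import stat
import sys

def audit(root="/"):
    for dirpath, _dirs, _files in os.walk(root):
        try:
            mode = os.lstat(dirpath).st_mode
        except OSError:
            continue  # unreadable or vanished mid-walk
        if mode & stat.S_IWOTH:
            severity = "info" if mode & stat.S_ISVTX else "REVIEW"
            print(f"[{severity}] {stat.filemode(mode)} {dirpath}")

if __name__ == "__main__":
    audit(sys.argv[1] if len(sys.argv) > 1 else "/")
```

The flag itself is the checklist item; deciding whether a hit under, say, an application’s data directory is harmless or a serious business risk is the analysis part that box-ticking never reaches.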

The stance from ops will be to patch and protect. While this is [sometimes] better than nothing, there are ways to compromise servers other than exploiting known vulnerabilities. There are zero days (i.e. undeclared vulnerabilities for which no patch has been released), and also means of installing back doors and trojans that do not involve exploiting local bugs.

So without the kind of knowledge I have been discussing, how would ops handle a case where a project team blocks the install of a patch because it breaks some aspect of their business-critical application? In most cases they will just agree not to install the patch. In considering the business risk, several variables come into play: network architecture, the qualitative technical risk to the host, the value of information assets…and also, is there a workaround? Is a workaround or compromise even worth the time and effort? Do the developers need to re-work their app at a cost of $15000?

A lack of security input in network operations leads to cases where over-redundancy is deployed: literally every switch and router gets a hot standby. So take the budget for a core network infrastructure and just double it – in most cases this is excessive expenditure.

With firewall rules, ops teams have a concept of blocking incoming connections, but it’s not unusual for egress to be overlooked, with all the “bad netizen”, malware / private data harvesting, and reverse telnet implications. Do we really want our corporate domain name being blacklisted?
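As a hedged illustration of the egress point, here is a crude sketch an analyst might use to show ops that outbound filtering is absent (the target hostname is a placeholder for a listener you control; the port list is arbitrary):

```python
#!/usr/bin/env python3
"""Sketch: crude egress-filtering check from an internal host.

If outbound connections succeed on ports the firewall policy says
should be blocked, egress has been overlooked. TARGET is a
placeholder for a host you control, listening on the test ports.
"""
import socket

TARGET = "egress-test.example.com"      # hypothetical external listener
PORTS = [25, 53, 80, 443, 6667, 31337]  # smtp, dns, web, irc, arbitrary

for port in PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=3):
            print(f"{port}/tcp: OUTBOUND ALLOWED")
    except OSError:
        print(f"{port}/tcp: blocked (or no listener)")
```

Reverse telnet and malware call-homes use exactly these kinds of open outbound paths.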

Another common symptom of a devolved security model is the excessive use of automated scanners in vulnerability assessment, without any awareness of the shortcomings of this family of products. The result is a “just run a scanner against it” approach to critical production servers, which misses the kind of LHF (Low Hanging Fruit) false negatives that bad guys and malware writers just love to see.
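As a sketch of what “more than just running the scanner” looks like in practice (the hostname below is hypothetical), here is the kind of thirty-second manual check – anonymous FTP, a classic piece of LHF that scanners mis-report surprisingly often – that an analyst would verify by hand:

```python
#!/usr/bin/env python3
"""Sketch: manually verify anonymous FTP - classic low hanging fruit
that automated scanners sometimes mis-report. HOST is a placeholder.
"""
from ftplib import FTP, error_perm

HOST = "ftp.internal.example.com"  # hypothetical target

try:
    with FTP(HOST, timeout=5) as ftp:
        ftp.login()  # no arguments = anonymous login
        print("anonymous FTP allowed; root listing:")
        ftp.retrlines("LIST")
except error_perm as exc:
    print(f"anonymous login refused: {exc}")
except OSError as exc:
    print(f"connection failed: {exc}")
```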

The results of devolution will be many and varied in nature. What I have described here is only a small sampling. Which department is responsible for security analysis is irrelevant, but the knowledge has to be there. I cover this topic more thoroughly in Chapter 5 of Security De-engineering, with more details on the utopian skills picture in Chapter 11.

The Search For Infosec Minds

Since the early 2000s, in various forums and in some of my other posts, I have commented on the state of play with a large degree of cynicism. Up to around 2009 or so this was greeted with cold reservation, smirks, grunts, and various other types of un-voiced displeasure. But since at least 2010, how things have changed.

If we fast forward from 2000 to 2005 or so, most businesses’ security functions had been reduced to base, parrot-fashion checklists; analysis and thinking were four letter words; and some businesses went as far as outsourcing security functions.

Many businesses who turned their backs on hackers just after the turn of the millennium have since found a need to review their strategies on security hiring. However, 10 years is a long time. The personnel who were originally tasked with forming a security function in the late 90s have since risen like phoenixes from the primordial chasm and, assisted by thermals, swooped up to graze on higher plains. Fast forward again to 2012, and the distance between security and IT is in the order of light years in most cases. The idea that security is purely a compliance game hasn’t changed, but unlike in the previous decade, it is in many cases no longer seen as sufficient to crawl sloth-like over the compliance finishing line every year.

Businesses were getting hacked all through the 2000s, but they weren’t aware of it. Things have changed now. For starters, the attacks do seem to be more frequent, and now there is SIEM, plus audit requirements to aggregate logs. In the past, even default log settings were annulled, with the result that there wasn’t even local logging, let alone network aggregation! Mind you, even after having been duped into buying every well-marketed detection product, businesses are still being hacked without knowing it. Quite often the incident comes to light only after a botnet command and control system has been owned by the good guys.

Generally there is more nefarious activity now, as a result of many factors, and information security programs are under more “real” focus now (compliance-only is not real focus; in fact it’s not real anything, apart from a real pain in the backside).

The problem is that with such a vast distance between IT and security for so long, there is utter confusion about how to get tech’d up. Some businesses are doing it by moving folk out of operations into security. This doesn’t work, and in my next post I will explain why.

As an example of the sort of confusion that reigns: there was one case I came across earlier in 2012 where a company in the movies business was hacked, and they were having their trailers, and in some cases actual movies, put up on various torrent sites for download. Their response was to retrench their outsourced security function and attempt to hire in-house analysts (one or two!). But what did they go looking for? Because they had suffered from malware problems, they went looking for, and I quote, “Malware Reverse Engineers”. Malware Reverse Engineers? What did they mean by this? After some investigation, it turned out they really were looking for malware reverse engineers – there was no misnomer – as in those who help to develop new patterns for anti-virus engines!! They had acquired a spanking new SIEM, but there was no focus on incident response capability, or prevention/protection at all.

As it turns out, “reverse engineer[ing]” is now a buzzword, whereas in the mid-2000s the buzzwords were “governance” and “identity management” (on the back of…“identity theft” – neat marketing scam), and so on. Now there are more tech-sounding buzzwords, which have different connotations depending on who you ask. These tech-sounding buzzwords find their way into skills requirements sent out by HR, and therefore also onto CVs in response. And the tech-sounding buzzwords are born from…yes, you got it…Black Hat conferences, and the multitude of other conferences, B-sides, C-sides, F-sides and so on, that are now as numerous as the stars in the sky.

The segue into Black Hat was quite deliberate. A fairly predictable development is the ongoing appearance at Black Hat of infosec managers who previously wouldn’t touch these events with a barge pole. They are popping up at these events primarily looking to recruit speakers, because presumably the speakers are among the sharper crayons in the box, even if nobody has any clue what they’re talking about.

Before I go on, I need to qualify that I am not going to cover ethics here, mainly because it’s not worth covering. I find the whole ethics brush to be somewhat judgmental and divisive. I prefer to let the law do the judgment.

Any attempt to recruit tech enthusiasts, or “hackers”, can’t be dismissed completely because it’s better than anything that could have been witnessed in 2005. But do businesses necessarily need to go looking for hackers? I think the answer is no. Hackers have a tendency to take security analysis under their patronage, but it has never been their show, and their show alone. Far from it.

In 2012 we can make a clear distinction between protection skills and breaking-in skills. This is because, as of 2012, 99.99…[recurring to infinity]% of business networks are poorly defended. So what are “breaking-in skills”? A “hacker” breaks into networks, compromises stuff, and posts it on pastebin.com. The hacker finds pride and confidence in such achievements. Next, she’s up on the stage at the next conference bleating about “reverse engineering”, “fuzzing”, or “anti forensics tool kits”…nobody is sure which language is being used, but she’s been offered 10 jobs only 5 minutes into her speech.

However, what is actually required to break into networks? Of the 20000+ paths which were wide open into the network, the hacker chose one of the many paths of least resistance. In most cases there is no great genius involved here. The term “script kiddy” used to refer to those who port scan, then hunt for publicly declared exploits for the services they find. There is IT literacy required for sure (often the exploits won’t run out-of-the-box; they need to be compiled for different OSs or debugged), but no creativity or cunning or…whatever other mythical qualities are associated with hacking in 2012.

The thought process behind hiring a hacker is typically one of “she knows how to break into my network, therefore she can defend against others trying to break in”, but it’s quite possible that nothing could be further from the truth. In 2012, being a hacker, or possessing “breaking-in skills”, doesn’t actually mean a great deal. Protection is a whole different game. Businesses should be more interested in protection as of 2012, and for at least the next decade.

But what does it take to protect? Protection is a more disciplined, comprehensive IT subject. Collectively, the in-house security team needs to know all the nooks and crannies – all the routers, databases, applications, clouds, and operating systems, how they all interact and how they’re all connected. They also collectively need to know the business importance of information assets and applications.

The key pillars of focus for new-hire Security Analysts should be Operating Systems and Applications. When we talk about operating systems and security, the image that comes to mind is of auditors going through a checklist in some tedious box-ticking exercise. But OS security is more than that, and it’s the front line in the protection battle. The checklists are important (I mean checklists as in standards and policies), but there are two sides to each item on the checklist: one is the detail of how to practically exploit the vulnerability and the potential tech impact; the other is the operational/business impact of the associated safeguard. In other words, OS security is far more than a check-list, box-ticking activity.

In 12 years I never met a “hacker” who could name more than 3 or 4 local privilege elevation vectors for any popular Operating System. They will know the details of the vulnerability they used to root a server last month, but perhaps not the other 100 or so that are covered off by the corporate security standard for that Operating System. So the protection skills don’t come by default just because someone has taken to the stand at a conference.
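To give one concrete class of elevation vector from such a standard, here is a minimal sketch (mine, purely illustrative) that inventories setuid-root executables – each one a potential privilege elevation surface that hardening standards enumerate and conference talks rarely mention:

```python
#!/usr/bin/env python3
"""Sketch: inventory setuid-root binaries - one of the local privilege
elevation surfaces a corporate OS hardening standard covers."""
import os
import stat

def find_setuid_root(root="/"):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable or vanished mid-walk
            if (stat.S_ISREG(st.st_mode)
                    and st.st_mode & stat.S_ISUID
                    and st.st_uid == 0):
                print(path)

if __name__ == "__main__":
    find_setuid_root()
```

Every binary in that list is a candidate for removal, permission tightening, or at least an entry in the standard – the kind of unglamorous coverage that constitutes protection.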

Skills such as “reverse engineering” and “fuzzing” are hard to attain and can be used to compromise systems that are well defended. But the reality is that very few systems are so well defended that such niche skills are ever needed. In 70+ tests in which I have either taken part or been a witness, even where the tests were quite unrestricted, “fuzzing” wouldn’t have been required to compromise targets – not even close.

A theoretical security team for a 10000+ node business could be made up of a half dozen or so Analysts, plus a Security Manager. Analysts can come from a background of 5 years in admin/ops or development. To “break into” security, they already have their experience in a core technology (Unix, Windows, Oracle, Cisco etc); they can then demonstrate competence in one or more other core technologies (to demonstrate flexibility), programming/scripting, and security testing with those platforms.

Once qualified as a Security Analyst, the Analyst has a specialization in at least two core technologies. At least two Analysts can cover application security; then there are other areas such as incident handling and forensics. As for Security Managers: once in possession of 5 years’ “time served” as an Analyst, they qualify for a manager’s exam, which when passed qualifies them for a role as a Security Manager. The Security Manager is the interface, or agent, between the technical artists (the Analysts) and the business.

Overall then, it is far from the case that hackers are unsuited to vocational in-house security roles (moreover, I always like to see “spare time” programming experience on a resume, because it demonstrates enthusiasm and creativity). But it is also not the case that Analyst positions are under the sole patronage of Black Hat speakers. Hackers still need to demonstrate their capabilities in protection, and in doing “grown-up” or “boring” things, before being hired. There is no great compelling need for businesses to hire a hacker, although as of today, it could be that a hooligan who throws stones through security windows is as close as they can get to effective network protection.

Somewhere Over The Rainbow – A Story About A Global Ubiquitous Record of All Things Incident

One of my posts from earlier in 2012 discussed the idea that CEOs are to blame for all of our problems in security. The idea that we “must have reliable actuarial data on incidents to stay relevant” is another of information security’s sacred cows that rears its head every so often, and this post covers three angles on the incidents database idea. First, I look at the impracticalities of gathering incidents data. Second, even if we have accurate data, exactly how useful is that data to us in formulating risk management decisions? And third, even if the data is accurate and useful, did we even need it in the first place?

Shostack and Stewart’s New School book was from the mid-2000s, and a chapter is devoted to the absolute necessity of a global, ubiquitous incidents database. Such a thing is proclaimed by many as carrying do-or-die importance, as if we need it to prove the existence of a threat, and moreover to prove our right to exist as security professionals. Do we really need a global database to prove to the C-levels that spending on security is necessary?

There are of course some real blockers with regard to the gathering of incidents data, not least the “what the heck just happened” factor, where post-incident, logging is turned on (it was off until the incident occurred) and $300k is invested in a SIEM solution – one that really doesn’t help the business to respond effectively to incidents; it just gives operations a nice vehicle with which to diagnose non-security-related problems. And of course the vendor/box-pusher consultants knew what they were doing when they configured the thing.

Anyway, it really isn’t terribly constructive to talk reality with regard to this subject. We need to talk about a theoretical world, somewhere over the rainbow – one where logging is enabled, internal detection controls are well configured, and internal IT ops and sec know how to respond to incidents and understand log messages. Okay, most businesses don’t even know they were hacked until a botnet command and control box is owned by some supposed good guys somewhere, but all talk of security is null and void if we acknowledge reality here. So let’s not talk reality.

So incidents data simply won’t be available in many cases – that base is now covered. But what are we trying to achieve here? The proposal is to use past incidents data to help us prove the existence of a threat when formulating decisions on risk. We receive a penetration test report that tells us that vulnerability X was compromised 4 years ago in Timbuctoo. As for the other vulnerabilities mentioned in the report – according to our accurate database there is no mention of them ever having been compromised with resulting financial damage. So we fix vulnerability X and ignore the rest – a wise strategy, especially given that our incidents database hosts accurate information with check-sums and integrity built in (well – I did say let’s not talk reality here).

But wait a minute – vulnerability X was exploited 4 years ago in company Y in industry sector Z. What if company Y’s network was wide open? Where is this leading? All networks are different. All businesses face different challenges. Even those in the same industry sector as each other are completely different. See how ludicrous this is getting? Two businesses can even have the exact same network architecture and sit in the same industry sector, and yet the exploitation of the same vulnerability can lead to $100K damages for one and $100 damages for the other. So are we really going to use these records of financial damages to formulate our risk acceptance, transferal, or mitigation decisions? One would hope not.

As if all this isn’t bad enough: even if we ignore the hard reality of the limitations covered here, how long would it take to gather enough information to call the database useful? 10 years? 20 years? One million incidents, or…? Products are generally past their shelf life in less than a decade. I did come across an IBM AIX 4.1 box (we’re talking mid-to-late 90s here) at a customer site not so long ago, but thankfully that was an isolated incident.

After all the deliberation, it emerges that security is too complex – it cannot be reduced down to a simple picture à la “vulnerability X was exploited resulting in damages Y in company Z, and therefore we in Botnetz R Us need to address vulnerability X”.

All this is not to say that there is no benefit in knowing what’s going on out there in the wild. We can pick up on ongoing bigger-picture attack trends to prepare for what might be down the road for us. But do we really need details of past incidents to convince businesses to part with cash for risk mitigation, or moreover to validate our existence?

What we’re really concerned with here is trust. The proponents of a big data repository of incidents would have it that we need such a thing because the powers that be don’t trust us: when we propose a mitigation of a particular risk, they don’t trust our advice. Unfortunately though, we might get one success story where we consult the oracle of all incidents, magically find an incident related to the problem we are trying to fix, and use this “evidence” to convince the bosses. What about the next problem though? We can’t play the same card trick next time if no such problem was ever encountered (and reported) by another company somewhere else in the world. By leaning on the history of all incidents we’re setting a dangerous precedent, and rather than enabling trust, we’re making the situation even worse.

We don’t need an incidents history record. What we need is for our customers to trust us. Our customers are C-levels, other business units, home users, in-house reps, and so on. How do we get them to trust us? What we need is a single accreditation path for security professionals: one that ties Analysts to past experience as an IT admin or developer, and one that ties Security Managers to past Analyst experience. With this, security managers can confidently deliver risk-related advisories to their superiors, safe in the knowledge that their message is backed up by Analysts with solid tech experience, in whose advice they are comfortable placing their trust.