Migrating South: The Devolution Of Security From Security

Devolution might seem a strong word to use. In this article I will be discussing the pros and cons of the migration of some of the more technical elements of information security to IT operations teams.

By the dictionary definition of the word, “devolution” implies a downgrade of security – but suffice it to say, my point does not even remotely imply that operations teams are subordinate to security. In fact, in many cases security has been marginalized such that a security manager (if such a function even exists) reports to a CIO, or some other managerial entity within IT operations. Whether this is right or wrong is subjective, and in any case not the subject here.

Of course there are other department names that have metamorphosed out of the primordial soup…“Security Operations” or SecOps, DevOps, SecDev, SecOpsDev, SecOpsOps, DevSecOps, SecSecOps and so on. The discussion here is really about security knowledge, and the intellectual capital that needs to exist in a large organisation. Where this intellectual capital resides doesn’t bother me at all – the name on the sign is irrelevant. Terms such as Security and Operations are the more traditional labels on the boxes, and no, this is not something “from the 90s”. These two names are probably the more common names in business usage these days, and so these are the references I will use.

Examples of functions that have already been, and continue to be, farmed out to Ops are Vulnerability Management, SIEM, firewalls, IDS/IPS, and Identity Management. In detail…which aspects of these functions are teflonned (non-stick) off? How about all of them? All aspects of the implementation project, including management, are handled by ops teams. And then in production, ops will handle all aspects of monitoring, problem resolution, incident handling…ad infinitum.

A further pre-qualification is about ideal and actual security skills that are commonly present. Make no mistake…in some cases a shift of tech functions to ops teams will actually result in improvements, but this is only because the self-constructed mandate of the security department is completely non-tech, and some tech at a moderate cost will usually be better than zero tech, checklists, and so on.

We need to talk about typical ops skills. Of course there will be the occasional operations team member who is well versed in security matters and also has a handle on the business aspects, but this is extra-curricular and rare. Ops team members are usually system administrators. If we take Unix security as an example, they will be familiar with at least filesystem permissions and umask settings, so there is a level of security knowledge. Cisco engineers will have a concept of SNMP community strings and ACLs. Oracle DBAs will know about profiles and roles.

But is the typical security portfolio of system administrators wide enough to form the foundations of an effective information security program? Not really. In fact it’s some way short. Security Analysts need a grasp not only of, for example, file system permissions; they need to know how attackers actually elevate privileges and compromise, for example, a critical database host. They need to know attack vectors and how to defend against them. This kind of knowledge isn’t a typical component of a system administrator’s training schedule. It’s one thing to know the effect of a world-write permission bit on a directory, but what is the actual security impact? With some directories this can be relatively harmless; with others, it can present considerable business risk.
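
To make the distinction concrete, here is a minimal sketch (in Python, purely illustrative, with hypothetical paths) of the sysadmin half of the job: finding world-writable directories. Judging which of the hits actually matter is the security analysis part.

```python
import os
import stat

# Hypothetical illustration (not a scanner): flag world-writable directories
# that lack the sticky bit. The listing itself is just a checklist item; the
# analyst's job is to judge the impact. A world-writable directory on root's
# PATH, or one holding cron-run scripts, is a privilege elevation risk, while
# a scratch directory may be harmless.
SUSPECT_ROOTS = ["/etc", "/usr/local", "/opt", "/var/spool/cron"]  # example paths only

def world_writable_dirs(root):
    for dirpath, _dirnames, _filenames in os.walk(root, onerror=lambda e: None):
        try:
            mode = os.stat(dirpath).st_mode
        except OSError:
            continue
        if mode & stat.S_IWOTH and not mode & stat.S_ISVTX:
            yield dirpath, oct(mode & 0o7777)

if __name__ == "__main__":
    for base in SUSPECT_ROOTS:
        for path, mode in world_writable_dirs(base):
            print(f"world-writable (no sticky bit): {path} mode={mode}")
```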

The stance from ops will be to patch and protect. While this is [sometimes] better than nothing, there are other ways to compromise servers, other than exploiting known vulnerabilities. There are zero days (i.e. undeclared vulnerabilities for which no patch has been released), and also means of installing back doors and trojans that do not involve exploiting local bugs.

So without the kind of knowledge I have been discussing, how would ops handle a case where a project team blocks the installation of a patch because it breaks some aspect of their business-critical application? In most cases they will just agree not to install the patch. In considering the business risk, several variables come into play: network architecture, the qualitative technical risk to the host, the value of the information assets…and also, is there a work-around? Is a work-around or compromise even worth the time and effort? Do the developers need to re-work their app at a cost of $15,000?

A lack of security input in network operations leads to cases where over-redundancy is deployed. Literally every switch and router will have a hot standby. So take the budget for a core network infrastructure and just double it – in most cases this is excessive expenditure.

With firewall rules, ops teams have a concept of blocking incoming connections, but it’s not unusual for egress to be overlooked, with all the “bad netizen”, malware / private data harvesting, and reverse telnet implications that follow. Do we really want our corporate domain name being blacklisted?
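
As a rough illustration of the egress point, a sketch like the following (hypothetical target host and ports, assuming an external service that listens on arbitrary ports) can reveal whether outbound traffic that a sensible policy would block actually leaves the network:

```python
import socket

# Hedged sketch of an egress check, run from an internal host: try outbound
# TCP connections on ports a reasonable egress policy would block (direct
# SMTP, telnet, IRC, a typical reverse-shell port). A completed connection
# suggests the firewall only filters inbound traffic. TEST_TARGET is a
# placeholder for an external host known to listen on many ports.
TEST_TARGET = "portquiz.net"
TEST_PORTS = [25, 23, 6667, 4444]

def egress_allowed(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in TEST_PORTS:
    state = "ALLOWED" if egress_allowed(TEST_TARGET, port) else "blocked/filtered"
    print(f"outbound tcp/{port}: {state}")
```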

Another common symptom of a devolved security model is the excessive use of automated scanners in vulnerability assessment, without any awareness of the shortcomings of this family of products. The result is to “just run a scanner against it” for critical production servers and miss the kind of LHF (Low Hanging Fruit) false negatives that bad guys and malware writers just love to see.

The results of devolution will be many and varied in nature. What I have described here is only a small sampling. Which department is responsible for security analysis is irrelevant, but the knowledge has to be there. I cover this topic more thoroughly in Chapter 5 of Security De-engineering, with more details on the ideal skill set in Chapter 11.

The Search For Infosec Minds

Since the early 2000s, and in some of my other posts, I have commented in different forms on the state of play, with a large degree of cynicism. Up to around 2009 or so this was greeted with cold reservation, smirks, grunts, and various other types of unvoiced displeasure. But since at least 2010, how things have changed.

If we fast forward from 2000 to 2005 or so, most businesses’ security function had been reduced to base parrot-fashion checklists, analysis and thinking were four-letter words, and some businesses went as far as outsourcing security functions.

Many businesses who turned their backs on hackers just after the turn of the millennium have since found a need to review their strategies on security hiring. However, 10 years is a long time. The personnel who were originally tasked with forming a security function in the late 90s have since risen like phoenixes from the primordial chasm and, assisted by thermals, swooped up to graze on higher plains. Fast forward again to 2012, and the distance between security and IT is in the order of light years in most cases. The idea that security is purely a compliance game hasn’t changed, but unlike the previous decade, it is in many cases seen as no longer sufficient to crawl sloth-like over the compliance finishing line every year.

Businesses were getting hacked all through the 2000s but they weren’t aware of it. Things have changed now. For starters the attacks do seem to be more frequent and now there is SIEM, and audit requirements to aggregate logs. In the past, even default log settings were annulled with the result that there wasn’t even local logging, let alone network aggregation! Mind you, even after having been duped into buying every well-marketed detection product, businesses are still being hacked without knowing it. Quite often the incident comes to light after a botnet command and control system has been owned by the good guys.

Generally there is more nefarious activity now, as a result of many factors, and information security programs are under more “real” focus now (compliance-only is not real focus; in fact it’s not real anything, apart from a real pain in the backside).

The problem is that with such a vast distance between IT and security for so long, there is utter confusion about how to get tech’d up. Some businesses are doing it by moving folk out of operations into security. This doesn’t work, and in my next post I will explain why it doesn’t work.

As an example of the sort of confusion that reigns, there was one case I came across earlier in 2012 where a company in the movies business was hacked and was having its trailers, and in some cases actual movies, put up on various torrent sites for download. Their response was to retrench their outsourced security function and attempt to hire in-house analysts (one or two!). But what did they go looking for? Because they had suffered from malware problems, they went looking for, and I quote, “Malware Reverse Engineers”. Malware Reverse Engineers? What did they mean by this? After some investigation, it turned out they really were looking for malware reverse engineers, there was no misnomer – malware reverse engineers as in those who help to develop new patterns for anti-virus engines! They had acquired a spanking new SIEM, but there was no focus on incident response capability, or prevention/protection at all.

As it turns out, “reverse engineer[ing]” is now a buzzword, whereas in the mid-2000s the buzzwords were “governance” and “identity management” (on the back of…“identity theft” – neat marketing scam), and so on. Now there are more tech-sounding buzzwords, which have different connotations depending on who you ask. And these tech-sounding buzzwords find their way into skills requirements sent out by HR, and therefore also onto CVs as a response. And the tech-sounding buzzwords are born from…yes, you got it…Black Hat conferences, and the multitude of other conferences, B-sides, C-sides, F-sides and so on, that are now as numerous as the stars in the sky.

The segue into Black Hat was quite deliberate. A fairly predictable development is the ongoing appearance at Black Hat of infosec managers who previously wouldn’t touch these events with a barge pole. They are popping up at these events looking to recruit speakers primarily, because presumably the speakers are among the sharper crayons in the box, even if nobody has any clue what they’re talking about.

Before I go on, I need to qualify that I am not going to cover ethics here, mainly because it’s not worth covering. I find the whole ethics brush to be somewhat judgmental and divisive. I prefer to let the law do the judgment.

Any attempt to recruit tech enthusiasts, or “hackers”, can’t be dismissed completely because it’s better than anything that could have been witnessed in 2005. But do businesses necessarily need to go looking for hackers? I think the answer is no. Hackers have a tendency to take security analysis under their patronage, but it has never been their show, and their show alone. Far from it.

In 2012 we can make a clear distinction between protection skills and breaking-in skills. This is because, as of 2012, 99.99…[recurring to infinity]% of business networks are poorly defended. So what are “breaking-in skills”? A “hacker” breaks into networks, compromises stuff, and posts it on pastebin.com. The hacker finds pride and confidence in such achievements. Next, she’s up on the stage at the next conference bleating about “reverse engineering”, “fuzzing”, or “anti forensics tool kits”…nobody is sure which language is used, but she’s been offered 10 jobs after only 5 minutes into her speech.

However, what is actually required to break into networks? Of the 20,000+ paths which were wide open into the network, the hacker chose one of the many paths of least resistance. In most cases, there is no great genius involved here. The term “script kiddy” used to refer to those who port scan, then hunt for publicly declared exploits for services they find. There is IT literacy required for sure (often the exploits won’t run out-of-the-box; they need to be compiled for different OSs or debugged), but no creativity or cunning or…whatever other mythical qualities are associated with hacking in 2012.

The thought process behind hiring a hacker is typically one of “she knows how to break into my network, therefore she can defend against others trying to break in”, but it’s quite possible that nothing could be further from the truth. In 2012, being a hacker, or possessing “breaking-in skills”, doesn’t actually mean a great deal. Protection is a whole different game. Businesses should be more interested in protection as of 2012, and for at least the next decade.

But what does it take to protect? Protection is a more disciplined, comprehensive IT subject. Collectively, the in-house security team needs to know all the nooks and crannies – all the routers, databases, applications, clouds, and operating systems, how they all interact, and how they’re all connected. They also collectively need to know the business importance of information assets and applications.

The key pillars of focus for new-hire Security Analysts should be Operating Systems and Applications. When we talk about operating systems and security, the image that comes to mind is of auditors going through a checklist in some tedious box-ticking exercise. But OS security is more than that, and it’s the front line in the protection battle. The checklists are important (I mean checklists as in standards and policies), but there are two sides to each item on the checklist: one is the detail of how to practically exploit the vulnerability and the potential tech impact; the other is the operational/business impact involved with the associated safeguard. In other words, OS security is far more than a check-list, box-ticking activity.

In 12 years I never met a “hacker” who could name more than 3 or 4 local privilege elevation vectors for any popular Operating System. They will know the details of the vulnerability they used to root a server last month, but perhaps not the other 100 or so that are covered off by the corporate security standard for that Operating System. So the protection skills don’t come by default just because someone has taken to the stage at a conference.

Skills such as “reverse engineering” and “fuzzing” are hard to attain and can be used to compromise systems that are well defended. But the reality is that very few systems are so well defended that such niche skills are ever needed. In 70+ tests in which I have either taken part or been a witness, even where the tests were quite unrestricted, “fuzzing” wasn’t required to compromise targets – not even close.

A theoretical security team for a 10,000+ node business could be made up of a half dozen or so Analysts, plus a Security Manager. Analysts can come from a background of 5 years in admin/ops or development. To “break into” security, they already have their experience in a core technology (Unix, Windows, Oracle, Cisco etc); then they can demonstrate competence in one or more other core technologies (to demonstrate flexibility), programming/scripting, and security testing with those platforms.

Once qualified as a Security Analyst, the Analyst has a specialization in at least two core technologies. At least 2 analysts can cover application security, and then there are other areas such as incident handling and forensics. As for Security Managers, once in possession of 5 years’ “time served” as an Analyst, they can sit a manager’s exam, which when passed qualifies them for a role as a Security Manager. The Security Manager is the interface, or agent, between the technical artist Analysts and the business.

Overall then, it is far from the case that Hackers are not well-suited for vocational in-house security roles (indeed I always like to see “spare time” programming experience on a resume because it demonstrates enthusiasm and creativity). But it is also not the case that Analyst positions are under the sole patronage of Black Hat speakers. Hackers still need to demonstrate their capabilities in protection, and in doing “grown-up” or “boring” things, before being hired. There is no great compelling need for businesses to hire a hacker, although as of today, it could be that a hooligan who throws security stones through security windows is as close as they can get to effective network protection.

Somewhere Over The Rainbow – A Story About A Global Ubiquitous Record of All Things Incident

One of my posts from earlier in 2012 discussed the idea that CEOs are to blame for all of our problems in security. The idea that we “must have reliable actuarial data on incidents to stay relevant” is another of the information security sacred cows that rears its head every so often, and this post covers three angles on the incidents database idea. First I look at the impracticalities of gathering incidents data. Second, even if we have accurate data, exactly how useful is the data to us in the formulation of risk management decisions? And third, even if the data is accurate and useful, did we even need it in the first place?

Shostack and Stewart’s New School book was from the mid-2000s and a chapter is devoted to the absolute necessity of a global, ubiquitous incidents database. Such a thing is proclaimed by many as carrying do or die importance, as if we need it to prove the existence of a threat, and moreover to prove our right to exist as security professionals. Do we really need a global database to prove to the C-levels that spending on security is necessary?

There are of course some real blockers with regard to the gathering of incidents data, not least the “what the heck just happened” factor, where post-incident, logging is turned on (it was off until the incident occurred) and $300k is invested in a SIEM solution – one that really doesn’t help the business to respond effectively to incidents; it just gives operations a nice vehicle they can use to diagnose non-security-related problems, and of course the vendor/box-pusher consultants knew what they were doing when they configured the thing.

Anyway, it really isn’t terribly constructive to talk reality with regard to this subject. We need to talk about a theoretical world, somewhere over the rainbow – one where logging is enabled, internal detection controls are well configured, and internal IT ops and sec know how to respond to incidents and understand log messages. Okay, most businesses don’t even know they were hacked until a botnet command and control box is owned by some supposed good guys somewhere, but all talk of security is null and void if we acknowledge reality here. So let’s not talk reality.

So incidents data simply won’t be available in many cases – that base is now covered. But what are we trying to achieve here? The proposal is to use past incidents data to help us prove the existence of a threat when formulating decisions on risk. We receive a penetration test report that tells us that vulnerability X was exploited 4 years ago in Timbuctoo. The other vulnerabilities mentioned in the report – according to our accurate database there is no mention of them ever having been exploited with resulting financial damage. So we fix vulnerability X and ignore the rest – a wise strategy, especially given that our incidents database hosts accurate information with check-sums and integrity built in (well – I did say let’s not talk reality here).

But wait a minute – vulnerability X was exploited 4 years ago in a company Y in an industry sector Z. What if company Y’s network was wide open? Where is this leading? All networks are different. All businesses face different challenges. Even those in the same industry sector as each other are completely different. How ludicrous is this getting? Business A can even have the exact same network architecture as another business B in the same industry sector, and yet the exploitation of the same vulnerability X can lead to $100K damages for business A, but $100 damages for business B. So are we really going to use these records of financial damages to formulate our risk acceptance, transferal, or mitigation decision? One would hope not.

As if this all isn’t bad enough: even if we ignore the hard reality of the limitations covered here, how long would it take before we have gathered enough information to call the database useful? 10 years? 20 years? One million incidents or…? Products are past their shelf life in less than a decade generally. I did come across an IBM AIX 4.1 (we’re talking mid-to-late 90s here) box at a customer site not so long ago, but thankfully that was an isolated incident.

After all the deliberation, it emerges that security is too complex – it cannot be reduced to a simple picture à la “vulnerability X was exploited resulting in damages Y in company Z, and therefore we in Botnetz R Us need to address vulnerability X”.

All this is not to say that there is no benefit in knowing what’s going on out there in the wild. We can pick up on ongoing bigger-picture attack trends to enable us to prepare for what might be down the road for us. But do we really need details of past incidents to convince businesses to part with cash in risk mitigation, or moreover to validate our existence?

What we’re really concerned with here is trust. The proponents of a big repository of incident data would have it that we need such a thing because the powers that be don’t trust us. When we propose a mitigation of a particular risk, they don’t trust our advice. Unfortunately though, we might get one success story where we consult the oracle of all incidents, magically find an incident related to the problem we are trying to fix, and use this “evidence” to convince the bosses. What about the next problem though? We can’t use the same card trick next time to address the next risk issue if no such problem was ever encountered (and reported) before by another company somewhere else in the world. By leaning on the history of all incidents we’re setting a dangerous precedent, and rather than enabling trust, we’re making the situation even worse.

We don’t need an incidents history record. What we need is for our customers to trust us. Our customers are C-levels, other business units, home users, in-house reps, and so on. How do we get them to trust us? What we need is a single accreditation path for security professionals: one that ties Analysts to past experience as an IT admin or developer, and one that ties Security Managers to past Analyst experience. With this, security managers can confidently deliver risk-related advisories to their superiors, safe in the knowledge that their message is backed up by Analysts with solid tech experience, and in whose advice they are comfortable placing their trust.

How To Break Into Security – Planet Earth Edition

The venerable Brian Krebs has recently been running some stories from various demigods of the infosec world, aimed at those wishing to enter the information security field – aspiring graduate ninjas, and others seeking the mythical pot of gold at the end of the rainbow.

First up there was Thomas Ptacek’s edition, then we had some pearls of wisdom from Bruce Schneier, Jeremiah Grossman, Richard Bejtlich, and then Charlie Miller.

Thomas Ptacek claimed about the security field: “It’s one of the few technology jobs where the most fun roles are well compensated”, and “if you watched “Sneakers” and ideated a life spent breaking or defending software, great news: infosec can be more fun in real life, and it’s fairly lucrative.” Well…I am not refuting any of this, but it is certainly quite unusual for jobs in information security to be fun.

Thomas talks of the benefits of having extensive programming experience – and this is something I advocate myself quite strenuously (more on that later). Thomas’s viewpoint was centered around appsec, which is fine. I think for myself, in terms of defending networks, we need to look at two main areas: appsec and operating systems / databases. There is some good advice in Thomas’s article about breaking into application security, although I wouldn’t say that this area is everything. There’s a little too much religious fervor about appsec in the article for my liking, just as one often sees a lack of balance in other areas, such as CISSP-worship, and malware “reverse engineering” – basically – “my area is the alpha and omega – all that was, is, and ever will be”.

There are other areas that matter in security besides application security. I wouldn’t say it’s all about appsec, but I would say that the two main areas are appsec and operating system and database configuration. “But operating systems and databases are also applications”, I hear you say. Yes, but when we’re talking appsec in infosec, we’re usually talking about web applications. There are few times when we suffer web attacks where one single exploit leads to something really bad happening, apart from perhaps a SQLi that in itself reveals sensitive database-hosted information.

With regard to web applications, I think Thomas is spot-on with his comments about learning about web application security assessment, and how to get clued up in this area. Also the comments about Nessus and getting into penetration testing – sad but true.

With regard to Bruce Schneier’s “breaking into” edition – nothing he says is factually incorrect (most of what we talk about is subjective, neither black nor white, but grey), but the comments are not at all close to the coal-face realities of most businesses’ in-house or service providers’ practices. Wannabe security pros reading this will be sure to get grandiose visions of their future lives as security pros – but in 90%-plus of cases the vocational activities of security professionals do not match the picture painted by Bruce Schneier. I’ll explain more on this later.

Richard Bejtlich’s response was centered around getting into penetration testing with Metasploit, and Jeremiah Grossman’s was the most all-encompassing and, in my opinion, the response that had the most value for security pro wannabes – although Charlie Miller’s wasn’t far behind. In particular Mr Grossman plays down the effectiveness of accreditation programs in favor of practical experience – wise words indeed. Charlie Miller had similar opinions.

In Charlie Miller’s response there was a lot of talk of really specialized niche areas like reverse engineering and so on, but he does temper this with “I really do a lot of reverse engineering and binary analysis, which is unusual” and “for those starting out, it probably makes more sense to learn some languages more useful for web applications, like PHP or Java or something. The majority of jobs I come across in application security are web applications, so unless you’re a dinosaur like me, you probably want to become a web app expert. Web application security is a lot easier to get started in as well.”

Charlie Miller’s response sort of segues me into the wider scope here, and that is the realities “out there”. The articles are based on responses by folk who’ve rightly become esteemed professionals in their field, and there is some really valuable insight there. The thing is – there is a lot of talk of the security field being a place for artists and magicians, and of being technically demanding, but there are very few places where technical acrobatics skills in security are seen as having any value to businesses, or even to security line managers for that matter – and therefore such intellectual capital just does not appear on the balance books of these businesses.

We’ve been through a vicious 70 to 80% of a sine wave of pain in security since the late 90s. The security world painted by the fellowship of five assembled by Brian Krebs (speaking of whom, it would be nice to hear his version of the “breaking into” story) seems much more like the world of the late 90s than today. Security was heavily de-engineered through the 2000s. Things started to change around 2010, but we’re still very much in non-tech territory, and many of the security line managers who will interview prospective Security Analysts will not have an IT background, and their security practice will be hands-off, non-tech, check-list based. Anything “tech” to do with information risk management will be handled by an ops team, but there won’t be any “reverse engineering” or “fuzzing” over there…far from it. More like – firewall configuration, running bad vulnerability management suites, monitoring IDS/SIEM logs.

Picking up on some of Bruce Schneier’s comments: “You can be an expert in viruses, or policies, or cryptography.” Policies? OK, this part is true – if you want to be an expert in policies, whatever that is, you can certainly find this in 90%+ of businesses – but is this an economically viable and sustainable position, or even for that matter anything that any homo sapiens would ever really want to do? Probably accountancy would be a better bet.

“Viruses?” Hmm. There are increasing numbers of openings for “malware reverse engineers”, where really what they’re looking for is incident response – they want to know what happened after they discovered that some of their laptops were connecting out to various addresses in places they hadn’t heard of, prior to the click of doom. If you get interviewed for one of these positions, be prepared to answer questions about SIEM technologies and incident response. These openings are not usually associated with reverse engineering to the level of detail of those pattern-makers in the anti-virus software market – and if they are, they needn’t be, and the line manager will come to realize this after a while.

And “cryptography”? We have Bruce’s comments and then we have a title heading from a book by Shostack and Stewart: “Amateurs Study Cryptography; Professionals Study Economics”. So who do you believe here? To be fair, the book was published at the height of the de-engineering phase of security and it sort of fitted with the agenda of the times. I would go with Bruce Schneier again here, but with some qualification (the final paragraph also talks about math): most security departments won’t ever go anywhere near anything mathematical or even crypto-related, and when they do, it will be with a checklist approach that goes something like “is a strong key used?”, “yes”, “ok, good, tick in a box”, or “is DES used?”, “yes”, “ok, that’s bad, I think, anyway use triple DES please” – with no further assistance for the dev team.

With regard to cryptography, right or wrong, it’s really only on tiny islands where the math is seen as relevant – places where they code the apps and people are assigned to review the security of the apps. And these territories are keenly disputed. Most of the concerns in the rest of the business world will never get more techy than discussions about key management – which is the more common challenge with crypto anyway (mostly we’re using public crypto algorithms in security, so the challenge is in protection of the key).
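
To illustrate why the algorithm is the easy part, here is a minimal sketch using the third-party Python “cryptography” package (the key source, MY_APP_KEY, is a hypothetical placeholder); everything that matters from a risk standpoint happens outside these few lines, in how that key is generated, stored, rotated, and accessed:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Minimal sketch, not a prescription: authenticated encryption with AES-GCM.
# Picking the algorithm takes one line; the real questions are where the key
# lives (env var? KMS? HSM?), who can read it, and how it gets rotated.
key_hex = os.environ.get("MY_APP_KEY")               # hypothetical key source
key = bytes.fromhex(key_hex) if key_hex else AESGCM.generate_key(bit_length=256)

aead = AESGCM(key)
nonce = os.urandom(12)                               # 96-bit nonce, unique per message
ciphertext = aead.encrypt(nonce, b"account 12345678", b"context-aad")
assert aead.decrypt(nonce, ciphertext, b"context-aad") == b"account 12345678"
```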

There were comments from all respondents about testing applications and breaking into networks. Again, the places where skills like reverse engineering are actually relevant are few. Bruce Schneier painted a grand picture of thinking like a hacker, not just a mere engineer, in order to be able to create systems that are difficult to compromise by the most advanced hackers. But most humans who design systems are not even thinking about security, or they’re on such a tight deadline (with related KPIs and bonuses) that they side-step security. So as a pen test guru wannabe, you may possess extremely high levels of fuzzing, exploit coding, and reversing skills, but you will never get to use them; in fact you will intimidate most interviewers, and you’ll be over-qualified. There will be easier ways to break into systems in most cases. In fact in general, as I commented in an earlier post, security is insufficiently mature in most organizations to warrant any manual penetration testing whatsoever.

So really, what I have had to say here may sound harsh or “negative”, but I would hate for anyone to get into a field that they thought was challenging, only to discover that it’s anything but. I believe things are changing, but it’s at rather a slow pace, and the field of security has been broken for so long, that there are very few around who know how to fix it. Security is getting more challenging, that’s for sure, but for the security pro who goes looking for a job in this field because of the tech challenge aspect: just be very careful about what you’re getting into. Many jobs sound great from the job descriptions posted to recruitment agents, but this is only a show. The reality inside the team is that you may be sent to Siberia if you so much as use a tech-sounding word like “computer” or “IP address” – while this sounds unreal I can assure the reader that such a scenario is most certainly real, although it is of course more often the case that the job would just not be offered to a “techie”.

It’s impossible to cover the jobs aspect of information security in just this article. I had a more comprehensive stab at it in Chapter Six of Security De-engineering. I would say to the prospective security pro, though, that the advice given by the five mentioned in this article is not bad advice at all – it’s just that you may push yourself to higher levels and not see significant benefit from it in your career any time soon. There will always be some benefit, just not as much as you might expect. Certainly, you will have more confidence, but you will also probably over-qualify yourself for your current position.

As an aspiring security pro with a tech inclination, getting into security might not be as hard as you thought. Thomas Ptacek mentioned: “A good way to move into penetration testing: grab some industry standard tools and use an Amazon EC2 account to set up a “shooting range” to attack. Some of the best-known tools are available for free: the Nessus scanner, for instance, while not an application security tool, is free and can land you a network penetration testing role that you can use as a springboard to breaking applications.” Believe me, this is not a difficult target, but because of the way the security industry is, you could very well land a penetration testing job with the preparation described by Mr Ptacek.

All I have had to say here is aimed at managing expectations. You may well find that you have to market yourself down a bit just to get a foothold in the industry. Once there, by pushing yourself to learn more and acquire more advanced skills, it could be that you eventually osmose towards the ideal job of your dreams. However, these positions are rare in reality. Many of the folk I worked with in the earlier days of my career had the ninja skills that have been discussed in the five articles mentioned here. Once we got to the early 2000s they realized that security was no longer a place for them. Has the demand for these advanced skills returned? It has, to some extent, but still the demand is minuscule compared to the usual skills required by the vast majority of businesses.

Blame The CEO?

I would like to start by issuing a warning about the content in this article. I will be taking cynicism to the next level, so the baby-eyed and “positive” among us should avert their gaze after this first paragraph. For those in tune with their higher consciousness, I will summarise: can we blame the C-levels for our problems? Answer: no. Ok, pass on through now. More positive vibes may be found in the department of delusion down the hall.

The word “salt” was for the first time ever inserted into the hall of fame of information security buzzwords after the LinkedIn hack infamy, and then Yahoo came along and spoiled the ridicule-fest by showing the world that they could do even better than LinkedIn by not actually using any password hashing at all.

There is a tendency among the masses to latch onto little islands of intellectual property in the security world. Just as we see with “cloud”, the “salt” element of the LinkedIn affair was given plenty of focus, because as a result of the incident, many security professionals had learned something new – a rare occurrence in the usual agenda of tick-in-box-marking that most analysts are mandated to follow.

With LinkedIn, little coverage was given to the tedious old nebulous “compromise” element, or “how were the passwords compromised?”. No – the “salt” part was much more exciting to hose into blogs and Twitter – but with hundreds of analysts talking about the value of “salting”, the value of this pearl of wisdom was falling exponentially with time – there was a limited amount of time in which to become famous. If you were tardy in showing the world that you understood what “salting” means, your tweet wouldn’t be favourited or retweeted, and the analyst would have to step back off the stage and go back to their usual humdrum existence of entering ticks in boxes, telling devs to use two-factor authentication as a matter of “best practices”, “run a vulnerability scanner against it”, and other such tick-related matters.

Infosec was down and flailing around helplessly, then came the LinkedIn case. The inevitable fall-out from the “salting” incident (I don’t call it the LinkedIn incident any more) was a kick of sand in the face of the already writhing information security industry. Although I don’t know of any specific cases, based on twelve happy years of marriage with infosec, I’m sure they’re as abundant as the stars and occurring as I write this. I am sure that nine times out of ten, whenever devs need to store a password, they are told by CISSP-toting self-righteous analysts (and blindly backed up by their managers) that it is “best practice” and “mandatory” to use salting with passwords – regardless of all the other factors that go into making up the full picture of risk, the operational costs, and other needless overheads. There will be times when salting is a good idea. Other times not. There cannot be a zero-value proposition here – but blanket, parrot-fashion advisories are exactly that.
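
For the record, and for anyone who missed the buzzword frenzy, this is roughly what the advice amounts to in practice: a hedged sketch using only Python’s standard library, with the usual caveat that whether it’s the right control still depends on the wider risk picture described above.

```python
import hashlib
import hmac
import os

# Minimal sketch of salted password hashing with a slow KDF (PBKDF2 from the
# standard library, purely as an illustration). The per-password random salt
# defeats precomputed rainbow tables; the iteration count slows brute force.
def hash_password(password: str, iterations: int = 200_000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, rounds, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, rounds, stored)
assert not verify_password("Tr0ub4dor&3", salt, rounds, stored)
```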

The subject matter of the previous four paragraphs serves as a recent illustration of our plight in security. My book covers a much larger piece of the circus-o-sphere and it’s certainly too much to even try to summarise here, but we are epic-failing on a daily basis. One of the subjects I cover in Security De-engineering is the role of C-level executives in security, and I ask the question “can we blame the C-levels” for the broken state of infosec?

Let’s take a trip down memory lane. The heady days of the late 90s were owned by technical wizards, sometimes known as Hackers. They had green hair and piercings. If a CEO ran some variant of a Windows OS on her laptop, she was greeted with a stream of expletives. Ok, “best practices” was nowhere to be seen in the response, and it is a much more offensive swear-phrase than any swear word I can think of, but the point is that the Hacker’s riposte could have been better.

Hackers have little or no business acumen. They have the tech talent that the complexities of information security demand, but back when they worked in infosec in the late 90s, they were poorly managed. Artists need an agent to represent them, and there were no agents.

Hackers could theoretically be locked in a room with a cat-flap for food and drink, no email, and no phone. The only person they should be allowed to communicate with is their immediate security line manager. They could be used as a vault of intellectual capital, or a swiss army knife in the organisation. Problem was – the right kind of management was always lacking. Organisations need an interface between themselves and the Hackers. No such interface ever existed unfortunately.

The upper levels of management gave up working with Hackers for various reasons, not just for scaring the living daylights out of their normal earthling colleagues. Then came the early noughties. Hackers were replaced by respectable analysts with suits and ties, who sounded nice, used the words “governance” and “non-repudiation” a lot, and didn’t swear at their managers regardless of ineptitude levels. The problems with the latter, the CASE (Checklist and Standards Evangelist), were illustrated by the “salting” debacle and LinkedIn.

There is a link between information and information security (did you notice the play on words there – information was used in…“information”…and also in…“information security” – thereby implying that there might just be a connection). The CASE successors to the realm actually managed to convince themselves (but few others in the business world) that security has nothing to do with information technology. It is apparently all about “management” and “processes”. So – every analyst is now a “manager”?! Then who in the organisation is going to actually talk to ops and devs and solve the risk-versus-cost-of-safeguard puzzles? There are no foot soldiers, only a security department composed entirely of managers.

Another side of our woes is the security products space. Products have been pushed by fierce marketing engines and given ten-out-of-ten ratings by objective information security publications. The products supposedly can automate areas of information risk management and tell us things we didn’t already know about our networks. The problem is that when you automate processes, you’re looking for accurate results. Right? Well, in certain areas such as vulnerability assessment, we don’t even get close to accurate results – and vulnerability assessment is one area where accuracy is sorely needed – especially if we are using automation to assess vulnerability in critical situations.

Some product classes do actually make sense to deploy in some business cases, but the number of cases where something like SIEM (for example) actually makes sense as an investment is a small fraction of the whole.

Security line managers feel the pressure of compliance as the main part of their function. In-house advice is pretty much of the out-house variety in most cases, and service providers aren’t always so objective when it comes to technology acquisition. Products are purchased as a show of diligence for clueless auditors and a short cut to a tick-in-a-box.

So the current security landscape is one of a lack of appropriate skills, especially at security line-management level, which in turn leads to market support for whatever bone-headed product idea can be dreamed up next. The problems come in two boxes then – skills and products.

Is it the case that security analysts and line managers are all of the belief that everything is fine in their corner? The slew of incidents, outgoing connections to strange addresses in eastern Europe, and the loss of ownership of workstation subnets – is none of it through any fault of information security professionals? I have heard some use the excuse “we can never keep out bad guys all the time” – which actually is true, but there is little real confidence in the delivery of this message. Even among the most confidence-projecting of us, there is an inward sense of disharmony with things. We all know, just from intuition, that security is about IT (not just business) and that the value we offer to businesses is extremely limited in most cases.

CEOs and other silver-heads read non-IT publications, and oftentimes incidents will be reported, even in publications such as the Financial Times. Many of them are genuinely concerned about their information assets, and they will ask for updates from someone like a CISO. It is unlikely, as some suggest, that they don’t care about information security, and it is also unlikely, as is often claimed, that security budgets are rejected without any consideration.

CEOs will make decisions on security spending based on available information. Have they ever been in a position where they can trust us with our line reporting? Back in the 90s they were sworn at with business-averse rhetoric. Later they were bombarded with IT-averse rhetoric, green pie charts from expensive vulnerability management suites, delivered with a perceptible lack of confidence in analyst skills and available tools.

So can we blame CEOs? Of course not, and our priority now should be the re-engineering of skills, with a better system of “graduation” through the “ranks” in security, and an associated single body of accreditation (Chapter 11 of Security De-engineering covers this in more detail). With better skills, the products market would also follow suit and change radically. All of this would enable CISOs to report on security postures with confidence, which in turn enables trust at the next level up the ladder.

The idea that CEOs are responsible for all our problems is one of the sacred cows of the security industry (along with some others that I will be covering). Ladies and gentlemen: security analysts, managers, self-proclaimed “Evangelists”, “Subject Matter Experts”, and other ego-packing gurus of our time are responsible for the problems.

The Perils Of Automation In Vulnerability Assessment

Those who have read my book will be familiar with this topic, but even if literally everyone had read the book already, I would still be covering this matter, because the magnitude of the problem demands coverage, and more coverage. Even when we reach the point of “we the 99% do understand that we really shouldn’t be doing this stuff any more”, the severity of the issue demands that, should there still be a lingering one per cent, yet further coverage is warranted.

The specific area of information security in which automation fails completely (yet we still persist in engaging with such technology) is vulnerability scanning, in particular unauthenticated vulnerability scanning, in relation to black box scanning of web applications and networks. “Run a scanner by it” still appears in so many articles and sound bites in security – it’s still very much part of the furniture. Very expensive software suites are built on the use of automated unauthenticated scanning – in some cases taking an open source scanning engine, wrapping a nice GUI with pie charts around it, and slapping a 25K USD price tag on it.

As of 2012 there are still numerous supporters of vulnerability scanning. The majority still seem to really believe the premise that it is possible (or worse…”best practices”), by use of unauthenticated vulnerability scanning, to automatically deduce a picture of vulnerability on a target – a picture that does not come with a bucket load of condiments in the way of significant false negatives.

False positives are a drain on resources – and yes, there’s a bucket load of those too, but false negatives, in critical situations, are not what the doctor ordered.

Even some of the more senior folk around (note: I did not use the word “Evangelist”) support the use of these tools. Whereas none of them would ever advocate replacing manual penetration testing with an auto-scan, there does seem to be a great deal of “positivity” around the scanning scene. I think this is all just the zen talking to be honest, but really when we engage with zen, we often disengage from reality and objectivity. It’s ok to say bad stuff occasionally; who knows, it might even be in line with the direction given to one’s life by one’s higher consciousness.

Way back in the day, when we started off on our path of self-destruction, I gave a presentation on auto-scanning and false expectations, and I duly suffered the ignominy of being accused of carrying Luddite tendencies. But…the thing is: we had already outsourced our penetration testing to some other firm somewhere – so what was it that I was afraid of losing? Yes, I was a manual tester person, but it was more than 12 months since we had outsourced all that jazz – and I wasn’t about to start fighting to get it back. Furthermore, there were no actual logical objections put forward. The feedback was little more than primordial groans and remote virtual eye rolling – especially when I displayed a chart that showed unauthenticated scanning carrying similar value to port scanning. Yes – it is almost that bad.

It could be because of my exposure to automated scanners that I was able to see the picture as clearly as I did. Actually in the first few runs of a scanning tool (it was the now retired Cybercop Scanner – it actually displayed a 3D rotating map of a network – well, one subnet anyway) I wasn’t aware myself of the lack of usefulness of these tools. I also used other tools to check results, but most of the time they all returned similar results.

Over the course of two years I conducted more than one hundred scans of client perimeters and internal subnets, all with similar results. During this time I was sifting through the endless detritus of false positives with the realization that in some cases I was spending literally hours dissecting findings. In many cases it was first necessary to figure out what the tool was actually doing in deducing its findings, and for this I used a test Linux box and Ethereal (now Wireshark).

I’m not sure that “testing”, used as a verb, is appropriate, because it was clear that the tool wasn’t actually doing any testing. In most cases, especially with listening services such as Apache and other webservers, the tool just grabs a banner, finds a version string, and then does a correlation look-up in its database of publicly declared vulnerabilities. What is produced is a list of publicly declared vulnerabilities for the detected version. No actual “probing” is conducted, or testing as such.
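
For illustration, the behaviour described above boils down to something like the following toy sketch (not modelled on any particular product; the address and the “vulnerability database” are made up): grab a banner, match a version string, print the look-up results, and never actually probe anything.

```python
import re
import socket

# Toy illustration of banner-grab-and-look-up "testing". The dict below is a
# stand-in for a real vulnerability database; the principle is the same:
# no probing, just string matching against a self-reported version.
FAKE_VULN_DB = {
    "Apache/2.2.14": ["EXAMPLE-CVE-0001", "EXAMPLE-CVE-0002"],
}

def grab_http_server_banner(host, port=80, timeout=5):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        response = s.recv(4096).decode(errors="replace")
    match = re.search(r"^Server:\s*(.+?)\s*$", response, re.MULTILINE)
    return match.group(1) if match else None

banner = grab_http_server_banner("192.0.2.10")   # placeholder address
print("reported banner:", banner)
print("'findings':", FAKE_VULN_DB.get(banner, []))
```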

The few tests that produce reasonably reliable returns are those such as SNMP community string tests (or as reliable as UDP allows) or another Blast From The Past – the finger service “intelligence” vulnerability (no comment). The tools now have four-figure numbers of testing patterns, less than 10% of which constitute acceptably accurate tests. These tools should be able to conduct some FTP configuration tests, because it can all be done with politically correct “I talk to you, you talk to me, I ask some questions, you give me answers” type of testing. But no. Something like a test for anonymous FTP being enabled works for a few FTP servers, but not for some of the other more popular FTP packages. They all return different responses to the same probe you see…
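
By contrast, a genuinely interactive check is trivial to write: simply attempt the anonymous login and see what the server says, rather than pattern-matching its response strings. A minimal sketch (the host below is a placeholder):

```python
from ftplib import FTP, error_perm

# Sketch of an interactive check rather than a response-string match: just
# attempt the anonymous login. ftplib's login() defaults to user 'anonymous'.
def anonymous_ftp_enabled(host, timeout=10):
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login()
            return True
    except error_perm:
        return False          # server refused the anonymous login
    except OSError:
        return None           # unreachable, or not an FTP service

print(anonymous_ftp_enabled("192.0.2.21"))   # placeholder address
```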

I mentioned Cybercop Scanner before, but it’s important not to get hung up on product names. The key is the nature of the scanning itself and its practical limitations. Many of our beloved security software packages are not coded by devs who have any inkling whatsoever of anything to do with security, but really, we can have a tool designed and produced with all the miracles that human ingenuity affords, and at some point we always hit a very low and very hard ceiling in terms of what we can achieve with unauthenticated vulnerability assessment.

With automated vulnerability assessment we’re not doing anything that can destabilize a service (there are some DoS tests and “potentially disruptive tests” but these are fairly useless). We do not do something like running an exploit and making shell connection attempts, or anything of the sort. So what we can really achieve will always be extremely limited. Anyway, why would we want to do any of this when we have a perfectly fine root account to use? Or is that not something we really do in security (get on boxes and poke around as uid=0)? Is that ops ninja territory specifically (See my earlier article on OS Security, and as was said recently by a famous commentator in our field: “Platforms bitches!”)?

The possibility exists to check everything we ever needed to check with authenticated scanning, but here, as of 2012, we are still some way short – and that is largely because of a lack of client demand (crikey)! Some spend a cajillion on a software package that does authenticated testing of most popular OSs, plus unauthenticated false positive generation, and _only_ use the sophisticated, resource-intensive false positive generation engine – “that fixes APTs”.
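
The sort of thing only an authenticated (or local) check can do reliably is read the actual configuration rather than infer it from a banner. A minimal sketch, assuming a Unix host, the standard sshd_config path, and a couple of illustrative policy items:

```python
# Sketch of a local/authenticated check: parse the real sshd configuration
# instead of guessing from a version banner. Path and policy items are
# illustrative only; run on the target host or via an authenticated session.
def check_sshd_config(path="/etc/ssh/sshd_config"):
    settings = {}
    with open(path) as f:
        for raw in f:
            line = raw.split("#", 1)[0].strip()
            if not line:
                continue
            parts = line.split(None, 1)
            key = parts[0].lower()
            value = parts[1].strip().lower() if len(parts) > 1 else ""
            settings[key] = value

    findings = []
    if settings.get("permitrootlogin") == "yes":
        findings.append("PermitRootLogin is set to 'yes'")
    if settings.get("passwordauthentication", "yes") == "yes":
        findings.append("PasswordAuthentication is enabled (default or explicit)")
    return findings

if __name__ == "__main__":
    for finding in check_sshd_config():
        print("finding:", finding)
```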

The masses seem to be more aware of the shortcomings with automated web application vulnerability scanners, but anyway, yes, the picture here is similarly harsh on the eye. Spend a few thousand dollars on these tools? I can’t see why anyone would do that. Perhaps because the tool was given 5 star ratings by unbiased infosec publications? Meanwhile many firms continue to bet their crown jewels on the use of automated vulnerability assessment.

The automobile industry gradually phased in automation over a few decades, but even today there are still plenty of actual homo sapiens working in car factories. We should only ever be automating processes when we can get results that are accurate within the bounds of acceptable risk. Is it acceptable to use unauthenticated automated scanning as the sole means of vulnerability assessment for the top 20% of our most critical devices? It is true that we can never detect every problem, and what is safe today may not be safe tomorrow. But we don’t want to miss the most glaring critical vulnerabilities either – yet this is exactly the current practice of the majority of businesses.

ZDNet’s Interview with Mikko Hypponen – “The current state of the cybercrime ecosystem” – Highlights

Last week Dancho Danchev interviewed Mikko Hypponen (CSO @ F-Secure) on the subject of CaaS (Cybercrime as a Service), the recent botnet takedowns, and OPsec within cybercrime “organisations”. The questions from the interviewer occupied 3 times as much real estate as the answers (!), so here is a distillation of some of the more salient points arising from the interview, covered fully in this ZDNet article. Also, some of the questions themselves provided a lot of information (:)).

OPsec (operational security), or the lack of it, is not really how criminals and botnet masters are traced – it’s chiefly because they like to brag about their exploits on forums and in chat. This makes them easier to trace than might be expected.

The traditional cybercrime marketplaces have been illuminated, and the DarkMarket, as it’s been called, is not so dark any more – indeed some have even claimed that it no longer exists. Mikko Hypponen talks about Tor and Freenet and how services are moving to the “deep web” – and this worries law enforcement, but few details were forthcoming.

These days, everything from spam and phishing to launching malware attacks and coding custom malware is available as a professionally packaged service. Mikko replied that there was little the good guys could do to prevent this. “These are not technological problems; they are mostly social problems. And social problems are always hard to fix”.

“Some criminals are selling banking trojans and then other hackers are selling tailor-made configuration files for those trojans, targeting any particular bank. Going prices for such config customization seem to be around $500 at the moment.”

“Partnerka” affiliate networks with rogue AVs and ransom trojans have been highly successful for the bad guys, and this kind of affiliate model also means that the masters behind the schemes don’t need to get their hands dirty anymore.

Mac OS X and security: Historically the Flashback.K thing is very important – a turning point. Only 2 to 5% of all macs were infected, but this is huge nonetheless. It means that whereas in the past, Mac owners didn’t need anti-virus – now they do need it. However, there is still only one gang behind Mac malware – this is likely to change.

Despite the multiple claims from many media sources, the cybercrime marketplace does not generate more revenue than sales of hard drugs, but at the same time we do not possess the means to quantify the financial numbers. It is known that individual groups have made tens of millions of dollars. But not hundreds.

These days malware and trojans are not as much about exploiting Patch Tuesday issues as they are about using browser extensions and plugins. Drive-by-downloads via exploits targeting browser add-ons and plugins are clearly the most common way of getting infected.

Mozilla’s plugin check is quite effective, but in practice the Chrome model of sandboxing and replacing third-party add-ons with their own replacements seems to work really well. Chrome has issues with privacy, but in terms of security it’s better than the others. Chrome users get exploited less than the others.

Opt-in botnets have been a growing problem over the past two years – often this is about patriotic hacktivism, where users deliberately infect themselves with a DDoS agent. These are likely to be around for a very long time, and Akamai recently reported DDoS attacks launched from a botnet of mobile phones. We’re likely to see DDoS botnets move to totally new platforms in the future – think cars and microwave ovens launching attacks. Tools such as LOIC and HOIC have brought the “opt-in botnet” model to the masses, and it works. Unfortunately.

Android has made malware for Linux a reality, as identified in an F-Secure report. Quoting Mr Hypponen: “Old Symbian malware is going away. Nobody is targeting Windows Phone. Nobody is targeting iPhone. And Android is getting targeted more and more. iOS, the operating system in iPhone (and iPad and iPod) was released with the iPhone in the summer of 2007 – five years ago. The system has been targeted by attackers for five years, with no success. We still haven’t seen a single real-world malware attack against the iPhone. This is a great accomplishment and we really have to give credit to Apple for a job well done. Out of all Linux variants, Android is the clear leader in malware.”

Mobile malware vendors cash out by sending text messages and placing calls to expensive premium-rate numbers – this will be around for at least the near future, because it works and it’s easy to do. Eventually we’ll probably see more mobile banking trojans and new trojans targeting micropayments.

Attacks against human rights activists are undeniably coming from China, according to Mr Hypponen. Some of the attacks came from the same source as attacks against defence contractors and governments – although proving it is hard.

Facebook, Twitter, Amazon’s EC2, LinkedIn, Baidu, Blogspot and Google Groups have all had criminal groups launching campaigns from their networks in the past. Some of these platforms are able to spot abusers fairly quickly and kick them out, though.

Anti-virus software and its failings aside, operators are in a key position to move security from a product to a service, and to protect the masses with both managed security solutions on end-user devices and behind-the-scenes monitoring and filtering of malicious traffic.

In March 2011 Dancho proposed that all ISPs should quarantine their malware-infected users until they prove they can use the Internet in a safe way. Mikko agrees this is a good idea, and it is currently being practised successfully by several operators using F-Secure’s solutions.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 2)

In the first part of my coverage on firewalls I discussed the usefulness of firewalls: apart from being one of the few commercial offerings to actually deliver in security, the firewall really does do a great deal for our information security posture when it’s configured well.

Some in the field have advocated that the firewall has seen its day and it’s time for the knackers yard, but these opinions are born of a considerable distance from the coal face in this business. If firewalls are seen as they are in the movies – something to be “broken through” or “punched through” – then they can look useless whenever bad folk compromise networks seemingly effortlessly. But one doesn’t “break through” a firewall. Your profile is assessed. If you fit a certain profile you are allowed through. If not, you absolutely shall not pass.

There have been counters to these arguments in support of firewalls, but the efficacy of well-configured firewalls has only ever been covered at some distance from the nuts and bolts, and so is not fully appreciated. What about segmentation, for example? Are there any other security controls or products that can so clearly be linked with cost savings? Segmentation allows us to devote more resources to the more critical subnets, rather than applying blanket measures across a whole network. As a contractor with a logistics multinational in Prague, I was questioned a few times as to why I was testing all internal Linux resources on a standard-issue UK contract rate. The answer? Because they had a flat, wide-open internal network with only hot-swap redundant firewalls on the perimeter. Regional offices connecting into the data centre had frequent malware problems, with routable access to critical infrastructure.

Back in the late 90s and early noughties, some service providers offered a firewall assessment service, but the engagements lacked focus and direction, and then the service disappeared altogether – partly because of the lack of thought that went into preparation, and also because many in the market really did believe they had nailed firewall configuration. These engagements were delivered in a way that went something like “why do you leave these ports open?”, “because application X needs those ports open”…and that would be the end of that, because the service providers didn’t know application X, or where its IT assets were located, or the business importance of application X. Thirty minutes into the engagement there were already “why are we here?” faces in the room.

As a roaming consultant, I would always ask to see firewall configurations as part of a wider engagement – usually an architecture workshop whiteboard session, or larger scale risk assessment. Under this guise, there is license to use firewall rulebases to tell us a great deal about the organisation, rather than querying each micro-issue.

Firewall rulebases reveal a large part of the true “face” of an organization. Political divisions are revealed, along with the old classic: opening social networks, betting sites and the like only for senior management subnets – and oftentimes some interesting ports are opened only for managers’ secretaries.

Nine times out of ten, when you ask to see firewall rules, faces in the room will change from “this is a nice time-wasting meeting, but maybe I’ll learn something about security” to mild-to-severe discomfort. Discomfort, because there is no hiding place any more. Network and IT ops will often be aware that there are shortcomings, but if we don’t see their firewall rules, they can hide and deflect the conversation in subtle ways. Firewall rulebases reveal all manner of architectural and application-related issues.

To illustrate some firewall configuration and data flow/architectural issues, here are some examples of common issues:

– Internal private resources 1-to-1 NAT’d to public IP addresses: an internal device with a private RFC 1918 address (something like 10. or 192.168. …) has been allocated a public IP address that is routable from the public Internet and clearly “visible” on the perimeter. Why is this a problem? If this device is compromised, the attacker has compromised an internal device and therefore has access to the internal network. What they “see” (can port scan) from there depends on internal network segmentation, but if they upload and run their own tools and warez on the compromised device, it won’t take long to learn a great deal about the internal network make-up. This NAT’ing issue would be a severe problem for most businesses.
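
For those reviewing Linux/iptables-based firewalls, here is a hypothetical sketch of what this pattern tends to look like in a rulebase – the addresses are made-up examples, not anything from a real engagement:

    # 1-to-1 NAT exposing an internal host on a public address
    # (illustrative addresses: 203.0.113.20 public, 10.10.5.20 internal)
    iptables -t nat -A PREROUTING  -d 203.0.113.20 -j DNAT --to-destination 10.10.5.20
    iptables -t nat -A POSTROUTING -s 10.10.5.20 -j SNAT --to-source 203.0.113.20
    # Unless the FORWARD chain tightly limits which ports may reach 10.10.5.20,
    # this internal host is effectively sitting on the perimeter.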

– A listening service was phased out, but the firewall still considers the port to be open: this is a problem whose severity is usually quite high, but just like everything else in security, it depends on a lot of factors. Usually, even in default configurations, firewalls “silently drop” packets that are denied, so there is no answer to a TCP SYN from a port scanner trying to fire up some small talk on a long winter evening. However, when there is no TCP service listening on a higher port (for example) but the firewall also doesn’t block access to that port, there will be a quick response to the effect of “I don’t want to talk, I don’t know how to answer you, or maybe you’re just too boring” – this is bad, but at least there’s a response. Let’s say port 10000 TCP was left unfiltered. A port scanner like nmap will report the other ports as “filtered” but 10000 as “closed”. “Closed” sounds bad, but the attacker’s eyes light up when they see it, because they now have a port on which to bind their shell – a port that will be accessible remotely. If all ports other than listening services are filtered, this presents a problem for the attacker and slows them down, which is ultimately what we’re trying to achieve.
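
A quick, hedged illustration of the difference – the address and ports are made up, and the firewall is shown as a host-based iptables policy purely for brevity:

    # From the attacker's side: a silently dropped port shows as "filtered",
    # a port with no listener and no firewall rule shows as "closed"
    # (the host answers the SYN with a RST)
    nmap -p 443,10000 203.0.113.10

    # On a Linux/iptables box, a default-drop policy avoids the "closed"
    # giveaway - anything not explicitly allowed is silently dropped
    iptables -P INPUT DROP
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT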

– Dual-homed issues: sometimes you will see internal firewalls with rules whose source addresses look out of place. For example, most of the rules are defined with 10.30.x.x and then in amongst them you see a 172.16.x.x. Uh oh. It turns out this is the source address of a dual-homed host: one NIC has an address on a subnet on one side of the firewall, and another NIC sits on the other side. Effectively the dual-homed device is bypassing firewall controls, and if it is compromised, the firewall is rendered ineffective. Nine times out of ten this dual homing is only set up as a shortcut to make admins’ lives easier. I did once see it used for a DMZ, where the internal-facing NIC sat on the same subnet as a critical Oracle database.
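
Two quick ways to hunt for this – the subnet and file name below are just examples: check the interfaces on a suspect host, and grep exported rulebases for source addresses that don’t belong to the expected ranges:

    # On a suspect Linux host: two NICs with addresses on opposite sides of a
    # firewall is the giveaway
    ip addr show        # or: ifconfig -a

    # In an exported rulebase: crude, but flags candidate rules using an
    # out-of-place range (172.16.x.x in this example)
    grep -E '172\.16\.' exported_rulebase.txt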

– VPN gateways in inappropriate places: VPN services should usually be listening on a perimeter firewall. This enables firewalls to control what a VPN user can and cannot “see” once they are authenticated. Generally, the resources made available to remote users should sit in a VPN DMZ – at least give it some consideration. It is surprising (or perhaps not) how often you will see VPN services on internal network devices. So on firewalls such as the inner firewall of a DMZ, you will see classic VPN services permitted to pass inbound! The VPN client authenticates and then has direct access to the internal network – a nice encrypted tunnel for siphoning off sensitive data.

Outbound Rules

Outbound filtering is often ignored, usually because the business is unaware of the nature of attacks and the technical risks. Inbound filtering is usually quite decent, but it’s still the case, as of 2012, that many businesses do not filter any outbound traffic – as in none whatsoever. There are several major concerns when it comes to egress:

– Good netizen: if there is no outbound filtering, your site can be broadcasting all kinds of traffic to all networks everywhere. Sometimes there is nothing malicious in this – it’s just seen as incompetence by others. But there is of course the possibility of internal staff hacking other sites, or of your site being used as a base from which to launch further attacks – with a source IP address registered to your organisation – and that is no small matter.

– Your own firewall can be DoS’d: border firewalls NAT outgoing traffic, translating addresses from private to public space. With some malware outbreaks that generate a lot of traffic, the NAT pool can fill quickly and the firewall can then fail to service legitimate requests. This wouldn’t happen if those packets were simply dropped by an egress rule (see the quick check after this list).

– Dialling home: most malware and manual attacks need to be able to dial home once “inside” the target – for botnets this is essential. Also, some publicly available exploits initiate outbound connections rather than firing up listening shells.
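
On the NAT exhaustion point, here is a quick check for a Linux/netfilter-based firewall – an assumption on my part; commercial firewalls expose their own counters for the same thing:

    # Compare current tracked connections with the table's maximum - a table
    # running near its limit during an outbreak is the self-inflicted DoS
    # described above
    cat /proc/sys/net/netfilter/nf_conntrack_count
    cat /proc/sys/net/netfilter/nf_conntrack_max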

Generally, as with ingress, take the standard approach: start with deny-all, then figure out which internal DNS and SMTP servers need to talk to which external devices, and take the same approach with other services. Needless to say, this has to be backed by corporate security standards, and made into a living process.
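
As a minimal sketch of that approach in iptables syntax – the interface names, addresses and ranges are assumptions for illustration, and a commercial firewall expresses the same policy in its own terms:

    # Default-deny outbound on the border firewall
    iptables -P FORWARD DROP
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # The internal DNS forwarder (10.10.1.53) may query external resolvers
    iptables -A FORWARD -i eth1 -o eth0 -s 10.10.1.53 -p udp --dport 53 -j ACCEPT
    iptables -A FORWARD -i eth1 -o eth0 -s 10.10.1.53 -p tcp --dport 53 -j ACCEPT
    # The internal mail relay (10.10.1.25) may talk SMTP to the outside
    iptables -A FORWARD -i eth1 -o eth0 -s 10.10.1.25 -p tcp --dport 25 -j ACCEPT
    # Log everything else leaving the internal range - useful for spotting
    # dial-home attempts - then let the default policy drop it
    iptables -A FORWARD -i eth1 -o eth0 -s 10.10.0.0/16 -j LOG --log-prefix "EGRESS-DENY: "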

Some specifics on egress:

– NetBIOS broadcasts reveal a great deal about internal resources – block them. In fact, for any type of broadcast, what possible reason can there be for allowing it outside your network? There are other legacy protocols which broadcast nice information for interested parties – Cisco Discovery Protocol, for example.

– Related to the previous point: be as specific as possible with source and destination subnet masks – make the rules as “micro” as possible.

– There is a general principle around proxies for web access and other services: the proxy is the only device that needs access to the Internet; everything else can be blocked.

– DNS: Usually there will be an internal DNS server in private space which forwards queries to a public Internet DNS service. Make sure the DNS server is the only device “allowed out”. Direct connections from other devices to public Internet services should be blocked.

– SMTP: Access to mail services is important for many malware variants, or there is mail client functionality in the malware. Internal mail servers should be the only devices permitted to connect to external SMTP services.

As a final note, for those wishing to find more detail, the book I mentioned in part 1 of this diatribe, “Building Internet Firewalls”, illustrates some different ways to set up services such as FTP and mail, and explains very well the principles of segregated subnets and DMZs.

isPasswordOK?

Not sure how resistant your password is to real-world password enumeration and cracking techniques? There’s an app for that.

The folks over at Literatecode have developed an iPhone app called isPasswordOK to help the average person select a strong password.

When users sign up for online services they are often asked to choose a password. Suppose they enter “Password” – this fails the sign-up form’s test. “P4ssw0rd”, however, passes (there is usually a JavaScript “password strength” indicator doing the assessing), but those meters do not take into account real-world enumeration and cracking techniques. Generally the user is told they have a “Good” or “Fair” password as long as it passes a format test (e.g. one upper case, one lower case, one punctuation mark). Clearly, to those in the know, P4ssw0rd is a bad password – but not according to most online registration form strength indicators.
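
To illustrate the gap between format checks and real-world cracking, here is a crude shell sketch – not how the app works, just an illustration; the wordlist path is an assumption, with rockyou.txt used as a commonly cited example list:

    #!/bin/sh
    # A password that satisfies the usual format rules can still be in every
    # cracker's wordlist, which makes it effectively worthless
    PASS='P4ssw0rd'
    if grep -qxF "$PASS" /usr/share/wordlists/rockyou.txt; then
        echo "Found in a common cracking wordlist - broken regardless of format"
    else
        echo "Not in this particular wordlist (which proves little on its own)"
    fi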

Password authentication poses a nightmare for users and security departments. Passwords can’t be too complex, because then they get written down somewhere others can see them (in one case an employee wrote a password on the ceiling tile above their desk). And obviously they can’t be too simple either. But passwords are all we have in many cases, so let’s do our best to choose a good one – and this is not as simple as many folk believe it to be.

This app is a nice innovation, long overdue, and sorely needed, as demonstrated by the myriad of password compromises globally.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 1)

In my previous article I covered OS and database security in terms of the neglect shown to this area by the information security industry. In the same vein I now take a look at another blast from the past – firewalls. The buzz topics these days are cloud, big data, APT, “cyber”* and BYOD. Firewalls were a buzz topic a very long time ago, but the fact that we moved on from that buzz doesn’t mean we nailed it. And guess what? The newer buzz topics all depend heavily on the older ones. There is no cloud security without properly configured firewalls (and moving assets off-campus means even more thought has to be put into this area), and there shouldn’t be any BYOD if there isn’t a firewall between workstation subnets and critical infrastructure. Good OS/DB security plus thoughtful firewall configs set the stage on which the newer, short-sighted strategies are played out and retrenched.

We have a lot of bleeding-edge software and hardware products in security, backed by fierce marketing engines which set unrealistic expectations, advertised with 5-gold-star ratings in infosec publications, coincidentally next to a full-page ad for the vendor. Out of all these products, the oldest carries the biggest bang for our buck – the firewall. In fact the firewall is one of the few that actually gives us what we expect to get – network access control – and by and large, as a technology it’s mature and it works. At least when we buy a firewall looking for packet filtering, we get packet filtering, unlike another example where we buy a product which allegedly manages vulnerability, but doesn’t even detect vulnerability, let alone “manage” it.

Passwords, crypto, filesystem permissions – these are old concepts. The firewall arrived on the scene some considerable number of years after the aforementioned, but before some of the more recent marketing ideas such as IdM, SIEM, UTM etc. The firewall, along with anti-virus, formed the basis of the earliest corporate information security strategies.

Given the nature of TCP/IP, the next step on from this creation was quite an intuitive one to take. Network access control – not a bad idea! But the fact that firewalls have been around corporate networks for two decades doesn’t mean we have perfected our approach to configuration and deployment of firewalls – far from it.

What this article is not..

“I’m a firewall, I decide which packets are dropped or passed based on source and destination addresses and services”.

Let’s be clear, this article is not about which firewall is the best. New firewall, new muesli. How does one muesli differ from another? By the definition of muesli, not much, or it’s not muesli any more.

Some firewalls have exotic features – even going back 10 years, Check Point FireWall-1 had application-layer trackers such as an FTP passive mode tracker, earlier versions of which crashed the firewall if enabled – thereby introducing DoS as an innovative add-on. In most cases firewalls need to be able to track conversations and deny or pass packets based on unqualified TCP flags (for example) – but these days they all do this. Firewalls are not especially CPU-intensive, but they can be memory-intensive if conversations are being tracked while we’re being DoS’d – although being a firewall doesn’t make a node uniquely vulnerable to SYN floods and so on. The list of considerations in firewall design goes on and on, but by 2012 we have covered off most of the more important ones, and you will find the must-haves and the most useful features in any modern commercial firewall…although I wouldn’t be so sure that this extends to some of the UTM all-in-one matchbox-sized offerings.
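
For anyone who wants to see what “tracking conversations” and flag checking boil down to in practice, here is a minimal iptables-flavoured sketch – a sketch only, not a complete policy:

    # Drop TCP flag combinations that make no sense for a legitimate conversation
    iptables -A FORWARD -p tcp --tcp-flags ALL NONE -j DROP
    iptables -A FORWARD -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
    # Drop anything the connection tracker classes as invalid
    iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP
    # Allow replies belonging to conversations that were already accepted
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT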

Matters such as throughput and bandwidth are matters for network ops in reality. Our concern in security should be more about configuration and placement.

On the matter of which firewall to use, we can go back to the basic tenet of a firewall as in the first paragraph of this subsection – sometimes it is perfectly fine to cobble together an old PC, install Linux on it, and use iptables – but probably not for a perimeter choke-point firewall that has to handle considerable throughput. Likewise, do you want the latest bright-flashing-lights, bridge-of-the-Starship-Enterprise enterprise box for the firewall which separates a 10-node development subnet from the commercial production subnets? Again, probably not – let’s just keep an open mind. Sometimes cheap does what we need. I didn’t mention the term “open source” here because it does tend to evoke quite emotional responses – ok, well, I did mention it actually, sorry, I just couldn’t help myself. There are the usual issues with open source, such as lack of support, but apart from raw throughput, open source is absolutely fine in many cases.

Are firewalls still important?

All attack efforts will be successful given sufficient resources. What we need to do is slow those efforts down such that the resources required outweigh the potential gains from owning the network. Effective firewall configuration helps a great deal in this respect. I still meet analysts who underestimate the effect of a firewall on the security posture.

Taking the classic segregated subnet as in a DMZ-type configuration: by now most of us are aware at least that a DMZ is in most cases advisable, and most analysts can draw a DMZ network diagram on a whiteboard. But why a DMZ? Chiefly we do this to prevent direct connections from untrusted networks to our most valuable information assets. When an outsider port scans us, we want them to “see” only the services we intend the outside world to see, which will usually be the regular candidates: HTTPS, VPN, etc. So the external firewall blocks access to all services apart from those required and, more importantly, only allows access to very specific DMZ hosts – certainly no internal addresses should be directly accessible.

Take the classic example of a DMZ web server application that connects to an internal database. Using firewalls and sensible OS and database configuration, we can add considerable time to an attack effort aimed at compromising the database. Having compromised the DMZ web server, port scanning should then reveal only one or two services on the internal database server, and no other IP addresses need to be visible (usually). The internal firewall limits access from the source address of the DMZ web server to only the listening database service and the IP address of the destination database server. This is a considerably more challenging situation for attackers, as compared with a scenario where the internal private IP space is fully accessible…perhaps one where DMZ servers are not segregated at all and their “real” IP addresses are private RFC 1918 addresses, NAT’d to public Internet addresses to make them routable for clients.
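
Tying the last two paragraphs together, a hedged sketch of the two rulebases in iptables terms – the addresses, interface name and database port (1521, assuming an Oracle listener) are illustrative assumptions:

    # Outer firewall: the world may reach the DMZ web server on HTTPS only
    # (plus the usual ESTABLISHED,RELATED rule for return traffic)
    iptables -A FORWARD -o dmz0 -d 192.168.100.10 -p tcp --dport 443 -j ACCEPT
    iptables -A FORWARD -o dmz0 -j DROP

    # Inner firewall: only the DMZ web server may reach the database host,
    # and only on the database listener port
    iptables -A FORWARD -s 192.168.100.10 -d 10.20.30.40 -p tcp --dport 1521 -j ACCEPT
    iptables -A FORWARD -s 192.168.100.0/24 -j DROP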

Firewalls are not a panacea, especially with so many zero days in circulation, but in an era where even automated attacks can lead to our most financially critical assets disappearing via the upstream link, they can, and regularly do, make all the difference.

We All “Get” Firewalls…right?

There is no judgment being passed here, but it often is the case that security departments don’t have much to offer when it comes to firewall configuration and placement. Network and IT operations teams will try perhaps a couple of times to get some direction with firewalls, but usually what comes back is a check list of “best practices” and “deny all services that are not needed”, some will even take the extraordinary measure of reminding their colleagues about the default-deny, “catch all” rule. But very few security departments will get more involved than this.

IT and network ops teams, by the year 2012 AD, are quite well versed in the wily ways of the firewall, and without any further guidance they will do a reasonable job of firewall configuration – but nine times out of ten there will be shortcomings. Ops folk are rarely schooled in the art of technical risk; it’s not part of their training. If they do understand the tech-risk aspects of network access control, it will have been self-taught. Even if they have attended a vendor course, the course will cover the usage aspects – navigating GUIs and so on – and little of any significance to keeping the bad guys out.

Ops teams generally configure fairly robust ingress filtering, but rarely is any attention given to egress (more on that in part 2 of this offering), or to other aspects such as whether a service is UDP or TCP (with the result that one or the other is left open unnecessarily).

Generally, there are still gaps and areas where businesses fall short in their configuration efforts, and yet I am convinced that in many cases attention moved away from firewalls years ago – as if it’s an area we have aced and can therefore move on to other things.

So where next?

I would like to bring this diatribe to a close for now, until part 2. In the interim I would like to point budding, enthusiastic analysts, SMEs, Senior *, and Evangelists in the direction of some rather nice reads. Try TCP/IP Illustrated, at least Volume 1. Then O’Reilly’s “Building Internet Firewalls”. The latter covers the ins and outs of network architecture and how to firewall specific, commonly used application-layer protocols. This is a good starting point. Also, try some hands-on demo work (sorry – this involves using command shells) with iptables – you’ll love it (I swear by this), and pay some attention to packet logging.
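
As a starting point for that hands-on logging suggestion, a minimal lab sketch – intended for a local test box; add an ACCEPT for your SSH session first if you try this on a remote machine, and note that the log file location varies by distro:

    # Allow replies to conversations we started, log a rate-limited sample of
    # unsolicited inbound SYNs, and let the default policy drop the rest
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --syn -m limit --limit 5/min -j LOG --log-prefix "INBOUND-SYN: "
    iptables -P INPUT DROP

    # Watch the results
    tail -f /var/log/syslog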

In Part 2 I will go over some of my experiences as a consultant with a roaming disposition, related to firewall configuration analysis, and I will cover some guidance on classic misconfigurations – some of which may not be obvious to the reader.