How To Break Into Security – Planet Earth Edition

The venerable Brian Krebs has recently been running some stories from various demigods of the infosec world, aimed at those wishing to enter the information security field – aspiring graduate ninjas, and others seeking the mythical pot of gold at the end of the rainbow.

First up there was Thomas Ptacek’s edition, then we had some pearls of wisdom from Bruce Schneier, Jeremiah Grossman, Richard Bejtlich, and then Charlie Miller.

Thomas Ptacek claimed of the security field: “It’s one of the few technology jobs where the most fun roles are well compensated”, and “if you watched “Sneakers” and ideated a life spent breaking or defending software, great news: infosec can be more fun in real life, and it’s fairly lucrative.” Well…I am not refuting any of this, but in my experience it is quite unusual for jobs in information security to actually be fun.

Thomas talks of the benefits of having extensive programming experience – and this is something I advocate quite strenuously myself (more on that later). Thomas’s viewpoint was centered around appsec, which is fine. For my part, in terms of defending networks, I think we need to look at two main areas: appsec and operating systems / databases. There is some good advice in Thomas’s article about breaking into application security, although I wouldn’t say that this area is everything. There’s a little too much religious fervor about appsec in the article for my liking, just as one often sees a lack of balance in other areas, such as CISSP-worship and malware “reverse engineering” – basically, “my area is the alpha and omega – all that was, is, and ever will be”.

Other areas matter in security besides application security. I wouldn’t say it’s all about appsec, though I would say the two main areas are appsec and operating system and database configuration. “But operating systems and databases are also applications,” I hear you say. Yes, but when we talk appsec in infosec, we usually mean web applications. There are few web attacks where one single exploit leads to something really bad happening, apart perhaps from a SQLi that in itself reveals sensitive database-hosted information.

With regard to web applications I think Thomas is spot-on with his comments about learning web application security assessment, and how to get clued up in this area. Also the comments about Nessus and getting into penetration testing – sad but true.

With regard to Bruce Schneier’s “breaking into” edition – nothing he says is factually incorrect (most of what we talk about is subjective, neither black nor white, but grey), but the comments are not at all close to the coal-face realities of most businesses’ in-house or service providers’ practices. Wannabe security pros reading this will be sure to get grandiose visions of their future lives as security pros – but in 90%-plus of cases the vocational activities of security professionals do not match the picture painted by Bruce Schneier. I’ll explain more on this later.

Richard Bejtlich’s response was centered around getting into penetration testing with Metasploit, and Jeremiah Grossman’s was the most all-encompassing and, in my opinion, the response with the most value for security pro wannabes – although Charlie Miller’s wasn’t far behind. In particular Mr Grossman plays down the effectiveness of accreditation programs, in favor of practical experience – wise words indeed. Charlie Miller had similar opinions.

In Charlie Miller’s response there was a lot of talk of really specialized niche areas like reverse engineering and so on, but he does temper this with “I really do a lot of reverse engineering and binary analysis, which is unusual.” and “for those starting out, it probably makes more sense to learn some languages more useful for web applications, like PHP or Java or something.  The majority of jobs I come across in application security are web applications, so unless you’re a dinosaur like me, you probably want to become a web app expert.  Web application security is a lot easier to get started in as well.”

Charlie Miller’s response sort of segues into the wider scope here, and that is the realities “out there”. The articles are based on responses by folk who’ve rightly become esteemed professionals in their field, and there is some really valuable insight there. The thing is – there is a lot of talk of the security field being a place for artists and magicians, and of being technically demanding, but there are very few places where technical acrobatics in security are seen as having any value to businesses, or even to security line managers for that matter – and therefore such intellectual capital just does not appear on the balance books of these businesses.

We’ve been through a vicious 70 to 80% of a sine wave of pain in security since the late 90s. The security world painted by the fellowship of five assembled by Brian Krebs (speaking of whom, it would be nice to hear his version of the “breaking into” story) seems much more like the world of the late 90s than today. Security was heavily de-engineered through the 2000s. Things started to change around 2010, but we’re still very much in non-tech territory, and many of the security line managers who will interview prospective Security Analysts will not have an IT background, and their security practice will be hands-off, non-tech, check-list based. Anything “tech” to do with information risk management will be handled by an ops team, but there won’t be any “reverse engineering” or “fuzzing” over there…far from it. More like – firewall configuration, running bad vulnerability management suites, monitoring IDS/SIEM logs.

Picking up on some of Bruce Schneier’s comments: “You can be an expert in viruses, or policies, or cryptography.” Policies? OK, this part is true – if you want to be an expert in policies, whatever that means, you can certainly find this in 90%+ of businesses – but is it an economically viable and sustainable position, or for that matter anything that any Homo sapiens would ever really want to do? Probably accountancy would be a better bet.

“Viruses?” Hmm. There are increasing numbers of openings for “malware reverse engineers”, where really what they’re looking for is incident response – they want to know what happened after they discovered that some of their laptops were connecting out to various addresses in places they hadn’t heard of, prior to the click of doom. If you get interviewed for one of these positions, be prepared to answer questions about SIEM technologies and incident response. These openings are not usually associated with reverse engineering to the level of detail of the pattern-makers in the anti-virus software market – and if they are, they needn’t be, and the line manager will get to realize this after a while.

And “cryptography”? We have Bruce’s comments and then we have a title heading from a book by Shostack and Stewart: “Amateurs Study Cryptography; Professionals Study Economics”. So who do you believe here? To be fair, the book was published at the height of the de-engineering phase of security and it sort of fitted with the agenda of the times. I would go with Bruce Schneier again here, but with some qualification (the final paragraph also talks about math): most security departments won’t ever go anywhere near anything mathematical or even crypto-related, and when they do, it will be with a checklist approach that goes something like “is a strong key used?”, “yes”, “ok, good, tick in a box”, or “is DES used?”, “yes”, “ok, that’s bad, I think, anyway use triple DES please” – with no further assistance for the dev team.

With regard to cryptography, right or wrong, it’s really only on tiny islands where the math is seen as relevant – places where they code the apps and people are assigned to review the security of the app. And these territories are keenly disputed. Most of the concerns in the rest of the business world will never get more techy than discussions about key management – which is the more common challenge with crypto anyway (mostly we’re using public, well-vetted crypto algorithms in security, so the challenge is in protection of the key).

There were comments from all respondents about testing applications and breaking into networks. Again, the places where skills like reverse engineering are actually relevant are few and far between. Bruce Schneier painted a grand picture of thinking like a hacker, not just a mere engineer, in order to be able to create systems that are difficult to compromise by the most advanced hackers. But most humans who design systems are not even thinking about security, or they’re on such a tight deadline (with related KPIs and bonuses) that they side-step security. So as a pen test guru wannabe, you may possess extremely high levels of fuzzing, exploit coding, and reversing skills, but you will never get to use them; in fact you will intimidate most interviewers, and you’ll be over-qualified. There will be easier ways to break into systems in most cases. In fact in general, as I commented in an earlier post, security is insufficiently mature in most organizations to warrant any manual penetration testing whatsoever.

So really, what I have had to say here may sound harsh or “negative”, but I would hate for anyone to get into a field that they thought was challenging, only to discover that it’s anything but. I believe things are changing, but it’s at rather a slow pace, and the field of security has been broken for so long, that there are very few around who know how to fix it. Security is getting more challenging, that’s for sure, but for the security pro who goes looking for a job in this field because of the tech challenge aspect: just be very careful about what you’re getting into. Many jobs sound great from the job descriptions posted to recruitment agents, but this is only a show. The reality inside the team is that you may be sent to Siberia if you so much as use a tech-sounding word like “computer” or “IP address” – while this sounds unreal I can assure the reader that such a scenario is most certainly real, although it is of course more often the case that the job would just not be offered to a “techie”.

It’s impossible to cover the jobs aspect of information security in just this article. I had a more comprehensive stab at it in Chapter Six of Security De-engineering. I would say to the prospective security pro, though, that the advice given by the five mentioned in this article is not bad advice at all – it’s just that you may push yourself to higher levels and not see significant benefit from it in your career any time soon. There will always be some benefit, just not as much as you might expect. Certainly, you will have more confidence, but you will also probably over-qualify yourself for your current position.

For the prospective security pro with a tech inclination, getting into security might not be as hard as you thought. Thomas Ptacek mentioned “A good way to move into penetration testing: grab some industry standard tools and use an Amazon EC2 account to set up a “shooting range” to attack. Some of the best-known tools are available for free: the Nessus scanner, for instance, while not an application security tool, is free and can land you a network penetration testing role that you can use as a springboard to breaking applications.” Believe me, this is not a difficult target, and because of the way the security industry is, you could very well land a penetration testing job with the preparation as described by Mr Ptacek.

All I have had to say here is aimed at managing expectations. You may well find that you have to market yourself down a bit in order just to get a foothold in the industry. Once there, by pushing yourself to learn more and acquire more advanced skills, it could be that you eventually osmose towards the ideal job of your dreams. However, such positions are rare in reality. Many of the folk I worked with in the earlier days of my career had the ninja skills discussed in the five articles mentioned here. Once we got to the early 2000s they realized that security was no longer a place for them. Has the demand for these advanced skills returned? It has, to some extent, but the demand is still minuscule compared with the usual skills required by the vast majority of businesses.

Blame The CEO?

I would like to start by issuing a warning about the content in this article. I will be taking cynicism to the next level, so the baby-eyed and “positive” among us should avert their gaze after this first paragraph. For those in tune with their higher consciousness, I will summarise: Can we blame the C-levels for our problems? Answer: no. Ok, pass on through now. More positive vibes may be found in the department of delusion down the hall.

The word “salt” was inserted for the first time ever into the hall of fame of Information Security buzzwords after the LinkedIn hack infamy, and then Yahoo came along and spoiled the ridicule-fest by showing the world that they could do even better than LinkedIn by not actually using any password hashing at all.

There is a tendency among the masses to latch onto little islands of intellectual property in the security world. Just as we see with “cloud”, the “salt” element of the Linkedin affair was given plenty of focus, because as a result of the incident, many security professionals had learned something new – a rare occurrence in the usual agenda of tick-in-box-marking that most analysts are mandated to follow.

With LinkedIn, little coverage was given to the tedious old nebulous “compromise” element, or “how were the passwords compromised?”. No – the “salt” part was much more exciting to hose into blogs and Twitter – but with hundreds of analysts talking about the value of “salting”, the value of this pearl of wisdom was falling exponentially with time – there was a limited window in which to become famous. If you were tardy in showing the world that you understood what “salting” means, your tweet wouldn’t be favourite’d or re-tweeted, and you would have to step back off the stage and return to your usual humdrum existence of entering ticks in boxes, telling devs to use two-factor authentication as a matter of “best practices”, “run a vulnerability scanner against it”, and other such tick-related matters.

Infosec was down and flailing around helplessly, then came the LinkedIn case. The inevitable fall-out from the “salting” incident (I don’t call it the LinkedIn incident any more) was a kick of sand in the face of the already writhing information security industry. Although I don’t know of any specific cases of blanket salting mandates, based on twelve happy years of marriage with infosec, I’m sure they’re as abundant as the stars and occurring as I write this. I am sure that nine times out of ten, whenever devs need to store a password, they are told by CISSP-toting self-righteous analysts (and blindly backed up by their managers) that it is “best practice” and “mandatory” to use salting with passwords – regardless of all the other factors that go into making up the full picture of risk, the operational costs, and other needless overheads. There will be times when salting is a good idea. Other times not. There cannot be a zero-value proposition here – but blanket, parrot-fashion advisories are exactly that.
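For anyone wondering what all the fuss was actually about, here is a minimal sketch of salted password storage (assuming Python’s standard library and PBKDF2 purely for illustration – a real system might reasonably pick bcrypt, scrypt, or Argon2 instead, and the iteration count below is an arbitrary example). The salt is a unique random value stored alongside each hash, so identical passwords don’t produce identical hashes and precomputed lookup tables become useless:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000   # arbitrary example value; tune for your own hardware

def hash_password(password: str):
    """Return (salt, derived_key) for storage; the salt is random but not secret."""
    salt = os.urandom(16)   # unique per password
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)   # constant-time comparison

if __name__ == "__main__":
    salt, key = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, key))  # True
    print(verify_password("wrong guess", salt, key))                   # False
```

Whether that overhead is justified in a given system is exactly the kind of risk-versus-cost question that a blanket “always salt” advisory skips over.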

The subject matter of the previous four paragraphs serves as a recent illustration of our plight in security. My book covers a much larger piece of the circus-o-sphere and it’s certainly too much to even try to summarise here, but we are epic-failing on a daily basis. One of the subjects I cover in Security De-engineering is the role of C-level executives in security, and I ask the question “can we blame the C-levels” for the broken state of infosec?

Let’s take a trip down memory lane. The heady days of the late 90s were owned by technical wizards, sometimes known as Hackers. They had green hair and piercings. If a CEO ran some variant of a Windows OS on her laptop, she was greeted with a stream of expletives. Ok, “best practices” was nowhere to be seen in the response, and it is a much more offensive swear-phrase than any swear word I can think of, but the point is that the Hacker’s riposte could have been better.

Hackers have little or no business acumen. They have the tech talent that the complexities of information security demand, but back when they worked in infosec in the late 90s, they were poorly managed. Artists need an agent to represent them, and there were no agents.

Hackers could theoretically be locked in a room with a cat-flap for food and drink, no email, and no phone. The only person they should be allowed to communicate with is their immediate security line manager. They could be used as a vault of intellectual capital, or a swiss army knife in the organisation. Problem was – the right kind of management was always lacking. Organisations need an interface between themselves and the Hackers. No such interface ever existed unfortunately.

The upper levels of management gave up working with Hackers for various reasons, not just for scaring the living daylights out of their normal earthling colleagues. Then came the early noughties. Hackers were replaced by respectable analysts with suits and ties, who sounded nice, used the words “governance” and “non-repudiation” a lot, and didn’t swear at their managers regardless of ineptitude levels. The problems with the latter, the CASE (Checklist and Standards Evangelist), were illustrated by the “salting” debacle and LinkedIn.

There is a link between information and information security (did you notice the play on words there – “information” was used in…”information”… and also in… “information security” – thereby implying that there might just be a connection). The CASE successors to the realm actually managed to convince themselves (but few others in the business world) that security has nothing to do with information technology. It is apparently all about “management” and “processes”. So – every analyst is now a “manager”?! Then who in the organisation is going to actually talk to ops and devs and solve the risk-versus-cost-of-safeguard puzzles? There are no foot soldiers, only a security department composed entirely of managers.

Another side of our woes is the security products space. Products have been pushed by fierce marketing engines and given ten-out-of-ten ratings by objective information security publications. The products can supposedly automate areas of information risk management and tell us things we didn’t already know about our networks. The problem is that when you automate processes, you’re looking for accurate results. Right? Well, in certain areas such as vulnerability assessment, we don’t even get close to accurate results – and vulnerability assessment is one area where accuracy is sorely needed, especially if we are using automation to assess vulnerability in critical situations.

Some product classes do actually make sense to deploy in some business cases, but the number of cases where something like SIEM (for example) actually makes sense as an investment is a small fraction of the whole.

Security line managers feel the pressure of compliance as the main part of their function. In-house advice is pretty much of the out-house variety in most cases, and service providers aren’t always so objective when it comes to technology acquisition. Products are purchased as a show of diligence for clueless auditors and a short cut to a tick-in-a-box.

So the current security landscape is one of a lack of appropriate skills, especially at security line-management level, which in turn leads to market support for whatever bone-headed product idea can be dreamed up next. The problems come in two boxes then – skills and products.

Is it the case that security analysts and line managers are all of the belief that everything is fine in their corner? The slew of incidents, outgoing connections to strange addresses in eastern Europe, and the loss of ownership of workstation subnets – it’s not through any fault of information security professionals? I have heard some use the excuse “we can never keep out bad guys all the time” – which actually is true, but there is little real confidence in the delivery of this message. Even among the most confidence-projecting of us, there is an inward sense of disharmony with things. We all know, just from intuition, that security is about IT (not just business) and that the value we offer to businesses is extremely limited in most cases.

CEOs and other silver-heads read non-IT publications, and oftentimes incidents will be reported, even in publications such as the Financial Times. Many of them are genuinely concerned about their information assets, and they will ask for updates from someone like a CISO. It is unlikely, as some suggest, that they don’t care about information security, and it is also unlikely, as is often claimed, that security budgets are rejected without any consideration.

CEOs will make decisions on security spending based on available information. Have they ever been in a position where they can trust us with our line reporting? Back in the 90s they were sworn at with business-averse rhetoric. Later they were bombarded with IT-averse rhetoric, green pie charts from expensive vulnerability management suites, delivered with a perceptible lack of confidence in analyst skills and available tools.

So can we blame CEOs? Of course not, and our prerogative now should be re-engineering of skills, with a better system of “graduation” through the “ranks” in security, and an associated single body of accreditation (Chapter 11 of Security De-engineering covers this in more detail). With better skills, the products market would also follow suit and change radically. All of this would enable CISOs to report on security postures with confidence, which in turn enables trust at the next level up the ladder.

The idea that CEOs are responsible for all our problems is one of the sacred holy cows of the security industry (along with some others that I will be covering). Ladies and gentlemen: security analysts, managers, self-proclaimed “Evangelists”, “Subject Matter Experts”, and other ego-packing gurus of our time are responsible for the problems.

The Perils Of Automation In Vulnerability Assessment

Those who have read my book will be familiar with this topic, but even if literally everyone had read the book already, I would still be covering this matter, because the magnitude of the problem demands coverage, and more coverage. Even when we reach the point of “we, the 99%, do understand that we really shouldn’t be doing this stuff any more”, the severity of the issue demands that, should there still be a lingering one per cent, yet further coverage is warranted.

The specific area of information security in which automation fails completely (yet we still persist in engaging with such technology) is vulnerability scanning, in particular unauthenticated vulnerability scanning, in relation to black box scanning of web applications and networks. “Run a scanner by it” still appears in so many articles and sound bites in security – it’s still very much part of the furniture. Very expensive software suites are built on the use of automated unauthenticated scanning – in some cases taking an open source scanning engine, wrapping a nice GUI around it with pie charts, and slapping a 25K USD price tag on it.

As of 2012 there are still numerous supporters of vulnerability scanning. The majority still seem to really believe the premise that it is possible (or worse…”best practices”), by use of unauthenticated vulnerability scanning, to automatically deduce a picture of vulnerability on a target – a picture that does not come with a bucket load of condiments in the way of significant false negatives.

False positives are a drain on resources – and yes, there’s a bucket load of those too – but false negatives, in critical situations, are not what the doctor ordered.

Even some of the more senior folk around (note: I did not use the word “Evangelist”) support the use of these tools. Whereas none of them would ever advocate substituting an auto-scan for manual penetration testing, there does seem to be a great deal of “positivity” around the scanning scene. I think this is all just the zen talking to be honest, but really when we engage with zen, we often disengage with reality and objectivity. It’s ok to say bad stuff occasionally; who knows, it might even be in line with the direction given to one’s life by one’s higher consciousness.

Way back in the day, when we started off on our path of self-destruction, I gave a presentation on auto-scanning and false expectations, and I duly suffered the ignominy of being accused of carrying Luddite tendencies. But here’s the thing: we had already outsourced our penetration testing to some other firm somewhere – so what was it that I was afraid of losing? Yes, I was a manual tester person, but it was more than 12 months since we outsourced all that jazz – and I wasn’t about to start fighting to get it back. Furthermore, there were no actual logical objections put forward. The feedback was little more than primordial groans and remote virtual eye rolling – especially when I displayed a chart that showed unauthenticated scanning carrying similar value to port scanning. Yes – it is almost that bad.

It could be because of my exposure to automated scanners that I was able to see the picture as clearly as I did. Actually in the first few runs of a scanning tool (it was the now retired Cybercop Scanner – it actually displayed a 3D rotating map of a network – well, one subnet anyway) I wasn’t aware myself of the lack of usefulness of these tools. I also used other tools to check results, but most of the time they all returned similar results.

Over the course of two years I conducted more than one hundred scans of client perimeters and internal subnets, all with similar results. During this time I was sifting through the endless detritus of false positives, with the realization that in some cases I was spending literally hours dissecting findings. In many cases it was first necessary to figure out what the tool was actually doing in deducing its findings, and for this I used a test Linux box and Ethereal (now Wireshark).

I’m not sure the verb “testing” is appropriate, because it was clear that the tool wasn’t actually doing any testing. In most cases, especially with listening services such as Apache and other webservers, the tool just grabs a banner, finds a version string, and then does a correlation look-up in its database of publicly declared vulnerabilities. What is produced is a list of publicly declared vulnerabilities for the detected version. No actual “probing” is conducted, no testing as such.
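To make that concrete, here is a minimal sketch of what the banner-grab-and-look-up approach amounts to (the target address, the advisory identifiers, and the tiny lookup table are all hypothetical, purely for illustration – real scanners just do this at much larger scale):

```python
import re
import socket

# Hypothetical lookup table: version-string fragment -> "known" advisory IDs.
VULN_DB = {
    "Apache/2.2.3": ["ADVISORY-AAAA", "ADVISORY-BBBB"],   # placeholder identifiers
}

def grab_server_banner(host: str, port: int = 80, timeout: float = 5.0) -> str:
    """Send a bare HTTP HEAD request and return the Server header, if any."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        response = sock.recv(4096).decode(errors="replace")
    match = re.search(r"^Server:\s*(.+)$", response, re.MULTILINE)
    return match.group(1).strip() if match else ""

def scan(host: str):
    """The 'findings' are nothing more than a dictionary look-up on the banner text."""
    banner = grab_server_banner(host)
    findings = []
    for fragment, advisories in VULN_DB.items():
        if fragment in banner:
            findings.extend(advisories)
    return findings

if __name__ == "__main__":
    print(scan("192.0.2.10"))   # TEST-NET address, illustration only
```

Nothing in that flow exercises the service itself – a back-ported patch, a changed banner, or a flaw with no version fingerprint slips straight past it, which is where the false negatives come from.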

The few tests that produce reasonably reliable returns are those such as SNMP community string tests (or as reliable as UDP allows), or another blast from the past – the finger service “intelligence” vulnerability (no comment). The tools now have four-figure numbers of test patterns, fewer than 10% of which constitute acceptably accurate tests. These tools should be able to conduct some FTP configuration tests, because it can all be done with politically correct “I talk to you, you talk to me, I ask some questions, you give me answers” testing. But no. Something like a test for anonymous FTP being enabled works for a few FTP servers, but not for some of the other more popular FTP packages. They all return different responses to the same probe, you see…
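For contrast, here is a minimal sketch of what an actual protocol-level check for anonymous FTP can look like (hypothetical target address, assuming only Python’s standard ftplib): rather than inferring anything from a banner, it simply tries the anonymous login and sees what the server says. A real scanner would still need to handle the response quirks of each FTP implementation, which is exactly the problem described above.

```python
import ftplib

def anonymous_ftp_enabled(host: str, timeout: float = 10.0) -> bool:
    """Attempt a real anonymous login instead of guessing from version strings."""
    try:
        ftp = ftplib.FTP(host, timeout=timeout)
        ftp.login()                  # defaults to user "anonymous"
        ftp.quit()
        return True
    except ftplib.error_perm:        # login rejected: anonymous access disabled
        return False
    except OSError:                  # unreachable, filtered, or no FTP service at all
        return False

if __name__ == "__main__":
    print(anonymous_ftp_enabled("192.0.2.21"))   # TEST-NET address, illustration only
```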

I mentioned Cybercop Scanner before, but it’s important not to get hung up on product names. The key is the nature of the scanning itself and its practical limitations. Much of our beloved security software is not coded by devs who have any inkling whatsoever of anything to do with security, but really, we can have a tool designed and produced with all the miracles that human ingenuity affords, and at some point we still hit a very low and very hard ceiling in terms of what we can achieve with unauthenticated vulnerability assessment.

With automated vulnerability assessment we’re not doing anything that can destabilize a service (there are some DoS tests and “potentially disruptive tests” but these are fairly useless). We do not do something like running an exploit and making shell connection attempts, or anything of the sort. So what we can really achieve will always be extremely limited. Anyway, why would we want to do any of this when we have a perfectly fine root account to use? Or is that not something we really do in security (get on boxes and poke around as uid=0)? Is that ops ninja territory specifically (See my earlier article on OS Security, and as was said recently by a famous commentator in our field: “Platforms bitches!”)?

The possibility exists to check everything we ever needed to check with authenticated scanning, but here, as of 2012, we are still some way short – and that is largely because of a lack of client demand (crikey)! Some spend a cajillion on a software package that does authenticated testing of most popular OSs, plus unauthenticated false positive generation, and _only_ use the sophisticated, resource-intensive false positive generation engine – “that fixes APTs”.

The masses seem to be more aware of the shortcomings with automated web application vulnerability scanners, but anyway, yes, the picture here is similarly harsh on the eye. Spend a few thousand dollars on these tools? I can’t see why anyone would do that. Perhaps because the tool was given 5 star ratings by unbiased infosec publications? Meanwhile many firms continue to bet their crown jewels on the use of automated vulnerability assessment.

The automobile industry gradually phased in automation over a few decades, but even today there are still plenty of actual Homo sapiens working in car factories. We should only ever automate processes when we can get results that are accurate within the bounds of acceptable risk. Is it acceptable to use unauthenticated automated scanning as the sole means of vulnerability assessment for the top 20% of our most critical devices? It is true that we can never detect every problem, and what is safe today may not be safe tomorrow. But we don’t want to miss the most glaring critical vulnerabilities either – yet this is exactly what the current practice of the majority of businesses invites.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 2)

In the first part of my coverage on firewalls I talked about the usefulness of firewalls; apart from being one of the few commercial offerings to actually deliver in security, the firewall really does do a great deal for our information security posture when it’s configured well.

Some in the field have advocated that the firewall has seen its day and it’s time for the knacker’s yard, but these opinions are formed at a considerable distance from the coal face in this business. When firewalls are imagined as they are in the movies – something to be “broken through” or “punched through” – they can look useless once bad folk have compromised networks seemingly effortlessly. But one doesn’t “break through” a firewall. Your traffic is assessed against a profile. If you fit that profile you are allowed through. If not, you absolutely shall not pass.

There have been counters to these arguments in support of firewalls, but the extent of the efficacy of well-configured firewalls has only ever been covered at some distance from the nuts and bolts, and so is not fully appreciated. What about segmentation, for example? Are there any other security controls and products that can so undisputedly be linked with cost savings? Segmentation allows us to devote more resources to more critical subnets, rather than applying blanket measures across a whole network. As a contractor with a logistics multinational in Prague, I was questioned a few times as to why I was testing all internal Linux resources, on a standard issue UK contract rate. The answer? Because they had a flat, wide open internal network with only hot-swap redundant firewalls on the perimeter. Regional offices connecting into the data centre had frequent malware problems, with routable access to critical infrastructure.

Back in the late 90s and early noughties, some service providers offered a firewall assessment service, but the engagements lacked focus and direction, and then this service disappeared altogether…partly because of the lack of thought that went into preparation, and also because many in the market really did believe they had nailed firewall configuration. These engagements were delivered in a way that went something like “why do you leave these ports open?”, “because application X needs those ports open”…and that would be the end of that, because the service providers didn’t know application X, or where its IT assets were located, or the business importance of application X. Thirty minutes into the engagement there were already “why are we here?” faces in the room.

As a roaming consultant, I would always ask to see firewall configurations as part of a wider engagement – usually an architecture workshop whiteboard session, or larger scale risk assessment. Under this guise, there is license to use firewall rulebases to tell us a great deal about the organisation, rather than querying each micro-issue.

Firewall rulebases reveal a large part of the true “face” of an organization. Political divisions are revealed, along with the old classic: opening social networks, betting sites (and such-like) only for senior management subnets, and oftentimes some interesting ports opened only for managers’ secretaries.

Nine times out of ten, when you ask to see firewall rules, faces in the room will change from “this is a nice time-wasting meeting, but maybe I’ll learn something about security” to mild-to-severe discomfort. Discomfort – because there is no hiding place any more. Network and IT ops will often be aware that there are some shortcomings, but if we don’t see their firewall rules, they can hide and deflect the conversation in subtle ways. Firewall rulebases reveal all manner of architectural and application-related issues.

To illustrate, here are some common firewall configuration and data flow/architectural issues:

– Internal private resources 1-to-1 NAT’d to public IP addresses: an internal device with a private RFC 1918 address (something like 10. or 192.168. …) has been allocated a public IP address that is routable from the public Internet and clearly “visible” on the perimeter. Why is this a problem? If this device is compromised, the attacker has compromised an internal device and therefore has access to the internal network. What they “see” (can port scan) from there depends on internal network segmentation, but if they upload and run their own tools and warez on the compromised device, it won’t take long to learn a great deal about the internal network make-up. This NAT’ing problem would be a severe one for most businesses.

– A listening service was phased out, but the firewall still allows access to the port: this is a problem whose severity is usually quite high, but just like everything else in security, it depends on a lot of factors. Usually, even in default configurations, firewalls “silently drop” packets that are denied, so there is no answer to a TCP SYN from a port scanner trying to fire up some small talk of a long winter evening. However, when there is no TCP service listening on a higher port (for example) but the firewall also doesn’t block access to that port, there will be a quick RST response to the effect of “I don’t want to talk, I don’t know how to answer you, or maybe you’re just too boring” – so at least there’s a response. Let’s say port 10000 TCP was left unfiltered. A port scanner like nmap will report other ports as “filtered” but 10000 as “closed”. “Closed” sounds harmless, but the attacker’s eyes light up when seeing this…because they now have a port on which to bind their shell – a port that will be accessible remotely. If all ports other than listening services are filtered, this presents a problem for the attacker and slows them down, which is what we’re trying to achieve ultimately (a small probe sketch after this list illustrates the filtered-versus-closed difference).

– Dual-homed issues: sometimes you will see internal firewalls with rules for source addresses that look out of place. For example, most of the rules are defined with 10.30.x.x and then amidst them you see a 172.16.x.x. Uh oh. Turns out this is a source address for a dual-homed host. One NIC has an address for a subnet on one side of a firewall, plus one other NIC on the other side of the firewall. So effectively the dual-homed device is bypassing firewall controls. If this device is compromised, the firewall is rendered ineffective. Nine times out of ten, this dual-homing is only set up as a short cut for admins to make their lives easier. I did see this once for a DMZ, where the internal NIC was on the same subnet as a critical Oracle database.

– VPN gateways in inappropriate places: VPN services should usually be listening on a perimeter firewall. This enables firewalls to control what a VPN user can and cannot “see” once they are authenticated. Generally, the resources made available to remote users should be in a VPN DMZ – at least give it some consideration. It is surprising (or perhaps not) how often you will see VPN services on internal network devices. So on firewalls such as the inner firewall of a DMZ, you will see classic VPN TCP services permitted to pass inbound! The VPN client authenticates and then has direct access to the internal network – a nice encrypted tunnel for syphoning off sensitive data.
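To make the filtered-versus-closed point above concrete, here is a minimal probe sketch (hypothetical target address, Python standard library only): a filtered port gives no reply at all so the connection attempt times out, a closed-but-unfiltered port answers promptly with a RST, and an open port completes the handshake.

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a TCP port roughly the way a port scanner would."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"                  # handshake completed: something is listening
    except socket.timeout:
        return "filtered"              # SYN silently dropped, no reply at all
    except ConnectionRefusedError:
        return "closed"                # RST received: reachable, but nothing listening
    except OSError:
        return "error"                 # unreachable host, ICMP rejection, etc.
    finally:
        s.close()

if __name__ == "__main__":
    target = "192.0.2.15"              # TEST-NET address, illustration only
    for port in (443, 10000, 31337):
        print(port, probe(target, port))
```

The “closed” result is the interesting one from the attacker’s perspective: it proves the firewall lets traffic to that port through, so anything they later bind there will be reachable remotely.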

Outbound Rules

Outbound filtering is often ignored, usually because the business is unaware of the nature of attacks and technical risks. Inbound filtering is usually quite decent, but it’s still the case as of 2012 that many businesses do not filter any outbound traffic – as in none whatsoever. There are several major concerns with egress:

– Good netizen: if there is no outbound filtering, your site can be broadcasting all kinds of traffic to all networks everywhere. Sometimes there is nothing malicious in this…it’s just seen as incompetence by others. But then of course there is the possibility of internal staff hacking other sites, or of your site being used as a base from which to launch other attacks – with a source IP address registered under your organisation’s ownership – and this is no small matter.

– Your own firewall can be DoS’d: border firewalls NAT outgoing traffic, with address translation from private to public space. With some malware outbreaks that involve a lot of traffic generation, the NAT pool can fill quickly and the firewall’s NAT’ing can fail to service legitimate requests. This wouldn’t happen if these packets were just dropped.

– Dialing home: it is an essential function of most malware and manual attacks to be able to phone home once “inside” the target – for botnets, for example, this is critical. Plus, some publicly available exploit payloads initiate outbound (reverse shell) connections rather than fire up listening shells.

Generally, as with ingress, take the standard approach: start with deny-all, then figure out which internal DNS and SMTP servers need to talk to which external devices, and take the same approach with other services. Needless to say, this has to be backed by corporate security standards, and made into a living process.
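As a conceptual sketch only – this is not any particular firewall’s syntax, and the addresses and rules below are entirely hypothetical – the deny-all-by-default egress logic amounts to an ordered rule match with an implicit final deny:

```python
import ipaddress

# Hypothetical egress policy: (source network, destination port, action).
# First match wins; anything that matches nothing is dropped.
EGRESS_RULES = [
    ("10.1.1.10/32", 53,  "allow"),   # internal DNS forwarder -> external DNS
    ("10.1.1.20/32", 25,  "allow"),   # internal mail relay    -> external SMTP
    ("10.1.2.5/32",  443, "allow"),   # web proxy              -> external HTTPS
]

def egress_decision(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for network, port, action in EGRESS_RULES:
        if src in ipaddress.ip_network(network) and dst_port == port:
            return action
    return "deny"   # implicit default-deny catch-all

if __name__ == "__main__":
    print(egress_decision("10.1.1.10", 53))    # allow: the DNS forwarder
    print(egress_decision("10.1.50.99", 53))   # deny: workstation doing DNS directly out
    print(egress_decision("10.1.50.99", 6667)) # deny: would-be bot dialing home on IRC
```

The point of the exercise is the inventory it forces: you cannot write the allow lines until you know which internal DNS forwarders, mail relays, and proxies are actually supposed to talk to the outside world.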

Some specifics on egress:

– NetBIOS broadcasts reveal a great deal about internal resources – block them. In fact, for any type of broadcast – what possible reason can there be for allowing it outside your network? There are other legacy protocols which broadcast nice information for interested parties – Cisco Discovery Protocol, for example.

– Related to the previous point: be as specific as possible with subnet masks. Make these as “micro” as possible.

– There is a general principle around proxies for web access and other services: the proxy is the only device that needs direct access to the Internet; others can be blocked.

– DNS: Usually there will be an internal DNS server in private space which forwards queries to a public Internet DNS service. Make sure the DNS server is the only device “allowed out”. Direct connections from other devices to public Internet services should be blocked.

– SMTP: Access to mail services is important for many malware variants, or there is mail client functionality in the malware. Internal mail servers should be the only devices permitted to connect to external SMTP services.

As a final note, for those wishing to find more detail, the book I mentioned in part 1 of this diatribe, “Building Internet Firewalls”, illustrates some different ways to set up services such as FTP and mail, and explains very well the principles of segregated subnets and DMZs.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 1)

In my previous article I covered OS and database security in terms of the neglect shown to this area by the information security industry. In the same vein I now take a look at another blast from the past – firewalls. The buzz topics these days are cloud, big data, APT, “cyber”* and BYOD. Firewall was a buzz topic a very long time ago, but the fact that we moved on from that buzz topic doesn’t mean we nailed it. And guess what? The newer buzz topics all depend heavily on the older ones. There is no cloud security without properly configured firewalls (and moving assets off-campus means even more thought has to be put into this area), and there shouldn’t be any BYOD if there are no firewalls between workstation subnets and critical infrastructure. Good OS/DB security, plus thoughtful firewall configs, sets the stage on which the newer short-sighted strategies are played out and retrenched.

We have a lot of bleeding edge software and hardware products in security backed by fierce marketing engines which set unrealistic expectations, advertised with 5 gold star ratings in infosec publications, coincidentally next to a full page ad for the vendor. Out of all these products, the oldest carries the biggest bang for our buck – the firewall. In fact the firewall is one of the few that actually gives us what we expect to get – network access control – and by and large, as a technology it’s mature and it works. At least when we buy a firewall looking for packet filtering, we get packet filtering, unlike another example where we buy a product which allegedly manages vulnerability, but doesn’t even detect vulnerability, let alone “manage” it.

Passwords, crypto, filesystem permissions – these are old concepts. The firewall arrived on the scene some considerable number of years after the aforementioned, but before some of the more recent marketing ideas such as IdM, SIEM, UTM etc. The firewall, along with anti-virus, formed the basis of the earliest corporate information security strategies.

Given the nature of TCP/IP, the next step on from this creation was quite an intuitive one to take. Network access control – not a bad idea! But the fact that firewalls have been around corporate networks for two decades doesn’t mean we have perfected our approach to configuration and deployment of firewalls – far from it.

What this article is not…

“I’m a firewall, I decide which packets are dropped or passed based on source and destination addresses and services”.

Let’s be clear, this article is not about which firewall is the best. New firewall, new muesli. How does one muesli differ from another? By the definition of muesli, not much, or it’s not muesli any more.

Some firewalls have exotic features – even going back 10 years, Check Point Firewall-1 had application layer trackers such as FTP passive mode trackers, earlier versions of which crashed the firewall if enabled – thereby introducing DoS as an innovative add-on. In most cases firewalls need to be able to track conversations and deny/pass packets based on unqualified TCP flags (for example) – but these days they all do this. Firewalls are not so CPU-intensive, but they can be memory-intensive if conversations are being tracked and we’re being DoS’d – although being a firewall doesn’t make a node uniquely vulnerable to SYN floods and so on. The list of considerations in firewall design goes on and on, but by 2012 we have covered off most of the more important ones, and you will find the must-haves and the most useful features in any modern commercial firewall…although I wouldn’t be sure that this covers some of the UTM all-in-one matchbox-sized offerings.

Matters such as throughput and bandwidth are matters for network ops in reality. Our concern in security should be more about configuration and placement.

On the matter of which firewall to use, we can go back to the basic tenet of a firewall as in the first paragraph of this subsection – sometimes it is perfectly fine to cobble together an old PC, install Linux on it, and use iptables – but probably not for a perimeter choke-point firewall that has to handle considerable throughput. Likewise, do you want the latest bright-flashing-lights, bridge-of-the-Starship-Enterprise enterprise box for the firewall which separates a 10-node development subnet from the commercial business production subnets? Again, probably not – let’s just keep an open mind. Sometimes cheap does what we need. I didn’t mention the term “open source” here because it does tend to evoke quite emotional responses – ok, well, I did mention it actually, sorry, just couldn’t help myself there. There are the usual issues with open source such as lack of support, but apart from bandwidth, open source is absolutely fine in many cases.

Are firewalls still important?

All attack efforts will be successful given sufficient resources. What we need to do is slow down these efforts such that the resources required outweigh the potential gains from owning the network. Effective firewall configuration helps a great deal in this respect. I still meet analysts who underestimate the effect of a firewall on the security posture.

Take the classic segregated subnet as in a DMZ-type configuration. By now most of us are aware at least that a DMZ is in most cases advisable, and most analysts can draw a DMZ network diagram on a whiteboard. But why DMZ? Chiefly we do this to prevent direct connections from untrusted networks to our most valuable information assets. When an outsider port scans us, we want them to “see” only the services we intend the outside world to see, which will usually be the regular candidates: HTTPS, VPN, etc. So the external firewall blocks access to all services apart from those required, and more importantly, it only allows access to very specific DMZ hosts – certainly no internal addresses should be directly accessible.

Take the classic example of a DMZ web server application that connects to an internal database. Using firewalls and sensible OS and database configuration, we can create a situation that adds considerable time to an attack effort aimed at compromising the database. Having compromised the DMZ webserver, port scanning should then reveal only one or two services on the internal database server, and no other IP addresses need to be visible (usually). The internal firewall limits access from the source address of the DMZ webserver to only the listening database service and the IP address of the destination database server. This is a considerably more challenging situation for attackers, as compared with a scenario where the internal private IP space is fully accessible…perhaps one where DMZ servers are not at all segregated and their “real” IP addresses are private RFC 1918 addresses, NAT’d to public Internet addresses to make them routable for clients.
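As a rough illustration of that worked example (the addresses are hypothetical and the rulebase is deliberately tiny – real internal firewalls are of course more involved), the internal firewall’s rules determine what a scan from the compromised DMZ web server can even reach:

```python
import ipaddress

# Hypothetical internal firewall rulebase: (source, destination, dst_port, action),
# with an implicit default deny for anything not listed.
RULES = [
    ("192.168.10.5/32", "10.20.0.15/32", 1521, "allow"),  # DMZ web server -> Oracle listener only
]

def visible_from(src_ip: str, candidates):
    """Return the (host, port) pairs a scan from src_ip would find reachable."""
    src = ipaddress.ip_address(src_ip)
    reachable = []
    for host, port in candidates:
        for rule_src, rule_dst, rule_port, action in RULES:
            if (src in ipaddress.ip_network(rule_src)
                    and ipaddress.ip_address(host) in ipaddress.ip_network(rule_dst)
                    and port == rule_port
                    and action == "allow"):
                reachable.append((host, port))
                break
    return reachable

if __name__ == "__main__":
    internal_targets = [("10.20.0.15", 1521), ("10.20.0.15", 22),
                        ("10.20.0.30", 445), ("10.20.0.31", 3389)]
    # From the compromised DMZ web server, only the database listener is visible.
    print(visible_from("192.168.10.5", internal_targets))
```

With only the database listener reachable, the attacker’s next step has to go through that one service, which is exactly the slow-down the segmentation is there to buy.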

Firewalls are not a panacea, especially with so many zero days in circulation, but in an era where even automated attacks can lead to our most financially critical assets disappearing via the upstream link, they can, and regularly do, make all the difference.

We All “Get” Firewalls…right?

There is no judgment being passed here, but it often is the case that security departments don’t have much to offer when it comes to firewall configuration and placement. Network and IT operations teams will try perhaps a couple of times to get some direction with firewalls, but usually what comes back is a checklist of “best practices” and “deny all services that are not needed”; some will even take the extraordinary measure of reminding their colleagues about the default-deny, “catch all” rule. But very few security departments will get more involved than this.

IT and network ops teams, by the year 2012 AD, are quite well versed in the wily ways of the firewall, and without any further guidance they will do a reasonable job of firewall configuration – but 9 times out of 10 there will be shortcomings. Ops peeps are rarely schooled in the art of technical risk. It’s not part of their training. If they do understand the tech risk aspects of network access control, it will have been self-taught. Even if they have attended a course by a vendor, the course will cover the usage aspects, as in navigating GUIs and so on, and little of any significance to keeping bad guys out.

Ops teams generally configure fairly robust ingress filtering, but rarely is there any attention given to egress (more on that in part 2 of this offering), or to other aspects such as whether services are UDP or TCP (with the result that one or the other is left open).

Generally, up to now, there are still gaps and areas where businesses fall short in their configuration efforts, and I am convinced that in many cases attention moved away from firewalls many years ago – as if it’s an area that we have aced and so we can move on to other things.

So where next?

I would like to bring this diatribe to a close for now, until part 2. In the interim I would also like to point budding, enthusiastic analysts, SMEs, Senior *, and Evangelists in the direction of some rather nice reads. Try out TCP/IP Illustrated, at least Volume 1. Then O’Reilly’s “Building Internet Firewalls”. The latter covers the ins and outs of network architecture and how to firewall specific commonly used application layer protocols. This is a good starting point. Also, try some hands-on demo work (sorry – this involves using command shells) with iptables – you’ll love it (I swear by this), and pay some attention to packet logging.

In Part 2 I will go over some of my experiences as a consultant with a roaming disposition, related to firewall configuration analysis, and I will cover some pointers related to classic misconfigurations – some of which may not be so obvious to the reader.

The Place of Pen Testing In The Infosec Strategy

The subject of network penetration testing, as distinct from application security testing, has been given petabytes of coverage since the late 90s, but in how businesses approach network penetration testing there are still severe shortcomings in terms of return on investment.

A pre-qualifier to save the reader some time: if your concern in security is purely compliance, you need not read on.

Going back to the birth of network penetration testing as a monetized service, we have gone through a transition from good ground-level skills (but poor management skills) in the mid-to-late 90s, to just, well…poor everything. This is in no way a reflection on the individuals involved. For the analysts doing the testing, modern testing methodology and conditions are not conducive to driving the analyst to learn deep analytical skills. For managers – the industry as a whole hasn’t identified the need to acquire managers who have “graduated” from tech-centric infosec backgrounds. The industry is still young and still making mistakes.

One thing has remained constant through the juvenile years of the industry and that has been poor management. The erosion of decent analytical skills from network penetration testing is ubiquitous apart from a few niche areas (and the dark side) – but this is by and large a consequence of bad management – and again is more of a reflection on the herd-mentality / downward momentum of the industry in general rather than the individuals. Managers need to have something like a good balance of business acumen and knowledge of technical risks, but in security, most of us still think it’s OK to have managers who are heavily weighted towards the business end of the scale.

Several changes took place in service delivery around the early 2000s. One was the imposition of testing restrictions which reduced the effectiveness of testing. When the analysts explained the negative impact of the restrictions (such as limited testing IP ranges, limited use of exploits on production systems, and fixed source IP ranges), the message was either misunderstood by their managers or mis-communicated to the clients who were imposing the restrictions. So the restrictions took hold. A penetration test that is so heavily restricted can in no way come even close to a simulated attack, or even a base-level test.

The other factor was improved firewall configurations. There was one major aspect of network security that did improve from the mid 90s until today, and that was firewall configurations. With improved firewall configurations came fewer attack channels, but the testing restrictions had a larger impact on the perceived value of the remote testing service. Improved firewalls may have partly been a result of the earlier penetration tests, but the restrictions turned the testing engagements into an unfair fight.

There were wider forces at work in the security world in the early 2000s which also contributed to the loss of quality from penetration testing delivery, but these are beyond the scope of this article. For all intents and purposes, penetration testing became such a low quality affair that clients stopped paying for it unless they were driven by regulations to perform periodic tests of their perimeter “by an independent third party” – and the situation that arose was one where clients cared not a jot about quality. This lack of interest was passed on to service providers, who in some cases actually reprimanded analysts for trying to be, well…analytical. Reason? To be analytical is to retard delivery, and to retard delivery is to reduce profits. Service providers were now a production line for poor quality penetration tests.

So I have explained enough about the problems and how they came to be. To be fair, I am not the only one who has identified these issues; it’s just that there aren’t so many of us around these days who were pen testers back in the 90s, and who are willing to put pen to paper on these issues.

I think it should be clear by now that a penetration test with major restrictions applied has only the value that comes from passing the audit. Apart from that? It’s a port scan. Anything else? Not in most cases. Automated tools are used heavily, and tools such as vulnerability scanners never were more than glorified port scanners anyway. This is not because the vendors have done a poor job (although in some cases they have); it comes from the nature of remote unauthenticated vulnerability assessment – it’s almost impossible to deduce anything about the target aside from port scanning and grabbing a few service banners.

But…perhaps with the spate of incidents reported as being a 2010-on phenomenon (which has really been prevalent all through the 2000s) there might be some interest in passing the audit, PLUS getting something else in return for the investment. For this discussion we need to assume utopian conditions. Anything other than unrestricted testing (which also includes use of zero days – another long topic which I’ll side-step for now), delivered by highly skilled testers (with hacker-like skills but not necessarily Hackers), will always, without fail, be a waste of resources.

The key here is really the level of knowledge of internal IT and security staff at the target network under testing. They realistically have to know everything about their network – every nook and cranny, every router, firewall, application, OS, and how they’re all connected. A penetration test should never be used as a substitute for this knowledge. Typical testing engagements from my experience are carried out by 3 to 4 analysts over a period of a maximum of 2 weeks. This isn’t enough time for some outside party to teach target staff all they need to know about their own private network. Indeed in most cases, one thousand such tests would be insufficient.

In the scenario where both client staff and testers are sufficiently skilled-up, a penetration test has at least the potential of delivering good value on top of base compliance. A test under these conditions can perhaps find the slightest cracks in the armor – areas where the client’s IT and security staff may have missed something: a misconfiguration, an unauthorized change, signs of a previous incident that went undetected, a previously unknown local privilege escalation vector – and, importantly, there won’t be the white noise of findings that comes from the case where there are huge holes in the network. In that latter case the test also delivers value, but the results are deceptive: huge holes are uncovered, with huge holes probably still remaining.

The perimeter has now shifted. User workstation subnets are rightly being seen by many as having been owned by the bad guys, with the result that the perimeter has moved into RFC 1918 private address space. So now there can also be an emphasis on penetration testing of critical infrastructure from user workstation subnets. But again, lack of knowledge of internal configurations and controls just won’t do. Whatever resources are devoted to having external third parties do penetration testing will have been wasted if there is little awareness of internal networks on the part of the testing subject. There at least has to be detailed awareness of available OS and database security controls and the degree to which they have been applied. Application security – that’s a story for another day, and one which doesn’t massively affect any of my conclusions here.

Of course there’s a gaping hole in this story. I have spoken of skill levels on the part of the penetration testing analysts and the analysts on the side of the testing subject. But how do we know who is qualified and who is not? Well, this is the root of all our problems today. Without an answer to that question there is no trust. Without a workable accreditation structure, testers can fail to find any reportable findings and accordingly be labelled clowns. Believe it or not, there is a simple solution here, but that too is too wide a subject to cover now…more on that one later!