What Is Your VA Scanner Really Doing?

It’s clear from social media and first-hand reports that awareness of what VA (Vulnerability Assessment) scanners are really doing in testing scenarios is quite low. So I set up a test box with Ubuntu 18 and exposed some services which are well known to the hacker community and also still popular in production business use cases: Secure Shell (SSH) and an Apache web service.

This post isn’t an attack on VA products at all. It’s aimed at setting a healthier expectation, and I will cover a test scenario with a packet sniffer (Wireshark), Nessus Professional, and OpenVAS that illustrates the point.

I became aware 20 years ago, from validating VA scanner output, that a lot of what VA scanners barf out is alarmist (red flags, CRITICAL [fix NOW!]) and also based purely on guesswork – when the scanner “sees” a service, it grabs a service banner (e.g. “OpenSSH 7.6p1 Ubuntu 4ubuntu0.3”), looks in its database for publicly disclosed vulnerabilities in that version, and flags a vulnerability if there are any associated CVEs. Contrary to popular belief, there is no further interaction in the way of investigating or validating the vulnerability. All vulnerability reporting is based on the service banner. So if I change my banner to “hi OpenVAS”, nothing will be reported. And in security, we like to advise hiding product names and versions – this helps with drive-by style automated attacks, in a much more effective way than, for example, changing default service ports.
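
As a minimal sketch of what that banner grab amounts to (Python, with a placeholder address standing in for the test box described below), it is little more than connecting and reading the first line the service volunteers:

  import socket

  # connect to the SSH port and read the identification string the server sends first
  with socket.create_connection(("192.0.2.10", 22), timeout=5) as s:
      banner = s.recv(256).decode(errors="replace").strip()

  print(banner)  # e.g. "SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3"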

This article then demonstrates the VA scanner behaviour described above and covers developments over the past 20 years (did things improve?) with the two most commonly found scanners: Nessus and OpenVAS, which even if they are not used directly, are used indirectly (vendors in this space do not reinvent the wheel, they take existing IP – all legal I’m sure – and create their own UI for it). It was fairly well known that Nessus was the basis of most commercial VAs in the 00s, and it seems unlikely that scenario has changed a great deal.

Test Setup

So if I look at my test box setup, I see the following from the port scan results (nmap):

PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.6p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)
25/tcp open smtp Postfix smtpd
80/tcp open http Apache httpd 2.4.29 ((Ubuntu))
139/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
445/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
3000/tcp open http Apache httpd 2.4.29 ((Ubuntu))
5000/tcp open http Docker Registry (API: 2.0)
8000/tcp open http Apache httpd 2.4.29

So…naughty, naughty. Apache is not so old but I’d still expect to see some CVEs flagged, and I can say the same for the SSH service. Samba is there too in a default configuration. Samba is Linux’s implementation of MS Windows SMB (Server Message Block) and is full of holes. The Postfix mail service is also quite old, and there’s a Docker API exposed! All this would get an attacker quite excited, and indeed there are plenty of automated attack scenarios which would work here.

There were also an EOL phpMyAdmin and an EOL jQuery wrapped up in the web service.

Developments in Two Decades

So there have been some changes. For want of a better word, there’s now more honesty. In the case of OpenVAS, for a vulnerability that involves grabbing a banner and assuming vulnerability on that basis, there is a Quality of Detection (QoD) rating, which defaults to around 70%. This is a kind of probability rating for a finding not being a false positive. Interestingly, findings that involve a banner grab are way down there under 50, and most are no longer flagged as “critical”.

Nessus, for its banner-grabbed vulnerabilities, is more explicit, and its report will state: “Note that Nessus has not tested for this issue but has instead relied only on the application’s self-reported version number.”

Even 7 years ago, there would be lots of issues reported for an outdated Apache or SSH service, many of which would be wrongly flagged as CRITICAL despite not necessarily being exploitable, and the existence of the vulnerability was based only on a text banner. So these more recent VA versions are an improvement, but it’s clear the awareness out there of these issues is still quite low. The problem now is – we do want to see if services are downlevel, so please $VENDOR, don’t hide them (more on this later).

First Scan – Banners On Display

So using Wireshark, sniffing HTTP on port 80 (plain text), we have the following…

Wireshark window showing the OpenVAS interaction with the test box target

The packets highlighted in black are the only two of any interest, wherein OpenVAS used the HTTP GET method to request “/”, and received a response whose header shows the product (Apache) and version (2.4.29).

Note the Wireshark filter used (tcp.port == 80 and http). Other than the initial exchange where a banner was grabbed, there was no further interaction. This was the same for Nessus.

What was reported? Well, for OpenVAS, a handful of potential CVEs were reported but I had to lower the QoD to see them! Which is interesting. If anything this is moving the bar too far in the opposite direction. I mean, as an owner of this system, I do want to know if I am running old warez!

For Nessus, 6 Apache CVEs were reported with either critical or “high” severity. Overall, I had a similar experience to that with OpenVAS, except that to even see the Apache issues reported I had to beg the scanner with the following scan configuration setup:

  • Settings –> Assessment –> Override normal accuracy and show potential false alarms
  • Settings –> Assessment –> perform thorough tests
  • Settings –> Advanced –> safe checks enabled (and I also tried the “off” option)
  • Settings –> Advanced –> plugins –> web servers –> enabled. This is the Apache vulnerability section

For the SSH service, OpenVAS reported 3 medium issues, which is roughly what I was expecting. Nessus did not report any at all! Answers on a postcard for that one.

Banners Concealed

What was interesting was that the Secure Shell service doesn’t present an option to hide the banner any more, and on investigation, the consensus in the community seems to be that the banner is needed in some cases.

Apache however did present a banner obfuscation option. For Ubuntu 18 and Apache 2.4.29, this involved:

  • apt install libapache2-mod-security2
  • a2enmod security2
  • edit /etc/apache2/conf-available/security.conf
  • ServerTokens set to “Prod”
  • systemctl restart apache2

This setup results in the following banner for Apache: Apache httpd – so no version number.
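
A quick way to confirm what the scanner will now “see” is to read the raw Server response header – a minimal sketch in Python, with a placeholder target address:

  import urllib.request

  # print just the Server header - this is all the banner-grabbing scanners look at
  resp = urllib.request.urlopen("http://192.0.2.10/", timeout=5)
  print(resp.headers.get("Server"))  # "Apache/2.4.29 (Ubuntu)" before, just "Apache" after ServerTokens Prod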

The outcome? As expected, all mention of Apache has now ended. Neither OpenVAS nor Nessus reported anything to do with Apache of any note.

What DID The Scanners Find?

Just to summarise the findings when the banners were fully on display…it wasn’t a blank slate. There were some findings. Here are the highlights – for OpenVAS:

  • All Critical issues detected were related to phpMyAdmin, plus one related to jQuery being EOL, but not stating any particular vulnerability. These version numbers are remotely queryable and this is the basis on which these issues were reported.
  • The SSH and Apache issues.
  • Other lower criticality issues were around certificate ciphers.
  • Some CVSS 6 (medium) issues with Samba – again these are banner-grabbed guesswork findings.

Nessus didn’t report anything outside of what OpenVAS flagged. OpenVAS reported significantly more issues.

It should be said that both scanners did a lot of querying for HTTP application layer issues that could be seen in the packet sniffer output. For example, queries were made for Python/Django settings.py (database password), and other HTTP gotchas.

Unauthenticated Versus Credentialed Testing

With VA scanners, the picture hasn’t really changed in 20 years. If anything the picture is worse now because the balance with banner-grabbing guesswork has swung too far the other way, and we have to plead with the scanners to tell us about downlevel software versions. This is presumably an effort to reduce the number of false positives, but it’s not an advisable strategy. It’s perfectly OK to let us know we are running old wares, and if we want, we should be able to see the CVEs associated with our listening services, even if many of them are false positives (and I can say from 20 years of network penetration testing, there will be plenty).

With this type of unauthenticated VA scanning though, the real problem has always been false negatives (to the extent that an open Docker API wasn’t flagged as a problem by either scanner), but none of the other commercial tools out there (I have tried a few in recent years) will be in a better position, because there is a hard limit on what can be achieved remotely without administrative authentication credentials.

Both Nessus and OpenVAS allow credentialed testing but it’s clear this aspect was never a part of the core design. Nessus has expanded its portfolio of credentialed tests but in the time allocated I could not get it to work with SSH public key authentication. In any case, a CIS benchmark approach will always be not-so-great, for reasons outside the scope of this article. We also have to be careful about where authentication credentials are stored. In the case of SSH keys, this means storing a private key, and with some vendors the key will be stored in their cloud somewhere out there.

Conclusion

This post focusses on one major aspect of VA scanning: grabbing banners and reporting on vulnerability based on the findings from the banner. This is better than nothing, but its futility is hopefully illustrated here, and this approach is core to most of what VA scanners do for us.

The market priority has always been towards unauthenticated scanning. Little focus was ever given to credentialed scanning. This has to change because the unauthenticated approach is like trying to diagnose a problem with your car without ever lifting the bonnet/hood, and moreover we could be moving into an era where accreditation bodies mandate credentialed scanning.

Fintechs and Security – Part Three

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Threat and Vulnerability Management (TVM) – Other Layers

This article covers the key principles of vulnerability management for cloud, devops, and devsecops, and herein addresses the challenges faced by fintechs.

The previous post covered TVM from the application security point of view, but what about everything else? Being cloud and “dynamic”, even with Kubernetes and the mythical Immutable Architecture, doesn’t mean you don’t have to worry about the security of the operating systems and many devices in your cloud. The devil loves to hear claims to the effect that devops never SSHs to VM instances. And does SaaS help? Well that depends if SaaS is a good move – more on that later.

Fintechs are focussing on application security, which is good, but not so much on the security of other areas such as containers and IaaS/SaaS VMs, and little thought is ever given to the supply of patches and container images (they need to come from an integral source – preferably not involving pulling from the public Internet – and the patches and images need to be checked for integrity themselves).

And in general with vulnerability assessment (VA), we in infosec are still battling a popular misconception, which after a quarter of a century is still a popular misconception – and that is the value, or lack thereof, of unauthenticated scanners such as OpenVAS and Nessus. More on this later.

The Overall Approach

The design process for a TVM capability was covered in Part One. Capabilities are people, process, and technology. They’re not just technology. So the design of TVM is not as follows: stick an OpenVAS VM in a VPC, fill it with target addresses, send the auto-generated report to ops. That is actually how many fintechs see the TVM challenge, or they just see it as being a purely application security show.

So there is a vulnerability reported. Is it a false positive? If not, then what is the risk? And how should the risk be treated? In order to get a view of risk, security professionals with an attack mindset need to know:

  • the network layout and data flows – think from the point of view of an attacker – so for example if a front end web micro-service is compromised, what can the attacker do from there? Can they install recon tools such as a port scanner or sniffer locally and figure out where the back end database is? This is really about “trust relationships”. That widget that routes connections may in itself seem like a device that isn’t worthy of attention, but it routes connections to a database hosting crown jewels…you can see it’s an important device and its configuration needs some intense scrutiny.
  • the location and sensitivity of critical information assets.
  • The ease and result of an exploit – how easy is it to gain a local shell presence and then what is the impact?

The points above should ideally be covered as part of threat modelling, which is carried out before any TVM capability design is drafted.

If the engineer or analyst or architect has experience in CTF or simulated attack, they are in a good position to speak confidently about risk.

Types of Tool

I covered appsec tools in part two.

There are two types: unauthenticated scanners, and credentialed (or authenticated) scanners.

Many years ago I was an analyst running VA scans as part of an APAC regional accreditation service. I was using Nessus mostly but some other tools also. To help me filter false positives, I set up a local test box with services like Apache, Sendmail, etc, pointed Nessus at the box, then used Ethereal (now Wireshark) to figure out what the scanner was actually doing.

What became abundantly obvious with most services is that the scanner wasn’t actually doing anything. It grabs a service banner and then…nothing. Tumbleweed.

I thought initially there was a problem with my setup but soon eliminated that doubt. There are a few cases where the scanner probes for more information, but those automated efforts are somewhat ineffectual, and in many cases the test that is run, and then the processing of the result, show a lack of understanding of the vulnerability. A false negative is likely to result, or at best a false positive. The scanner sees a text banner response such as “apache 2.2.14”, looks in its database for publicly disclosed vulnerabilities in that version, then barfs it all out as CRITICAL, red colour, etc.

Trying to assess vulnerability of an IaaS VM with unauthenticated VA scanners is like trying to diagnose a problem with your car without ever lifting the hood/bonnet.

So this leads us to credentialed scanners. Unfortunately the main players in the VA space pander to unauthenticated scans. I am not going to name vendors here, but it’s clear the market is poorly served in the area of credentialed scanning.

It’s really very likely that sooner rather than later, accreditation schemes will mandate credentialed scanning. It is slowly but surely becoming a widespread realisation that unauthenticated scanners are limited to the above-mentioned testing methodology.

So overall, you will have a set of Technical Security Standards for different technologies such as Linux, Cisco IOS, Docker, and some others. There are a variety of tools out there that will get part of the job done with the more popular operating systems and databases. But in order to check compliance to your Technical Security Standards, expect to have to bridge the gap with your own scripting. With SSH this is entirely feasible. With Windows, it is harder, but check Ansible and how it connects to Windows with Python.
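
As a minimal sketch of that kind of gap-bridging over SSH (assuming Python with the paramiko library, public-key access for an audit account, and placeholder hostnames, key paths, and a PermitRootLogin check – none of which are prescriptive):

  import paramiko

  EXPECTED = "no"  # value required by the (hypothetical) Technical Security Standard

  client = paramiko.SSHClient()
  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
  client.connect("192.0.2.20", username="audit", key_filename="/home/audit/.ssh/id_ed25519")

  # pull one configuration item and compare it with the standard
  stdin, stdout, stderr = client.exec_command("grep -i '^PermitRootLogin' /etc/ssh/sshd_config")
  out = stdout.read().decode().strip()
  value = out.split()[-1].lower() if out else "unset"
  print("PASS" if value == EXPECTED else f"FAIL (PermitRootLogin={value})")
  client.close()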

Asset Management

Before you can assess for vulnerability, you need to know what your targets are. Thankfully Cloud comes with fewer technical barriers here. Of course the same political barriers exist as in the on-premise case, but the on-premise case presents many technical barriers in larger organisations.

Google Cloud has a built-in asset inventory feature, and with AWS, each AWS service (e.g. Amazon EC2, Amazon S3) has its own set of API calls and each Region is independent. AWS Config is highly useful here.
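
As a rough sketch of the AWS side (assuming Python with boto3 and credentials already configured – the starting region and printed fields are just illustrative), enumerating EC2 instances region by region looks something like this:

  import boto3

  # regions have to be enumerated explicitly - each one is queried independently
  ec2 = boto3.client("ec2", region_name="eu-west-1")
  regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

  for region in regions:
      client = boto3.client("ec2", region_name=region)
      for reservation in client.describe_instances()["Reservations"]:
          for instance in reservation["Instances"]:
              print(region, instance["InstanceId"], instance.get("PrivateIpAddress"))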

SaaS

I covered this issue in more detail in a previous post.

Remember the old times of on-premise? Admins were quite busy managing patches and other aspects of operating systems. There were not too many cases where a server went without being accessed by an admin for more than a few weeks. There were incompatibilities, and patch installs often came with some banana skins around dependencies.

The idea with SaaS is you hand over your operating systems to the CSP and hope for the best. So no access to SMB, RDP, or SSH. You have no visibility of patches that were installed, or not (!), and you have no idea which OS services are enabled or not. If you ask your friendly CSP for more information here, you will not get a reply, and if you do they will remind you that you handed over your 50-million-lines-of-source-code OSes to them.

Here’s an example – one variant of the Conficker virus used the Windows ‘at’ scheduling service to keep itself persistent. Now cloud providers don’t know if their customers need this service or not. So – they err on the side of danger and assume that they do. They will leave it enabled to start at VM boot up.

Note also that SaaS instances will be invisible to credentialed VA scanners. The tool won’t be able to connect over SSH/RDP.

I am not suggesting for a moment that SaaS is bad. The cost benefits are clear. But when you moved to cloud, you saved on managing physical data centers. Perhaps consider that also saving on management of operating systems may be taking it too far.

Patching

Don’t forget patching and look at how you are collecting and distributing patches. I’ve seen some architectures where the patching aspect is the attack vector that presents the highest danger, and there have been cases where malicious code was introduced as a result of poor patching.

The patches need to come from an integral source – this is where DNSSEC can play a part, but be aware of its limitations – e.g. update.microsoft.com does not present a ‘dnskey’ Resource Record. Vendors sometimes provide a checksum or PGP signature.
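
Where a checksum is published, the verification step is trivial to script – a minimal Python sketch (the file name and expected digest are placeholders, and the published value must of course be fetched out of band from the vendor):

  import hashlib
  import sys

  def sha256sum(path, chunk_size=65536):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(chunk_size), b""):
              h.update(chunk)
      return h.hexdigest()

  EXPECTED = "<vendor-published sha256 hex digest>"  # obtained via a separate trusted channel
  if sha256sum("patch.tar.gz") != EXPECTED:
      sys.exit("checksum mismatch - do not distribute this patch")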

Some vendors do not present any patch integrity checksums at all and will force users to download a tarball. This is far from ideal and a workaround will be critical in most cases.

Red Hat has its Satellite offering, which will meet most organisations’ requirements.

For cloud, the best approach will usually be to ingress patches to a management VPC/VNet, and all instances (usually even across VPCs of differing code maturity levels) can pull from there.

Delta Testing

Doing something like scanning critical networks for changes in advertised listening services is definitely a good idea, if not for detecting hacker shells, then for picking up on unauthorised changes. There is no feasible means to do this manually with nmap, or any other port scanner – the problem is that time-outs will be flagged as a delta. Commercial offerings are cheap, allow tracking over long histories, produce no false positives, and allow you to create your own groups of addresses.
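
Just to illustrate the delta concept itself, here is a toy Python sketch with made-up addresses – it deliberately ignores the hard part, which is telling a genuine change from a scan time-out:

  # compare today's open-port observations with a stored baseline
  baseline = {("10.0.0.5", 22), ("10.0.0.5", 443), ("10.0.0.9", 3306)}
  current  = {("10.0.0.5", 22), ("10.0.0.5", 443), ("10.0.0.5", 8080), ("10.0.0.9", 3306)}

  new_listeners  = current - baseline   # possible unauthorised change or hacker shell
  gone_listeners = baseline - current   # could be a real change, or just a time-out

  for host, port in sorted(new_listeners):
      print(f"NEW listener: {host}:{port}")
  for host, port in sorted(gone_listeners):
      print(f"GONE (verify - may be a time-out): {host}:{port}")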

Penetration Testing

There’s ideal state, which for most orgs is going to be something like mature vulnerability management processes (this is vulnerability assessment –> deduce risk with vulnerability –> treat risk –> repeat), and the red team pen test looks for anything you may have missed. Ideally, internal sec teams need to know pretty much everything about their network – every nook and cranny, every switch and firewall config, and then the pen test perhaps tells them things they didn’t already know.

Without these VM processes, you can still pen test, but the test will be something like this: you find 40 of the 1000 holes in the sieve. But it’s worse than that, because those 40 holes will be back in 2 years.

There can be other circumstances where the pen test by independent 3rd party makes sense:

  • Compliance requirement.
  • It’s better than nothing at all – i.e. you’re not even doing VA scans, let alone credentialed scans.

Wrap-up

  • It’s far from all about application security. This area was covered in part two.
  • Design a TVM capability (people, process, technology), don’t just acquire a technology (Qualys, Rapid7, Tenable SC, etc), fill it with targets, and that’s it.
  • Use your VA data to formulate risk, then decide how to treat the risk. Repeat. Note that CVSS ratings are not particularly useful here. You need to ascertain risk for your environment, not some theoretical environment.
  • Credentialed scanning is the only solution worth considering, and indeed it’s highly likely that compliance schemes will soon start to mandate credentialed scanning.
  • Use a network delta tester to pick up on hacker shells and unauthorised changes in network services and firewalls.
  • Being dynamic with Kubernetes and microservices has not yet killed your platform risk or the OS in general.
  • SaaS may be a step too far for many, in terms of how much you can outsource.
  • When you SaaS’ify a service, you hand over the OS to a CSP, and also remove it from the scope of your TVM VA credentialed scanning.
  • Penetration testing has a well-defined place in security, which isn’t supposed to be one where it is used to inform security teams about their network! Think compliance, and what ideal state looks like here.

Fintechs and Security – Part Two

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Threat and Vulnerability Management (TVM) – Application Security

This part covers some high-level guiding points related to the design of the application security side of TVM (Threat and Vulnerability Management), and the more common pitfalls that plague lots of organisations, not just fintechs. I won’t be covering different tools in the SAST or DAST space apart from one known-good. There are some decent SAST tools out there but none really stand out. The market is ever-changing. When I ask vendors to explain what they mean by [new acronym], what usually results is nothing, or a blast of obfuscation. So I’m not here to talk about specific vendor offerings, especially as the SAST challenge is so hard to get even close to right.

With vulnerability management in general, ${VENDOR} has succeeded in fouling the waters by claiming to be able to automate vulnerability management. This is nonsense. Vulnerability assessment can to some limited degree be automated with decent results, but vulnerability management cannot be automated.

The vulnerability management cycle has also been made more complicated by GRC folk who will present a diagram representing a cycle with 100 steps, when really it’s just assess –> deduce risk –> treat risk –> GOTO 1. The process is endless, and in the beginning it will be painful, but if handled without redundant theory, acronyms-for-the-sake-of-acronyms-for-the-same-concept-that-already-has-lots-of-acronyms, rebadging older concepts with a new name to make them seem revolutionary, or other common obfuscation techniques, it can be integrated as an operational process fairly quickly.

The Dawn Of Application Security

If you go back to the heady days of the late 90s, application security was a thing, it just wasn’t called “application security”. It was called penetration testing. Around the early 2000s, firewall configurations improved to the extent that in a pen test, you would only “see” port 80 and/or 443 exposing a web service on Apache, Internet Information Server, or iPlanet (those were the days – buffer overflow nirvana). So with other attack channels being closed from the perimeter perspective, more scrutiny was given to web-based services.

Attackers realised you could subvert user input by intercepting it with a proxy, modifying some fields, perhaps injecting some SQL or HTML, and seeing output that perhaps you wouldn’t expect to see as part of the business goals of the online service.

At this point the “application security” world was formed and vulnerabilities were classified and given new names. The OWASP Top Ten was born, and the world has never been the same since.

SAST/DAST

More acronyms have been invented by ${VENDOR} since the early, pre-Holocene days of appsec, supposedly representing “brand new” concepts such as SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing), which are the new equivalents of white box and black box testing respectively. The basic difference is about access to the source code. SAST is source code testing, while DAST is an approach that involves testing for OWASP-type vulnerabilities while the software is running and accepting client connection requests.

The SAST scene is one that has been adopted by fintechs in more recent times. If you go back 15 years, you would struggle to find any real commercial interest in doing SAST – so if anyone ever tells you they have “20” or even “10” years of SAST experience, suggest they improve their creativity skills. The general feeling, not unjustified, was that for a large, complex application, assessing thousands of lines of source code at the cost of a vital organ per day couldn’t be justified.

SAST is a more common requirement these days. Why is that? The rise of fintechs, whose business is solely about generation of applications, is one side of it, and fintechs can (and do) go bust if they suffer a breach. Also – ${VENDOR}s have responded to the changing appsec landscape by offering “solutions”. To be fair, the offerings ARE better than 10 years ago, but it wouldn’t take much to better those Hello World scripts. No but seriously, SAST assessment tools are raved about by Gartner and other independent sources, and they ARE better than offerings from the Victorian era, but only in certain refined scenarios and with certain programming languages.

If it were possible to comprehensively assess lots of source code for vulnerability and get accurate results, then theoretically DAST would be harder to justify as a business undertaking. But as it is, SAST + DAST, despite the extensive resources required to do this effectively, can be justified in some cases. In other cases it can be perfectly fine to just go with DAST. It’s unlikely ever to be OK to just go with SAST, because of the scale of the task with complex apps.

Another point here – I see some fintechs using more than one SAST tool, from different vendors. There’s usually not much to gain from this. Some tools are better with some programming languages than others, but there is nothing cast in stone or any kind of majority view here. The cost of going with multiple vendors is likely going to be harder and harder to justify as time goes on.

Does Automated Vulnerability Assessment Help?

The problem of appsec is still too complex for decent returns from automation. Anyone who has ever done any manual testing for issues such as XSS knows the vast myriad of ways in which such issues can manifest. The blackbox/blind/DAST scene is still not much more than Burp and Dirbuster, and even then it’s mostly still manual testing with proxies. Don’t expect to cover all OWASP Top 10 issues for a complex application that presents an admin plus a user interface, even in a two-week engagement with four analysts.

My preferred approach is still Fred Flintstone’y, but since the automation just isn’t there yet, maybe it’s the best approach? This needs to happen when an application is still in the conceptual white board architecture design phase, not a fully grown [insert Hipster-given-name], and it goes something like this: white board, application architect – zero in on the areas where data flows involve transactions with untrusted networks or users. Crypto/key management is another area to zoom in on.

Web Application Firewall

The best thing about WAFs is that they only allow propagation of the most dangerous attacks. But seriously, WAF can help, and in some respects, given the above-mentioned challenges of automating code testing, you need all the help you can get, but you need to spend time teaching the WAF about your expected URL patterns and tuning it – this can be costly. A “dumb” default-configured WAF can probably catch drive-by type issues for publicly disclosed vulnerabilities as long as you keep it updated. A lot depends on your risk profile, but note that you don’t need a security engineer to install a WAF and leave it in default config. Pretty much anyone can do this. You _do_ need an experienced security engineer or two to properly understand an application and configure a WAF accordingly.

Python and Ruby – Web Application Frameworks

Web application frameworks such as Ruby on Rails (RoR) and Django are in common usage in fintechs, and are, at least in some cases, developed with security in mind in that they offer developers protections that are on by default. For example, with Django, if you design an HTML form for user input, validation will have been automagically configured on the server side, depending on the model field type. So an email address will be validated client- and server-side as an email address. Most OWASP issues are the result of failures to validate user input on the server side.

Note also though that with Django you can still disable HTML tag filtering of user input with a “| safe” in the template. So it’s dangerous to assume that all user input is sanitised.

In Django Templates you will also see a CSRF token as a hidden form field if you include a Form object in your template.
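
Tying those three points together, here is a minimal Django sketch (the form and template names are illustrative, not from any particular codebase):

  # forms.py - EmailField gives server-side validation for free when is_valid() is called
  from django import forms

  class ContactForm(forms.Form):
      email = forms.EmailField()                 # rejected server-side if not an email address
      message = forms.CharField(max_length=500)

  # template fragment (Django template language):
  #   <form method="post">
  #     {% csrf_token %}          <!-- rendered as a hidden form field -->
  #     {{ form.as_p }}
  #   </form>
  #
  # and the footgun mentioned above - "|safe" disables auto-escaping of user input:
  #   {{ user_supplied_text|safe }}   <!-- avoid with untrusted data -->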

The point here is – the root of all evil in appsec is missing server-side validation, and much of your server-side validation effort in development will be covered by default if you go with RoR or Django. That is not the end of the story though with appsec and Django/RoR apps. Vulnerability of the host OS and applications can be problematic, and it’s far from the case that use of either Django or RoR as a dev framework eliminates the need for DAST/SAST. However the effort will be significantly reduced as compared to the C/Java/PHP cases.

Wrap-up

Overall I don’t want to take too much time bleating about this topic because the takeaway is clear – you CAN take steps to address application security assessment automation and include the testing as part of your CI/CD pipeline, but don’t expect to catch all vulnerabilities or even half of what is likely an endless list.

Expect that you will be compromised and plan for it – this is cheaper than spending zillions (e.g. by going with multiple SAST tools, as I’ve seen plenty of times) on solving an unsolvable problem – just don’t let an incident result in a costly breach. This is the purpose of security architecture and engineering. It’s more about dealing with the consequences of an initial exploit of an application security fail than about eliminating vulnerability.

#WannaCry and The Rise and Fall of the Firewall

The now infamous WannaCry ransomware outbreak was the most widespread malware outbreak since the early 2000s. There was a very long gap between the early 2000s “worm” outbreaks (think Sasser, Blaster, etc) and this latest 2017 WannaCry outbreak. The usage of the phrase “worm” was itself widespread, especially as it was included in CISSP exam syllabuses, but then it died out. Now it’s seeing a resurgence that started last weekend – but why? Why is the worm turning for the worm (I know – it’s bad – but it had to go in here somewhere)?

As far as WannaCry goes, there have been some interesting developments over the past few days – contrary to popular belief, it did not affect Windows XP; the most commonly affected version was Windows 7; and according to some experts, the leading suspect in the case is the Lazarus Group with ties to North Korea.

But this post is not about WannaCry. I’m going to say it: I used WannaCry to get attention (and with this statement I’m already more honest than the numerous others who jumped on the WannaCry bandwagon, including our beloved $VENDOR). But I have been meaning to cover the rise and fall of the firewall for some time now, and this instance of a widespread and damaging worm, one that spreads by exploiting poor firewall configurations, brought this forward by a few months.

A worm is malware that “uses a computer network to spread itself, relying on security failures on the target computer”. If we think of malware delivery and propagation as two different things – lots of malware since 2004 used email (think Phishing) as a delivery mechanism but spread using an exploit once inside a private network. Worms use network propagation to both deliver and spread. And that is the key difference. WannaCry is without doubt a worm. There is no evidence to suggest WannaCry was delivered on the back of successful Phishing attacks – as illustrated by the lack of WannaCry home user victims (who sit behind the protection of NAT’ing home routers). Most of the early WannaCry posts were covering Phishing, mostly out of disbelief that Server Message Block ports would ever be exposed to the public Internet.

The infosec sector is really only 20 years old in terms of the widespread adoption of security controls in larger organisations. So we have only just started to have a usable, relatable history in infosec. Firewalls are still, in 2017, the security control that delivers the most value for investment, and they’ve been around since day one. But in the past 20 years I have seen firewall configurations go through a spectacular rise in the early 2000s, to a spectacular fall a decade later.

Late 90s Firewall

If we’re talking late 90s, even with some regional APAC banks, you would see huge swaths of open ports in port scan results. Indeed, a firewall to many late 90s organisations was as in the image to the left.

However – you can ask a firewall what it is, even a “Next Gen” firewall, and it will answer “I’m a firewall, I make decisions on accepting or rejecting packets based on source and destination addresses and services”. Next Gen firewall vendors tout the ability of firewalls to do layer 7 DPI stuff such as IDS, WAF, etc, but from what I am hearing, many organisations don’t use these features for one reason or another. Firewalls are quite a simple control to understand, and organisations got the whole firewall thing nailed quite early on in the game.

When we got to 2002 or so, you would scan a perimeter subnet and only see VPN and HTTP ports. Mind you, egress controls were still quite poor back then, and continue to be lacking to the present day, as is also the case with internal firewalls other than a DMZ (if there are any). 2002 was also the year when application security testing (OWASP type vulnerability testing) took off, and I doubt it would ever have evolved into a specialised area if organisations had not improved their firewalls. Ultimately organisations could improve their firewalls but they still had to expose web services to the planet. As Marcus Ranum said, when discussing the “ultimate firewall”, “You’ll notice there is a large hole sort of in the centre [of the ultimate firewall]. That represents TCP Port 80. Most firewalls have a big hole right about there, and so does mine.”

During testing engagements for the next decade, it was the case that perimeter firewalls would be well configured in the majority of cases. But then we entered an “interesting” period. It started for me around 2012. I was conducting a vulnerability scan of a major private infrastructure facility in the UK…and “what the…”! RDP and SMB vulnerabilities! So the target organisation served a vital function in national infrastructure and they exposed databases, SMB, and terminal services ports to the planet. In case there’s any doubt – that’s bad. And since 2012, firewall configs have fallen by the wayside.

WannaCry is delivered and spreads using an SMB vulnerability, as did Blaster and Sasser all those years ago. If we look at Shodan results for Internet exposure of SMB, we find 1.5 million cases. That’s a lot.

So how did we get here? Well there are no answers born out of questionnaires and research, but I have my suspicions:

  • All the talk of “Next Generation” firewalls and layer 7 has led to organisations taking their eye off the ball when it comes to layer 3 and 4.
  • All the talk of magic $VENDOR snake oil silver bullets in general has led organisations away from the basics. Think APT-Buster ™.
  • All the talk of outsourcing has led some organisations, as Dr Anton Chuvakin said, to outsource thinking.
  • Talk of “distortion” of the perimeter (as in “in this age of mobile workforces, where is our perimeter now?”). Well the perimeter is still the perimeter – the clue is in the name. The difference is now there are several perimeters. But WannaCry has reminded us that the old perimeter is still…yes – a perimeter.
  • There are even some who advocated losing the firewall as a control, but one of the rare success stories for infosec was the subsequent flaming of such opinions. BTW when was that post published? Yes – it was 2012.

So general guidelines:

  • The Internet is an ugly place with lots of BOTs and humans with bad intentions, along with those who don’t intend to be bad but just are (I bet there are lots of private org firewall logs which show connection attempts of WannaCry from other organisations).
  • Block incoming for all ports other than those needed as a strict business requirement. Default-deny is the easiest way to achieve this.
  • Workstations and mobile devices can happily block all incoming connections in most cases.
  • Egress is important – also discussed very eloquently by Dave Piscitello. It’s not all about ingress.
  • Other pitfalls with firewalls involve poor usage of NAT and those pesky network dudes who like to bypass inner DMZ firewalls with dual homing.
  • Watch out for connections from any internal subnet from which human-used devices derive to critical infrastructure such as databases. Those can be blocked in most cases.
  • Don’t focus on WannaCry. Don’t focus on Ransomware. Don’t focus on malware. Focus on Vulnerability Management.

So then perimeter firewall configurations, it seems, go through the same cycles that economies and seasonal temperature variations go through. When will the winter pass for firewall configurations?

Clouds and Vulnerability Management

In the world of Clouds and Vulnerability Management, based on observations, it seems like a critical issue has slipped under the radar: if you’re running with PaaS and SaaS VMs, you cannot deliver anything close to a respectable level of vulnerability management with these platforms. This is because to do effective vulnerability management, the first part of that process – the vulnerability assessment – needs to be performed with administrative access (over SSH/SMB), and with PaaS and SaaS, you do not, as a customer, have such access (this is part of your agreement with the cloud provider). The rest of this article explains this issue in more detail.

The main reason for the clouding (sorry) of this issue is what is still, after 20+ years, a fairly widespread lack of awareness of the ineffectiveness of unauthenticated vulnerability scanning. More and more security managers are becoming aware that credentialed scans are the only way to go. However, with a lack of objective survey data available, I can only draw on my own experiences. See – I’m one of those disgraceful contracting/consultant types, been doing security for almost 20 years, and been intimate with a good number of large organisations, and with each year that passes I can say that more organisations are waking up to the limitations of unauthenticated scanning. But there are also still lots more who don’t clearly see those limitations.

The original Nessus from the late 90s, now with Tenable, is a great product in terms of doing what it was intended to do. But false negatives were never a concern with the design of Nessus. OpenVAS is still open source and available, and it is also a great tool from the point of view of doing what it was intended to do. But if these tools are your sole source of vulnerability data, you are effectively running blind.

By the way, Tenable do offer a product that covers credentialed scans for enterprises, but I have not had any hands-on experience with this tool. I do have hands-on experience with the other market leaders’ products. By and large they all fall some way short, but that’s a subject for another day.

Unauthenticated scanners all do the same thing:

  • port scan to find open ports
  • grab service banners – this is the equivalent of nmap -sV, and in fact as most of these tools use nmap libraries, it is _exactly_ that
  • let’s say our tool finds Apache httpd 2.4.x: it looks in its database of publicly disclosed vulnerabilities for that version of Apache, and spews out everything it finds (a toy illustration of this lookup step follows below). The tools generally do little in the way of actually probing with HTTP methods, for example, and they certainly were not designed to try, for example, a buffer overflow exploit attempt. They report lots of ‘noise’ in the way of false positives, but false negatives are the real concern.
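
To make that lookup step concrete, here is a deliberately naive Python sketch – the CVE table is just an excerpt for illustration, not a real feed, and no probing or validation happens at any point:

  # given a grabbed banner string, pattern-match the version against a local table of CVEs
  KNOWN_ISSUES = {
      "Apache/2.4.29": ["CVE-2019-0211", "CVE-2019-0220"],
      "OpenSSH_7.6p1": ["CVE-2018-15473"],
  }

  def findings_from_banner(banner):
      return [(version, cves) for version, cves in KNOWN_ISSUES.items() if version in banner]

  print(findings_from_banner("Server: Apache/2.4.29 (Ubuntu)"))
  # -> [('Apache/2.4.29', ['CVE-2019-0211', 'CVE-2019-0220'])]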

So really the tools are doing a port scan, and then telling you you’re running old warez. Conficker is still very widespread and is the ultimate player in the ‘Pee’ arena (the ‘Pee’ in APT). An unauthenticated scanner doesn’t have enough visibility ‘under the hood’ to tell you if you are going to be the next Conficker victim, or the next ransomware victim. Of the Linux vulnerabilities reported in the past few years – e.g. Heartbleed, Ghost, DirtyCOW – very few can be detected with an unauthenticated scanner, and none of these three examples can.

Credentialed scanning really is the only way to go. Credentialed scanners are configured with root/administrative access to targets and are therefore in a position to ‘see’ everything.

The Connection With PaaS and SaaS

So how does this all relate to Cloud? Well, there are two of the three cloud types where a lack of access to the operating system command shell becomes a problem – and from this description it’s fairly clear these are PaaS and SaaS.

Two common delusions abound in this area:

  • [Cloud maker] handles platform configuration and therefore vulnerability for me, so that’s ok, no need to worry:
    • Cloud makers like AWS and Azure will deal with patches, but concerns in security are much wider and operating systems are big and complex. No patches exist for 0days, and in space, nobody can hear you scream.
    • Many vulnerabilities arise from OS configuration aspects that cannot be removed with a patch – e.g. Conficker was mentioned above: some Conficker versions (yes, it’s managed very professionally) use ‘at’ job scheduling to remain present even after MS08-067 is patched. If for example you use Azure, Microsoft manage your PaaS and SaaS but they don’t know if you want to use ‘at’ or not. It’s safer for them to assume that you do want to use it, so they leave it enabled (when you sign up for PaaS or SaaS you are removed from the decision making here). The same applies to many other local services and file system permissions that are very popular with the dark side.
  • ‘Unauthenticated scanning gets me some of the way, it’s good enough’ – how much of the way does it get you? Less than half way? It’s more like 5% really. Remember it’s little more than a port scan, and you shouldn’t need a scanner to tell you you’re running old software. Certainly for critical cloud VMs, this is a problem.

With PaaS and SaaS, you are handing over the management of large and complex operating systems to cloud providers, who are perfectly justified, and also in many cases perfectly wise, in leaving open large security holes in your platforms, and as part of your agreement with them, there’s not a thing you can do about it (other than switch to IaaS or on-premise).

Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one, but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with certain core skill groups whose abilities complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill” or a product or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP TippingPoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 a skill, whereas the previous version is another skill? So if a person has experience with 7.5, they are not qualified to apply for a shop where the previous version is used? OK, this is taking it to the extreme, but I dare say there have been cases where this analogy is applicable.

How long does it take a person to get “skilled up” with HP Arcsight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again, is Symantec CCS a skill? No – it’s a product. It’s not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – then they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell before, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment, I need to supply the tool with IP addresses of targets and I need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And I don’t want to configure the same test every time I run it, so maybe this “templates” tab might be able to help me? Do I need a $4000 2-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A product such as a Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI. It has been tailored to be intuitive to use. It’s the thin layer on top of the Vulnerability Management solution. The solution itself is much bigger than this. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help too much with the bigger picture.

Vulnerability management has been around for years. Then along came commercial products, which basically just slapped a GUI on processes and practices that had existed for 20 years or more, after which the jobs market decided to call the product the solution. The product is the skill now, whereas it’s really vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself but it also is a skill that should be built-in by default: a skill that should be inherent with all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas for large volumes of intellectual capital. These are core technologies. Say a person has 5 years+ experience of working in Unix environments as a system administrator and has shown interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic idea. HP uses “flex connectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders” and so on. But guess what? I understand English, but if I don’t know what the words mean, I can check in an online dictionary. I know what a SIEM is designed to do and I get the data flows and architecture concept. Network aggregation of syslog and Windows Events is not an alien concept to me, and neither are all layers of the TCP/IP stack (a really basic requirement for all Analysts – or should be). Therefore I can adapt very easily to new tools in this space.

IPS/IDS and firewalls? Well, they’re not even very functional devices. If you have ever set up Snort or iptables you’ll be fine with whatever product is out there. Recently another Consultant and I were asked to configure a TippingPoint device. We were up and running in 10 minutes. There were a few small items that we needed to check against the product documentation. I have 15 years+ experience in the field but the other guy is new. Nonetheless he had configured another IPS product before. He was immediately up and running with the product – no problem. Of course what to configure in the rule base – that is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security – it’s not specific to a product.

I’ve seen some truly crazy job specifications. One I saw was Websense Specialist!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would be filled by a Websense “Olympian” probably. But what sort of job is that? Carpe diem my friends, carpe diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, I don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resource and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire human resource able to cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.

Information Security Careers: The Merits Of Going In-house

Job hunting in information security can be a confusing game. The lack of any standard nomenclature across the sector doesn’t help in this regard. Some of the terms used to describe open positions can be interpreted in wildly different ways. “Architect” is a good example. This term can have a non-technical connotation with some, and a technical connotation with others.

There are plenty of pros who came into security, perhaps via the network penetration testing route, who only ever worked for consultancies that provide services, mainly for businesses such as banks and telcos. The majority of such “external” services are centered around network penetration testing and application testing.

I myself started out in infosec on the consultancy path. My colleagues were whiz kids and some were well known in the field. Some would call them “hackers”, others “ethical” or “white hat” network penetration testers. This article does not cover ethics or pander to some of the verdicts that tend to be passed outside of the law.

Many Analysts and Consultants will face the decision to go in-house at some point in their careers, or remain in a service provider capacity. Others may be in-house and considering the switch to a consultancy. This post hopefully can help the decision making process.

The idea of going in-house and, for example, taking up an Analyst position with a big bank – it usually doesn’t hold much appeal with external consultants. The idea prevails that this type of position is boring or unchallenging. I also had this viewpoint, and it was largely derived from the few visions and sound bites I had witnessed behind the veil. However, what I discovered when I took up an analyst position with a large logistics firm was that nothing could be further from the truth. Going in-house can benefit one’s career immensely and open the eyes to the real challenges in security.

Of course my experiences do not apply across the whole spectrum of in-house security positions. Some actually are boring for technically oriented folk. Different organisations do things in different ways. Some just use their security department for compliance purposes with no attention to detail. However there are also plenty that engage effectively with other teams such as IT operations and development project teams.

As an Analyst in a large, complex environment, the opportunity exists to learn a great deal more about security than one could as an external consultant. An external consultant’s exposure to an organisation’s security challenges will usually only come in the form of a network or application assessment, and even if the testing is conducted thoroughly and over a period of weeks, the view will be extremely limited. The test report is sent to the client, and it’s a common assumption that all of the problems described in the report can be easily addressed. In the vast majority of cases, nothing could be further from the truth. What becomes apparent at a very early stage in one’s life as an in-house Analyst is that very few vulnerabilities can be mitigated easily.

One of the main pillars of a security strategy is Vulnerability Management. The basis of any vulnerability management program is the security standard – the document that depicts how, from a security perspective, computer operating systems, DBMS, network devices, and so on, should be configured. So an Analyst will put together a list of configuration items and compose a security standard. Next they will meet with another team, usually IT operations, in an attempt to actually implement the standard in existing and future platforms. For many, this will be the point where they realize the real nature of the challenges.

Taking an example, the security department at a bank is attempting to introduce a Redhat Enterprise Linux security standard as a live document. How many of the configuration directives can be implemented across the board with an acceptable level of risk in terms of breaking applications or impacting the business in any way? The answer is “not many”. This will come as a surprise for many external consultants. Limiting factors can come from surprising sources. Enlightened IT ops and dev teams can open security’s eyes in this regard and help them to understand how the business really functions.

The whole process of vulnerability management, minus VM product marketeers’ diatribe, is basically detection, then deduce the risk, then take decisions on how to address the risk (i.e. don’t address the vulnerability and accept the risk, or address / work around the vulnerability and mitigate the risk). But as an external consultant, one will only usually get to hand a client a list of vulnerabilities and that will be the end of the story. As an in-house Security Analyst, one gets to take the process from start to finish and learn a great deal more in the process.

For a security consultant passing beyond the iron curtain, the best thing that can possibly happen to their career is to find themselves interfacing with the enlightened ones in IT operations, network operations (there are usually a few in net ops who know their security really quite well), and application architects (and that’s where it gets to be really fun).

For the Security Consultant who has just metamorphosed into an in-house Analyst, it may well be the first time in their career that they encounter real business concerns. IT operations teams live in fear of disrupting applications that generate huge revenues per minute. That fear is palpable, and it encourages a kind of professionalism that one may never have a direct need for as an external consultant. Generally, the in-house Analyst gets to experience in detail how the business translates into applications and then into servers, databases, and data flows. The risks to information assets then seem much more real.

The internal challenge versus the external challenge in security is of course one of protection versus breaking in. Security is full of rock stars who break into badly defended customer networks and then advertise the feat from the rooftops. In between commercial tests and twittering schoolyard insults, the rock stars are preparing their next Black Hat speech with research into the latest exotic sploit technique that will never be used in a live test, because the target can easily be compromised with simple methods.

However, the rock stars get all the attention, and security is all about reversing and fuzzing, or so we hear. But the bigger challenge is not breaking in, it’s protection, and protection is a lot less exotic and sexy than breaking in. Therein lies the main disadvantage of going in-house: it could mean less attention for the gifted Analyst. For many this won’t be such an issue, because the internal job is much more challenging and interesting, and it also lights up a CV, especially if the names on it are those in banking and telecoms.

How about going full circle? How about 3 years with a service provider, then 5 years in-house, then going back to consulting? Such a consultant is indeed a powerful weapon for consultancies and adds a whole new dimension for service providers (and their portfolio of services can be expanded). In fact such a security professional would be well positioned to start their own consultancy at this stage.

So in conclusion: going in-house can be the best thing a Security Consultant can do with their career. Is going in-house less interesting? Not at all. Does it mean you will get less attention? You can probably still speak at conferences.

Scangate Re-visited: Vulnerability Scanners Uncovered

I have covered VA tools before but I feel that one year later, the same misconceptions prevail. The notion that VA tools really can be used to give a decent picture of vulnerability is still heavily embedded, and that notion in itself presents a serious vulnerability for businesses.

A more concise run-down on the functionality of VA warez may be worth a try. At least let’s give it one last shot. On second thoughts, no, don’t shoot anything.

Actually, forget “positive” or “negative” views on VAs before reading this. I am just going to present the facts based on what I know myself, and of course I’m open to logical, objective discussion. I may have missed something.

Why the focus on VA? Well, the tools are still so commonplace and heavily used and I don’t believe that’s in our best interests.

What I discovered many years ago (in 2002, in fact) was that discussions around these tools can evoke some quite emotional responses. “Emotional”, you quiz? Yes. When you think about it, whole empires have been built using these tools. The tools are widespread in security and used as the basis of corporate VM programs, and VM market revenues run at around 1 billion USD annually. Songs and poems have been written about VAs (OK, I can’t back that up), but careers have been built, and whole enterprise-level security software suites have been built on top of a nasty open source VA engine.

I presented on the subject of automation in VA all those years ago, and put forward the notion that running VA tools doesn’t carry much more value than something like this: nmap -v -sS -sV <targets>. Any Security Analyst worth their weight in spam can see open ports and service banners and quickly deduce vulnerability from this limited perspective. “Limited”, maybe, but is a typical VA tool in any better position to interrogate a target autotragically?

One pre-qualifier I need to throw out is that the type of scanner I will discuss here is the Nessus-like scanner, whose modus operandi is to scan a target by unauthenticated means. Nessus itself isn’t the main focus, but it’s the best-known and most widely used tool of this type. The others present no major advantages over Nessus; in fact, Nessus is really as good as it gets. These tools have a highly limited potential, and Nessus reaches that limit.

Over the course of my infosec career I have had the privilege of being in a position where I was coerced into using VAs extensively, and I spent many long hours investigating false positives. In many cases I set up a dummy Linux target and used a packet sniffer to deduce what the tool was doing. In summary, the findings were approximately as follows:

  • Of the thousands of tests, or “patterns”, configured in the tools, only a few have the potential to produce accurate or useful findings. Examples are SNMP community string tests and tests for plain text services (e.g. telnet, FTP).
  • The vast majority of the other tests merely grab a service “banner”. For example, the tool port scans, finds port 80 TCP open, then runs a test to grab a service banner (e.g. Apache 2.2.22, mickey mouse plug-in, bla bla). I was rather expecting the tool to do some further probing once it had found a specific service and version, but in most cases it does not.
  • The tool, having found what it thinks is a certain application layer service and version, then correlates its finding with its database of publicly disclosed vulnerabilities for the detected service (a minimal sketch of this flow follows below).
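
To make the pattern concrete, here is a minimal sketch of that banner-grab-and-correlate flow. It is not code from Nessus or any other product; the target address, the port list, and the toy CVE “database” are purely illustrative, but the connect, read, string-match sequence is essentially the whole technique.

import socket

# Toy "database" mapping banner substrings to CVE identifiers.
# The associations here are purely illustrative.
TOY_CVE_DB = {
    "OpenSSH_7.6p1": ["CVE-2018-15473"],
    "vsFTPd 2.3.4": ["CVE-2011-2523"],
}

def grab_banner(host, port, timeout=3):
    # Connect and read whatever greeting the service volunteers.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""

def assess(host, ports):
    for port in ports:
        try:
            banner = grab_banner(host, port)
        except OSError:
            continue  # port closed or filtered
        for needle, cves in TOY_CVE_DB.items():
            if needle in banner:
                # No further probing, no validation: just a string match.
                print(f"{host}:{port} {banner!r} -> flag {cves}")

assess("192.0.2.10", [21, 22, 25])

Note what is absent from the sketch: nothing attempts to confirm that the flagged condition actually exists on the target, which is exactly the limitation described above.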

Even for the plain text services, some of the tests that have the potential to reveal useful findings have been botched by the developers. For example, tests for anonymous FTP only work against one very specific flavour of FTP daemon; other daemons return different messages for a successful anonymous login, and the tool does not accommodate this.
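
For the record, a non-botched anonymous FTP check does not need to match any particular daemon’s wording at all, because the FTP response codes (and a library that understands them) already settle the question. A minimal sketch using Python’s standard ftplib, with an illustrative target address:

from ftplib import FTP, error_perm

def anonymous_ftp_allowed(host, port=21, timeout=5):
    # Attempt the anonymous login itself instead of pattern-matching
    # one specific daemon's greeting or success message.
    try:
        ftp = FTP()
        ftp.connect(host, port, timeout=timeout)
        ftp.login("anonymous", "test@example.com")
        ftp.quit()
        return True
    except error_perm:
        return False  # login refused (530 or similar)
    except OSError:
        return False  # unreachable, refused, or timed out

print(anonymous_ftp_allowed("192.0.2.10"))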

Also, what happens if a service is moved from its default port? I had some spectacular failures running Nessus against an FTP service on port 1980 TCP (it usually listens on port 21), even after trying different timing options. Nessus uses an nmap engine for port scanning, yet nmap by itself, with default settings, is usually able to find services on non-default ports.

So in summary, what the VA tools do is mostly just report that you are running ridiculous unencrypted blast-from-the-past services or old, down-level services – maybe. Really, I would hope security teams wouldn’t need to spend 25K USD on an enterprise solution to tell them this.

False positives are one thing, but false negatives are quite another. Popular magazines always report something like a 50% success rate in finding vulnerabilities in staged tests. Why is it always 50%? Remember also that the product under test is usually one from a vendor who pays for a full-spread ad in that magazine.

Putting numbers to false negatives makes little sense with huge, complex software packages of millions of lines of source code. However, it occurred to me not so long ago whilst doing some white box testing on a client’s critical infrastructure: how many of the vulnerabilities under testing could possibly be discovered by use of a VA tool? In the case of Oracle Database the answer was less than 5%. And when we’re talking Oracle, we’re usually talking critical, as in crown jewels critical.

If nothing else, the main aspect I would hope the reader would take out of this discussion is about expectation. The expectation that is set by marketing people with VA tools is that the tools really can be used to accurately detect a wide range of vulnerability, and you can bet your business on the tools by using them to test critical infrastructure. Ladies and gentlemen: please don’t be deceived by this!

Can you safely replace manual testing with use of these tools? Yes, but only if the target has zero value to the business.

 

Security in Virtual Machine Environments. And the planet.

This post is based on a recent article on the CIO.com site.

I have to say, when I read the title of the article, the cynic in me once again prevailed. And indeed there will be some cynicism and sarcasm in this article, so if that offends the reader, I would like to suggest other sources of information: those which do not accurately reflect the state of the information security industry. Unfortunately, the truth is often accompanied by at least a little cynicism. Indeed, if I meet an IT professional who isn’t cynical and sarcastic, I do find it hard to trust them.

Near the end of the article there will be a quiz with a scammed prize offering, just to take the edge off the punishment of the endless “negativity” and abject non-MBA’edness.

“While organizations have been hot to virtualize their machine operations, that zeal hasn’t been transferred to their adoption of good security practices”. Well, you see, they’re two different things. Using VMs reduces power and physical space requirements. Note the word “physical” here: being physical, the benefits are easier to understand.

Physical implies something which takes physical form – a matter energy field. Decision makers are familiar with such energy fields. There are other examples in their lives, such as tables, chairs, other people, walls, and cars. Then there is information in electronic form – that’s a similar thing (also an energy field), but the hunter/gatherer in some of us doesn’t see it that way, and, still as of 2013, the concept eludes many IT decision makers who have fought their way up through the ranks as a result of excellent performance in their IT careers (no – it’s not just because they have an MBA, or know the right people).

There is a concept at board level of insuring a building (another matter energy field) against damages from natural causes. But even when 80% of information assets are in electronic form, there is still a disconnect from the information. Come on chaps, we’ve been doing this for 20 years now!

Josh Corman recently tweeted “We depend on software just as much as steel and concrete, its just that software is infinitely more attack-able!”. Mr Corman felt the need to make this statement. OK, like most other wise men in security, he probably intended it to boost his Klout score, but one does not achieve that by tweeting stuff that everybody already knows. I would trust someone like Mr Corman to know where the gaps are in the mental portfolios of IT decision makers.

Ok, so moving on…“Nearly half (42 percent) of the 346 administrators participating in the security vendor BeyondTrust’s survey said they don’t use any security tools regularly as part of operating their virtual systems…”

What tools? You mean anti-virus and firewalls, or the latest heuristic HIDS box of shite? Call me business-friendly, but I don’t want to see endless tools on end points, regardless of their function. So if they’re not using tools, is it not at this point good journalism to say exactly which tools? Personally I want to see a local firewall and the obligatory and increasingly less beneficial anti-virus (and I do not care as to where, who, whenceforth, or which one…preferably the one where the word “heuristic” is not used in the marketing drivel on the box). Now if you’re talking system hardening and utilizing built-in logging capability – great, that’s a different story, and worthy of a cuddly toy as a prize.

“Insecure practices when creating new virtual images is a systemic problem” – it is, but how many security problems can you really eradicate at build-time and be sure that the change won’t break an application or introduce some other problem? When practical, IT-oriented security folk actually try to do this with skilled and experienced ops and devs, they realise that less than 50% of their policies can be implemented safely in a corporate build image. Other security changes need to be assessed on a per-application basis.

Forget VMs and clouds for a moment – 90%+ of firms are not rolling out effectively hardened build images for any platform. The information security world is still some way off with practices in the other VM field (Vulnerability Management).

“If an administrator clones a machine or rolls back a snapshot,”… “the security risks that those machines represent are bubbled up to the administrator, and they can make decisions as to whether they should be powered on, off or left in state.”

Ok, so “the security risks that those machines represent are bubbled up to the administrator”!!?? [Double-take] Really? Ok, so this whole security thing really can be automated then? In that case, every platform should be installed as a VM managed under VMware vCenter with the BeyondTrust plugin. A tab that can show us our risks? There has to be a distinction between vulnerability and risk here, because they are two quite different things. No, but seriously, I would want to know how those vulnerabilities are detected, because to date the information security industry still doesn’t have an accurate way of doing this for some platforms.

Another quote: “It’s pretty clear that virtualization has ripped up operational practices and that security lags woefully behind the operational practice of managing the virtual infrastructure.” I would edit that down to just two words: “security” and “lags”. What with virtualised stuff being a subset of the full spectrum of play things and all.

“Making matters worse is that traditional security tools don’t work very well in virtual environments”. In this case I would keep just five of those words. A Kenwood Food Mixer goes to the person who can guess which ones. See? Who said security isn’t fun?

“System operators believe that somehow virtualization provides their environments with security not found in the world of physical machines”. Now we’re going Twilight Zone. We’ve been discussing the inter-cluster sized gap between the physical world and electronic information in this article, and now we have this? Segmentation fault, core dumped.

Anyway, virtualization does increase security in some cases. It depends on how the VM has been configured and what type of networking config is used, but if we’re talking about virtualised servers that advertise services to port scanners, and/or SMB shares with their hosts, then clearly the virtualised aspect is suddenly very real. VM guests in a NAT’ed setup are a decent way to hide information on a laptop/mobile device, or on anything that hooks into an untrusted network (read: “corporate private network”).

The vendor who was being interviewed finished up with “Every product sounds the same,” … “They all make you secure. And none of them deliver.” If I were a vendor, I probably wouldn’t say that.

Sorry, I just find discussions of security with “radical new infrastructure” to be something of a waste of bandwidth. We have some very fundamental, ground level problems in information security that are actually not so hard to understand or even solve, at least until it comes to self-reflection and the thought of looking for a new line of work.

All of these “VM” and “cloud” and “BYOD” discussions would suddenly disappear with the introduction of integrity into our little world, because with that, the bigger problems of skills, accreditation, and therefore trust would be solved (note the lack of a CISSP/CEH dig there).

I covered the problems and solutions in detail in Security De-engineering, but you know what? The solution (chapter 11) is no big secret. It comes from the gift of intuition with which many humans are endowed. Anyway, someone had to say it; now it’s in black and white.

Hardening is Hard If You’re Doing it Right

Yes, ladies and gentlemen, hardening is hard. If it’s not hard, then there are two possibilities. One is that the maturity of information security in the organization is at such a level that security happens both effectively and transparently: it’s fully integrated into the fabric of BAU processes, and many of said processes are fully automated with accurate results. The second (far more likely given the reality of security in 2013) is that the hardening is not well implemented.

For the purpose of this diatribe, let us first define “hardening” so that we can all be reading from the same hymn sheet. When I’m talking about hardening here, the theme is one of first assessing vulnerability, then addressing the business risk presented by the vulnerability. This can apply to applications, or operating systems, or any aspect of risk assessment on corporate infrastructure.

In assessing vulnerability, if you’re following a check list, hardening is not hard – in fact a parrot can repeat pearls of wisdom from a check list. But the result of merely following a check list will be either wide open critical hosts or over-spending on security – usually the former. For sure, critical production systems will be impacted, and I don’t mean in a positive way.

You see, like most things in security, some thinking is involved. It does suit the agenda of many in this field to pretend that security analysis can be reduced to parrot-fashion recital of a check list. Unfortunately though, some neural activity is required, at least if gaining the trust of our customers (C-levels, other business units, home users, etc.) is important to us.

The actual contents of the check list should be the easy part, although unfortunately, as of 2013, we all seem to be using different versions of the check list, and some versions are appallingly lacking. The worst offenders here deliver with a quality that is inversely proportional to the prices they charge – and these are usually external auditors from big 4 consultancies, all of whom have very decent check lists, but who also fail to ensure that Consultants use said check list. There are plenty of cases where the auditor knocks up their own garage’y style shell script for testing. In one case I witnessed not so long ago, the script for testing Red Hat Enterprise Linux consisted of 6 tests (!), and one of the tests showed a misunderstanding of the purpose of the /etc/ftpusers file.
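
As an aside, since that file keeps being misread: /etc/ftpusers is a deny list for the traditional Unix FTP daemons; accounts named in it are refused FTP logins, which is why root is normally supposed to be in there. A check of it could be as simple as the sketch below (a simplified example; whether the file is consulted at all depends on which FTP daemon is actually installed).

# Sketch: verify that root is denied FTP access via /etc/ftpusers.
# Note the direction: /etc/ftpusers lists users who may NOT log in over FTP,
# which is the detail that is commonly misunderstood.
def ftp_denied_users(path="/etc/ftpusers"):
    denied = set()
    try:
        with open(path) as f:
            for line in f:
                entry = line.strip()
                if entry and not entry.startswith("#"):
                    denied.add(entry)
    except FileNotFoundError:
        pass  # no such file: possibly no FTP daemon, or one that ignores it
    return denied

if "root" in ftp_denied_users():
    print("PASS: root FTP logins are denied")
else:
    print("FAIL: root is not listed in /etc/ftpusers")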

But the focus here is not on the methods deployed by auditors, its more general than that. Vulnerability testing in general is not a small subject. I have posted previously on the subject of “manual” network penetration testing. In summary: there will be a need for some businesses to show auditors that their perimeter has been assessed by a “trusted third party”, but in terms of pure value, very few businesses should be paying for the standard two week delivery with a four person team. For businesses to see any real value in a network penetration test, their security has to be at a certain level of maturity. Most businesses are nowhere near that level.

Then there is the subject of automated, unauthenticated “scanning” techniques which I have also written about extensively, both in an earlier post and in Chapter Five of Security De-engineering. In summary, the methodology used in unauthenticated vulnerability scanning results in inaccuracy, large numbers of false positives, wasted resources, and annoyed operations and development teams. This is not a problem with any particular tool, although some of them are especially bad. It is a limitation of the concept of unauthenticated testing, which amounts to little more than pure guesswork in vulnerability assessment.

How about the growing number of “vulnerability management” products out there (which do not “manage” vulnerability; they make an attempt at assessing vulnerability)? Well, most of them are either purely an expensive graphical interface to [insert free/open source scanner name], or, if the tool was designed to make a serious attempt at accurate vulnerability assessment (most of them were not), the tests will be lacking or over-done, inaccurate, and/or the scanning will be done in an insecure way (e.g. the application is run over a public URL, with the result that all of your configuration data, including admin passwords, are held by an untrusted third party).

In one case, a very expensive VM product literally does nothing other than port scan. It is configured with hundreds of “test” patterns for different types of target (MS Windows, *nix, etc.), but if you’re familiar with your OS configurations, you will look at the tool output and be immediately suspicious. I ran the tool against a Linux and a Windows test target and “packet sniffed” the scanning engine’s probe attempts (along the lines of the sketch below). In summary, the tool does nothing. It just produces a long list of configuration items (so, effectively, a kind of Security Standard for the target) without actually testing for the existence of vulnerability.
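
Anyone can repeat that exercise. The sketch below uses scapy (the scanner address is a placeholder, and root privileges are needed for sniffing) to print a one-line summary of every packet the scanning engine sends at the target; if all that ever shows up is a stream of bare TCP connection attempts with no application-layer payloads behind them, the “tests” amount to a port scan and nothing more.

# Sketch: observe what a scanning engine actually sends to a test target.
# Requires root; the scanner address below is a placeholder.
from scapy.all import sniff

SCANNER_IP = "192.0.2.50"

def show(pkt):
    # One line per packet: enough to tell bare SYN probing apart from
    # probes that carry real application-layer payloads.
    print(pkt.summary())

sniff(filter=f"host {SCANNER_IP}", prn=show, store=False)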

So the overall principle: the company [hopefully] has a security standard for each major operating system and database on its network, and each item in the standard needs to be tested on all, or some, of the information asset hosts in the organization, depending on the overall strategy and network architecture. As of the time of writing, there will need to be some manual / scripted augmentation of automated vulnerability assessment processes.
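
By “manual / scripted augmentation” I mean nothing exotic: each line item in the standard can be expressed as a small authenticated check that runs on the host itself. A minimal sketch for one typical item follows; the file path and the “PermitRootLogin no” expectation are the usual ones for OpenSSH, but substitute whatever your own standard actually says.

# Sketch: one security standard item as an authenticated, local check.
# Item: "SSH root logins must be disabled" -> PermitRootLogin no in sshd_config.
import re

def check_permit_root_login(path="/etc/ssh/sshd_config"):
    # sshd uses the first occurrence of a directive, so stop at the first match;
    # if the directive is absent, recent OpenSSH versions default to "prohibit-password".
    value = "prohibit-password"
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            m = re.match(r"(?i)PermitRootLogin\s+(\S+)", line)
            if m:
                value = m.group(1).lower()
                break
    return ("PASS" if value == "no" else "FAIL", value)

status, value = check_permit_root_login()
print(f"{status}: PermitRootLogin is set to '{value}'")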

So, once armed with a list of vulnerabilities, what does one do with it? The vulnerability assessment is the first step. What has to happen after that? Can Security just toss the report over to ops and hope for the best? Yes, they can, but this wouldn’t make them very popular, and there also needs to be some input from security regarding the actual risk to the business. Given the typical function of operations teams (I commented on the functions of, and relationship between, security and operations in an earlier post), if there is no input from security, then every risk mitigation that carries any kind of impact will be blocked.

There are some security service providers/consultancies who offer a testing AND a subsequent hardening service. They want to offer both detection AND a solution, and this is very innovative and noble of them. However, how many security vulnerabilities can be addressed blindly without impacting critical production processes? Rhetorical question: can applications be broken by applying security fixes? If I remove the setuid bit from a root-owned X Window-related binary, it probably has no effect on business processes. Right? What if operations teams can no longer authenticate via their usual graphical interface? This is at least a little bit disruptive.
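
To put that example in concrete terms, here is a sketch that walks a directory tree and lists root-owned setuid executables, the classic hardening item in question. The line that would actually clear the bit is left commented out, because whether doing so is harmless or disruptive is precisely the judgement call under discussion.

# Sketch: list root-owned setuid executables under a directory tree.
# Clearing the bit (commented out below) is the potentially disruptive part.
import os
import stat

def find_setuid_root(top="/usr/bin"):
    for dirpath, _dirnames, filenames in os.walk(top):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID and st.st_uid == 0:
                yield path

for path in find_setuid_root():
    print("setuid root:", path)
    # os.chmod(path, os.lstat(path).st_mode & ~stat.S_ISUID)  # the "hardening" step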

In practice, as it turns out, if you look at a Security Standard for a core technology (let’s take Oracle 11g as an example), how many of its numerous elements can we say can be implemented without fear of breaking applications, limiting access for users or administrators, or generally just making trouble-shooting of critical applications a lot less efficient? The answer is: not many. Dependencies and other problems can come from surprising sources.

Who in the organization knows about dependencies and the complexities of production systems? Usually that would be IT / Network Operations. And how about application-related dependencies? That would be application architects, or, as they’re so affectionately referred to these days, just generally “dev teams”. So the point: even if security does have admin access to IT resources (rare), is the risk mitigation/hardening a job purely for security? Of course the answer is a resounding no, and the same goes for IT Operations.

So, operations and applications architects bring knowledge of the complexities of apps and infrastructure to the table. Security brings knowledge of the network architecture (data flows, firewall configurations, network device configurations), the risk of each vulnerability (how hard is it to exploit, and what is the impact?), and the importance to the business of information assets/applications. Armed with the aforementioned knowledge, informed and sensible decisions on what to do with the risk (accept, mitigate, work around, or transfer) can be made by the organization, not by security or operations alone.

The early days of deciding what to do with the risk will be slow and difficult and there might even be some feisty exchanges, but eventually, addressing the risk becomes a mature, documented process that almost melts into the background hum of the machinery of a business.