What Is Your VA Scanner Really Doing?

It’s clear from social media and first-hand reports that awareness of what VA (Vulnerability Assessment) scanners are really doing in testing scenarios is quite low. So I set up a test box with Ubuntu 18 and exposed some services which are well known to the hacker community and still popular in production business use cases: Secure Shell (SSH) and an Apache web service.

This post isn’t an attack on VA products at all. It’s aimed at setting a healthier expectation, and I will cover a test scenario with a packet sniffer (Wireshark), Nessus Professional, and OpenVAS that illustrates the point.

I became aware 20 years ago, from validating VA scanner output, that a lot of what VA scanners barf out is alarmist (red flags, CRITICAL [fix NOW!]) and also based purely on guesswork – when the scanner “sees” a service, it grabs a service banner (e.g. “OpenSSH 7.6p1 Ubuntu 4ubuntu0.3”), looks in its database for publicly disclosed vulnerabilities in that version, and flags a vulnerability if there are any associated CVEs. Contrary to popular belief, there is no actual interaction in the way of further investigating or validating the vulnerability. All vulnerability reporting is based on the service banner. So if I change my banner to “hi OpenVAS”, nothing will be reported. And in security, we like to advise hiding product names and versions – this helps against drive-by style automated attacks, in a much more effective way than, for example, changing default service ports.
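
To make that banner-grab mechanism concrete, here is a minimal sketch of the same trick – not what Nessus or OpenVAS literally run, and the target address is a placeholder – showing that the SSH “vulnerability data” amounts to one TCP connect and one read:

import socket

# Placeholder target - substitute your own test box.
HOST, PORT = "192.168.1.10", 22

with socket.create_connection((HOST, PORT), timeout=5) as s:
    # An SSH server volunteers its identification string on connect,
    # e.g. "SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3".
    banner = s.recv(256).decode(errors="replace").strip()

print(banner)
# A scanner then matches that string against its CVE database -
# no further probing of the service is involved.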

This article then demonstrates the VA scanner behaviour described above and covers developments over the past 20 years (did things improve?) with the two most commonly found scanners: Nessus and OpenVAS, which, even if they are not used directly, are used indirectly (vendors in this space do not reinvent the wheel; they take existing IP – all legal I’m sure – and create their own UI for it). It was fairly well known that Nessus was the basis of most commercial VA products in the 00s, and it seems unlikely that scenario has changed a great deal.

Test Setup

So, looking at my test box setup, the port scan results (nmap) show:

PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.6p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)
25/tcp open smtp Postfix smtpd
80/tcp open http Apache httpd 2.4.29 ((Ubuntu))
139/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
445/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
3000/tcp open http Apache httpd 2.4.29 ((Ubuntu))
5000/tcp open http Docker Registry (API: 2.0)
8000/tcp open http Apache httpd 2.4.29
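
(That is the output format of an nmap service/version detection scan – the -sV option is what populates the VERSION column.)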

So…naughty, naughty. Apache is not so old, but I’d still expect to see some CVEs flagged, and I can say the same for the SSH service. Samba is there too in a default configuration. Samba is Linux’s implementation of MS Windows SMB (Server Message Block) and is full of holes. The Postfix mail service is also quite old, and there’s a Docker API exposed! All this would get an attacker quite excited, and indeed there are plenty of automated attack scenarios which would work here.

There were also an EOL phpMyAdmin and an EOL jQuery wrapped up in the web service.

Developments in Two Decades

So there have been some changes. For want of a better word, there’s now more honesty. In the case of OpenVAS, for findings that involve grabbing a banner and assuming vulnerability on that basis, there is a Quality of Detection (QoD) rating, whose default reporting threshold is around 70%. This is a kind of probability rating for a finding not being a false positive. Interestingly, the findings that involve a banner grab are way down there under 50, and most are no longer flagged as “critical”.

Nessus, for its banner-grabbed vulnerabilities, is more explicit, and its report will state: “Note that Nessus has not tested for this issue but has instead relied only on the application’s self-reported version number.”

Even 7 years ago, there would be lots of issues reported for an outdated Apache or SSH service, many of which would be wrongly flagged as CRITICAL but were not necessarily exploitable, and the existence of the vulnerability was based only on a text banner. So these more recent VA versions are an improvement, but it’s clear the awareness out there of these issues is still quite low. The problem now is – we do want to see if services are downlevel, so please $VENDOR, don’t hide them (more on this later).

First Scan – Banners On Display

So, using Wireshark to sniff HTTP on port 80 (plain text), we have the following…

Wireshark window showing the OpenVAS interaction with the test box target

The packets highlighted in black are the only two of any interest, wherein OpenVAS used the HTTP GET method to request “/” and received a response whose Server header shows the product (Apache) and version (2.4.29).

Note the Wireshark filter used (tcp.port == 80 and http). Other than the initial exchange where a banner was grabbed, there was no further interaction. This was the same for Nessus.
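
For completeness, the entire “test” that the sniffer captured can be reproduced in a few lines of Python – a sketch only, with a placeholder address standing in for my test box:

import http.client

# Placeholder target - the test box in this article.
conn = http.client.HTTPConnection("192.168.1.10", 80, timeout=5)
conn.request("GET", "/")
response = conn.getresponse()

# With a default Ubuntu Apache install this prints something like
# "Apache/2.4.29 (Ubuntu)" - product and version, ready to be looked
# up against a CVE database.
print(response.getheader("Server"))
conn.close()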

What was reported? Well, for OpenVAS, a handful of potential CVEs were reported, but I had to lower the QoD to see them! Which is interesting. If anything this is moving the bar too far in the opposite direction. I mean, as the owner of this system, I do want to know if I am running old warez!

For Nessus, 6 Apache CVEs were reported with either “critical” or “high” severity. Overall, I had a similar experience to that with OpenVAS, except that to even see the Apache issues reported I had to beg the scanner with the following scan configuration setup:

  • Settings –> Assessment –> Override normal accuracy and show potential false alarms
  • Settings –> Assessment –> perform thorough tests
  • Settings –> Advanced –> enable safe checks on (and I also tried the “off” option)
  • Settings –> Advanced –> plugins –> web servers –> enabled. This is the Apache vulnerability section

For the SSH service, OpenVAS reported 3 medium issues, which is roughly what I was expecting. Nessus did not report any at all! Answers on a postcard for that one.

Banners Concealed

What was interesting was that the Secure Shell service no longer presents an option to hide the banner, and on investigation, the majority-held community version of this story is that the banner is needed in some cases.

Apache, however, did present a banner obfuscation option. For Ubuntu 18 and Apache 2.4.29, this involved:

  • apt install libapache2-mod-security2
  • a2enmod security2
  • edit /etc/apache2/conf-available/security.conf
  • ServerTokens set to “Prod”
  • systemctl restart apache2

This setup results in the following banner for Apache: Apache httpd – so no version number.
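
(Re-running the little header check from the sketch earlier should now print just “Apache” – the product is still advertised, but the version is gone.)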

The outcome? As expected, all mention of Apache has now ended. Neither OpenVAS nor Nessus reported anything of any note to do with Apache.

What DID The Scanners Find?

Just to summarise the findings when the banners were fully on display…it wasn’t a blank slate. There were some findings. Here are the highlights – for OpenVAS:

  • All critical issues detected were related to phpMyAdmin, plus one related to jQuery being EOL, without stating any particular vulnerability. These version numbers are remotely queryable, and this is the basis on which these issues were reported.
  • The SSH and Apache issues.
  • Other lower-criticality issues were around TLS certificates and ciphers.
  • Some CVSS 6 (medium) issues with Samba – again, these are banner-grabbed guesswork findings.

Nessus didn’t report anything outside of what OpenVAS flagged. OpenVAS reported significantly more issues.

It should be said that both scanners did a lot of querying for HTTP application-layer issues, which could be seen in the packet sniffer output. For example, queries were made for a Python/Django settings.py (database password) and other HTTP gotchas.
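
(If you want to watch that probing yourself, a Wireshark display filter along the lines of http.request.uri contains "settings.py", combined with the tcp.port == 80 and http filter used earlier, makes those application-layer requests easy to pick out – treat the exact expression as a suggestion rather than gospel.)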

Unauthenticated Versus Credentialed Testing

With VA scanners, the picture hasn’t really changed in 20 years. If anything the picture is worse now, because the balance with banner-grabbing guesswork has swung too far the other way, and we have to plead with the scanners to tell us about downlevel software versions. This is presumably an effort to reduce the number of false positives, but it’s not an advisable strategy. It’s perfectly OK to let us know we are running old wares, and if we want, we should be able to see the CVEs associated with our listening services, even if many of them are false positives (and I can say from 20 years of network penetration testing, there will be plenty).

With this type of unauthenticated VA scanning though, the real problem has always been false negatives (to the extent that an open Docker API wasn’t flagged as a problem by either scanner), but none of the other commercial tools out there (I have tried a few in recent years) will be in a better position, because there is a hard limit on what can be achieved non-locally, with no administrative authentication credentials.

Both Nessus and OpenVAS allow credentialed testing, but it’s clear this aspect was never part of the core design. Nessus has expanded its portfolio of credentialed tests, but in the time allocated I could not get it to work with SSH public key authentication. In any case, a CIS benchmark approach will always be not-so-great, for reasons outside the scope of this article. We also have to be careful about where authentication credentials are stored. In the case of SSH keys, this means storing a private key, and with some vendors the key will be stored in their cloud somewhere out there.

Conclusion

This post focusses on one major aspect of VA scanning: grabbing banners and reporting on vulnerability based on what the banner says. This is better than nothing, but its futility is hopefully illustrated here, and this approach is core to most of what VA scanners do for us.

The market priority has always been towards unauthenticated scanning. Little focus was ever given to credentialed scanning. This has to change because the unauthenticated approach is like trying to diagnose a problem with your car without ever lifting the bonnet/hood, and moreover we could be moving into an era where accreditation bodies mandate credentialed scanning.

Fintechs and Security – Part Two

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Threat and Vulnerability Management (TVM) – Application Security

This part covers some high-level guiding points related to the design of the application security side of TVM (Threat and Vulnerability Management), and the more common pitfalls that plague lots of organisations, not just fintechs. I won’t be covering different tools in the SAST or DAST space apart from one known-good. There are some decent SAST tools out there, but none really stand out. The market is ever-changing. When I ask vendors to explain what they mean by [new acronym], what usually results is nothing, or a blast of obfuscation. So I’m not here to talk about specific vendor offerings, especially as the SAST challenge is so hard to get even close to right.

With vulnerability management in general, ${VENDOR} has succeeded in muddying the waters by claiming to be able to automate vulnerability management. This is nonsense. Vulnerability assessment can, to some limited degree, be automated with decent results, but vulnerability management cannot be automated.

The vulnerability management cycle has also been made more complicated by GRC folk who will present a diagram representing a cycle with 100 steps, when really it’s just assess –> deduce risk –> treat risk –> GOTO 1. The process is endless, and in the beginning it will be painful, but if handled without redundant theory, acronyms-for-the-sake-of-acronyms-for-the-same-concept-that-already-has-lots-of-acronyms, rebadging older concepts with a new name to make them seem revolutionary, or other common obfuscation techniques, it can easily be integrated as an operational process fairly quickly.

The Dawn Of Application Security

If you go back to the heady days of the late 90s, application security was a thing; it just wasn’t called “application security”. It was called penetration testing. Around the early 2000s, firewall configurations improved to the extent that in a pen test you would only “see” port 80 and/or 443 exposing a web service on Apache, Internet Information Server, or iPlanet (those were the days – buffer overflow nirvana). So with other attack channels being closed off at the perimeter, more scrutiny was given to web-based services.

Attackers realised you could subvert user input by intercepting it with a proxy, modifying some fields, perhaps injecting some SQL or HTML, and seeing output that you wouldn’t expect to see as part of the business goals of the online service.

At this point the “application security” world was formed and vulnerabilities were classified and given new names. The OWASP Top Ten was born, and the world has never been the same since.

SAST/DAST

More acronyms have been invented by ${VENDOR} since the early, pre-Holocene days of appsec, supposedly representing “brand new” concepts such as SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing), which are the new equivalents of white box and black box testing respectively. The basic difference is about access to the source code. SAST is source code testing, while DAST is an approach that involves testing for OWASP-type vulnerabilities while the software is running and accepting client connection requests.
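
As a contrived illustration of the difference (my own example, not taken from any tool’s documentation): SAST operates on the source and flags patterns like the string-built SQL query below, whereas DAST would only find the same flaw by firing requests at the running service and observing how it responds.

# The kind of pattern a SAST tool flags: user input concatenated into SQL.
def get_user_bad(cursor, username):
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cursor.fetchone()

# The fix it would suggest: a parameterised query, so the driver handles
# quoting (placeholder syntax varies by driver - %s here, ? for sqlite3).
def get_user_better(cursor, username):
    cursor.execute("SELECT * FROM users WHERE name = %s", (username,))
    return cursor.fetchone()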

The SAST scene is one that has been adopted by fintechs in more recent times. If you go back 15 years, you would struggle to find any real commercial interest in doing SAST – so if anyone ever tells you they have “20” or even “10” years of SAST experience, suggest they improve their creativity skills. The general feeling, not unjustified, was that for a large, complex application, assessing thousands of lines of source code at the cost of a vital organ per day couldn’t be justified.

SAST is more of a common requirement these days. Why is that? The rise of fintechs, whose business is solely about the generation of applications, is one side of it, and fintechs can (and do) go bust if they suffer a breach. Also – ${VENDOR}s have responded to the changing appsec landscape by offering “solutions”. To be fair, the offerings ARE better than 10 years ago, but it wouldn’t take much to better those Hello World scripts. No but seriously, SAST assessment tools are raved about by Gartner and other independent sources, and they ARE better than offerings from the Victorian era, but only in certain refined scenarios and with certain programming languages.

If it were possible to comprehensively assess lots of source code for vulnerability and get accurate results, then theoretically DAST would be harder to justify as a business undertaking. But as it is, SAST + DAST, despite the extensive resources required to do it effectively, can be justified in some cases. In other cases it can be perfectly fine to just go with DAST. It’s unlikely ever to be OK to just go with SAST, because of the scale of the task with complex apps.

Another point here – I see some fintechs using more than one SAST tool, from different vendors. There’s usually not much to gain from this. Some tools are better with some programming languages than others, but there is nothing cast in stone or any kind of majority view here. The cost of going with multiple vendors is likely to get harder and harder to justify as time goes on.

Does Automated Vulnerability Assessment Help?

The problem of appsec is still too complex for decent returns from automation. Anyone who has ever done any manual testing for issues such as XSS knows the myriad ways in which such issues can manifest. The blackbox/blind/DAST scene is still not much more than Burp and Dirbuster, and even then it’s mostly still manual testing with proxies. Don’t expect to cover all OWASP Top 10 issues for a complex application that presents an admin plus a user interface, even in a two-week engagement with four analysts.
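
To illustrate why: the naive automated check below – a sketch with a made-up URL and parameter name – only spots the simplest reflected XSS case, a marker string echoed back unescaped. It says nothing about stored XSS, DOM-based XSS, or reflections landing inside a script block or attribute, and that gap is exactly where the manual proxy work comes in.

import urllib.parse
import urllib.request

# Hypothetical target and parameter - placeholders only.
BASE = "http://testbox.example/search"
MARKER = '"><xsscheck123>'

url = BASE + "?" + urllib.parse.urlencode({"q": MARKER})
with urllib.request.urlopen(url, timeout=5) as response:
    body = response.read().decode(errors="replace")

# An unescaped reflection is a lead worth investigating manually,
# not a confirmed finding.
print("possible reflection" if MARKER in body else "no naive reflection")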

My preferred approach is still Fred Flintstone’y, but since the automation just isn’t there yet, maybe it’s the best approach? This needs to happen when an application is still in the conceptual whiteboard architecture design phase, not a fully grown [insert Hipster-given-name], and it goes something like this: whiteboard, application architect – zero in on the areas where data flows involve transactions with untrusted networks or users. Crypto/key management is another area to zoom in on.

Web Application Firewall

The best thing about WAFs is that they only allow propagation of the most dangerous attacks. But seriously, a WAF can help, and in some respects, given the above-mentioned challenges of automating code testing, you need all the help you can get, but you need to spend time teaching the WAF about your expected URL patterns and tuning it – this can be costly. A “dumb”, default-configured WAF can probably catch drive-by type attacks against publicly disclosed vulnerabilities, as long as you keep it updated. A lot depends on your risk profile, but note that you don’t need a security engineer to install a WAF and leave it in default config. Pretty much anyone can do this. You _do_ need an experienced security engineer or two to properly understand an application and configure a WAF accordingly.

Python and Ruby – Web Application Frameworks

Web application frameworks such as Ruby on Rails (RoR) and Django are in common usage in fintechs, and are, at least in some cases, developed with security in mind, in that they offer developers security features that are on by default. For example, with Django, if you design an HTML form for user input, validation will have been automagically configured on the server side, depending on the field type. So an email address will be validated client-side and server-side as an email address. Most OWASP issues are the result of failures to validate user input on the server side.
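
As a minimal sketch of that default behaviour (the form and field names are my own, not from any particular project):

# forms.py - the field types give you server-side validation for free.
from django import forms

class SignupForm(forms.Form):
    email = forms.EmailField()              # must parse as an email address
    age = forms.IntegerField(min_value=18)  # must be an integer >= 18

# In a view: form = SignupForm(request.POST); form.is_valid() runs those
# checks on the server regardless of what the browser-side JavaScript did.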

Note also, though, that with Django you can still disable the automatic HTML escaping with a “|safe” filter in the template, so user input rendered that way goes out untouched. It’s dangerous to assume that all user input is sanitised.

In Django templates you will also see a CSRF token rendered as a hidden form field wherever the {% csrf_token %} tag is used inside a form.
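
Both points look something like this in a template (a sketch, with made-up variable names):

<form method="post">
  {% csrf_token %}    {# rendered as a hidden input named csrfmiddlewaretoken #}
  {{ form.as_p }}
</form>

{{ comment }}         {# auto-escaped: "<script>" goes out as "&lt;script&gt;" #}
{{ comment|safe }}    {# escaping disabled - only ever for trusted content #}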

The point here is – the root of all evil in appsec is missing server-side validation, and much of your server-side validation effort in development will be covered by default if you go with RoR or Django. That is not the end of the story with appsec and Django/RoR apps, though. Vulnerability in the host OS and other applications can be problematic, and it’s far from the case that use of either Django or RoR as a dev framework eliminates the need for DAST/SAST. However, the effort will be significantly reduced compared to the C/Java/PHP cases.

Wrap-up

Overall I don’t want to take too much time bleating about this topic, because the takeaway is clear – you CAN take steps to address application security assessment automation and include the testing as part of your CI/CD pipeline, but don’t expect to catch all vulnerabilities, or even half of what is likely an endless list.

Expect that you will be compromised and plan for it – this is cheaper than spending zillions (e.g. by going with multiple SAST tools, as I’ve seen plenty of times) on solving an unsolvable problem – just don’t let an incident result in a costly breach. This is the purpose of security architecture and engineering. It’s more about dealing with the consequences of an initial exploit of an application security fail than about eliminating vulnerability.