What Is Your VA Scanner Really Doing?

It’s clear from social media and first-hand reports that awareness of what VA (Vulnerability Assessment) scanners are really doing in testing scenarios is quite low. So I set up a test box with Ubuntu 18 and exposed some services which are well known to the hacker community and still popular in production business use cases: Secure Shell (SSH) and an Apache web service.

This post isn’t an attack on VA products at all. It’s aimed at setting a healthier expectation, and I will cover a test scenario with a packet sniffer (Wireshark), Nessus Professional, and OpenVAS that illustrates the point.

I became aware 20 years ago, from validating VA scanner output, that a lot of what VA scanners barf out is alarmist (red flags, CRITICAL [fix NOW!]) and also based purely on guesswork: when the scanner “sees” a service, it grabs a service banner (e.g. “OpenSSH 7.6p1 Ubuntu 4ubuntu0.3”), looks in its database for publicly disclosed vulnerabilities in that version, and flags vulnerabilities if there are any associated CVEs. Contrary to popular belief, there is no further interaction to investigate or validate the vulnerability. All vulnerability reporting is based on the service banner. So if I change my banner to “hi OpenVAS”, nothing will be reported. And in security, we like to advise hiding product names and versions – this helps with drive-by style automated attacks in a much more effective way than, for example, changing default service ports.

This article demonstrates the VA scanner behaviour described above and covers developments over the past 20 years (did things improve?) with the two most commonly found scanners: Nessus and OpenVAS. Even where these are not used directly, they are used indirectly (vendors in this space do not reinvent the wheel; they take existing IP – all legal I’m sure – and create their own UI for it). It was fairly well known that Nessus was the basis of most commercial VA products in the 00s, and it seems unlikely that scenario has changed a great deal.

Test Setup

So if I look at my test box setup, I see the following from the port scan results (nmap):

PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.6p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)
25/tcp open smtp Postfix smtpd
80/tcp open http Apache httpd 2.4.29 ((Ubuntu))
139/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
445/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
3000/tcp open http Apache httpd 2.4.29 ((Ubuntu))
5000/tcp open http Docker Registry (API: 2.0)
8000/tcp open http Apache httpd 2.4.29
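
For reference, output like the above comes from a service/version detection scan; something along these lines (the exact flags are my assumption, not given in the post):

nmap -sV -p- --open <target-ip>    # -sV grabs service banners, -p- covers all TCP ports, --open hides closed ports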

So…naughty, naughty. Apache is not so old, but I’d still expect to see some CVEs flagged, and I can say the same for the SSH service. Samba is there too in a default configuration. Samba is Linux’s implementation of MS Windows SMB (Server Message Block) and is full of holes. The Postfix mail service is also quite old, and there’s a Docker API exposed! All this would get an attacker quite excited, and indeed there are plenty of automated attack scenarios which would work here.

There was also an EOL phpMyAdmin and an EOL jQuery wrapped up in the web service.

Developments in Two Decades

So there have been some changes. For want of a better word, there’s now more honesty. In the case of OpenVAS, for a vulnerability that involves grabbing a banner and assuming vulnerability on that basis, there is a Quality of Detection (QoD) rating, with a default reporting threshold of around 70%. This is a kind of probability rating that a finding is not a false positive. Interestingly, findings that involve a banner grab are way down there under 50, and most are no longer flagged as “critical”.

Nessus, for its banner-grabbed vulnerabilities, is more explicit: its report will state “Note that Nessus has not tested for this issue but has instead relied only on the application’s self-reported version number.”

Even 7 years ago, there would be lots of issues reported for an outdated Apache or SSH service, many of which would be wrongly flagged as CRITICAL but were not necessarily exploitable, and the existence of the vulnerability was based only on a text banner. So these more recent VA versions are an improvement, but it’s clear the awareness of these issues is still quite low. The problem now is that we do want to see if services are downlevel, so please $VENDOR, don’t hide them (more on this later).

First Scan – Banners On Display

So using Wireshark, sniffing HTTP on port 80 (plain text), we have the following…

Wireshark window showing the OpenVAS interaction with the test box target

The packets highlighted in black are the only two of any interest: OpenVAS has used the HTTP GET method to request “/”, and receives a response whose header shows the product (Apache) and version (2.4.29).

Note the Wireshark filter used (tcp.port == 80 and http). Other than the initial exchange where a banner was grabbed, there was no further interaction. This was the same for Nessus.
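
You can reproduce the scanner’s entire “interaction” yourself; a sketch (the target IP is a placeholder):

# HTTP: one GET request, then read the Server header – this is all the scanner keyed on
curl -s -o /dev/null -D - http://<target-ip>/ | grep -i '^Server:'
# SSH: the banner is the first line the service sends on connect
nc -w 3 <target-ip> 22 | head -1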

What was reported? Well, for OpenVAS, a handful of potential CVEs were reported, but I had to lower the QoD threshold to see them, which is interesting. If anything this is moving the bar too far in the opposite direction. I mean, as an owner of this system, I do want to know if I am running old warez!

For Nessus, 6 Apache CVEs were reported with either “critical” or “high” severity. Overall, I had a similar experience to that with OpenVAS, except that to even see the Apache issues reported I had to beg the scanner with the following scan configuration:

  • Settings –> Assessment –> Override normal accuracy and show potential false alarms
  • Settings –> Assessment –> Perform thorough tests
  • Settings –> Advanced –> Safe checks: on (and I also tried the “off” option)
  • Settings –> Advanced –> Plugins –> Web Servers –> enabled. This is the Apache vulnerability section.

For the SSH service, OpenVAS reported 3 medium issues, which is roughly what I was expecting. Nessus did not report any at all! Answers on a postcard for that one.

Banners Concealed

What was interesting was that the Secure Shell service no longer presents an option to hide the banner, and on investigation, the consensus in the community is that the banner is needed in some cases – clients use the version string to work around version-specific quirks.

Apache, however, did present a banner obfuscation option. For Ubuntu 18 and Apache 2.4.29, this involved:

  • apt install libapache2-mod-security2
  • a2enmod security2
  • edit /etc/apache2/conf-available/security.conf
  • ServerTokens set to “Prod”
  • systemctl restart apache2

This setup results in the following banner for Apache: Apache httpd – so no version number.
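
For reference, the same steps as a copy-and-paste sketch (run as root; the sed assumes security.conf already contains a ServerTokens line, which Ubuntu’s stock file does):

apt install -y libapache2-mod-security2
a2enmod security2
sed -i 's/^ServerTokens .*/ServerTokens Prod/' /etc/apache2/conf-available/security.conf
systemctl restart apache2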

The outcome? As expected, all mention of Apache has now ended. Neither OpenVAS nor Nessus reported anything of any note to do with Apache.

What DID The Scanners Find?

Just to summarise the findings when the banners were fully on display…it wasn’t a blank slate. There were some findings. Here are the highlights – for OpenVAS:

  • All critical issues detected were related to phpMyAdmin, plus one related to jQuery being EOL, without stating any particular vulnerability. These version numbers are remotely queryable, and this is the basis on which these issues were reported.
  • The SSH and Apache issues.
  • Other lower criticality issues were around certificate ciphers.
  • Some CVSS 6 (medium) issues with Samba – again, these are banner-grabbed guesswork findings.

Nessus didn’t report anything outside of what OpenVAS flagged; OpenVAS reported significantly more issues.

It should be said that both scanners did a lot of querying for HTTP application layer issues, which could be seen in the packet sniffer output. For example, queries were made for a Python/Django settings.py (database password), and other HTTP gotchas.
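
If you want to watch these probes yourself, something like this works from the target side (the interface name is a placeholder):

tshark -i <interface> -Y 'http.request && tcp.port == 80' -T fields -e http.request.method -e http.request.uri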

Unauthenticated Versus Credentialed Testing

With VA scanners, the picture hasn’t really changed in 20 years. If anything the picture is worse now, because the balance with banner-grabbing guesswork has swung too far the other way, and we have to plead with the scanners to tell us about downlevel software versions. This is presumably an effort to reduce the number of false positives, but it’s not an advisable strategy. It’s perfectly OK to let us know we are running old wares, and if we want, we should be able to see the CVEs associated with our listening services, even if many of them are false positives (and I can say from 20 years of network penetration testing, there will be plenty).

With this type of unauthenticated VA scanning though, the real problem has always been false negatives (to the extent that an open Docker API wasn’t flagged as a problem by either scanner), and none of the other commercial tools out there (I have tried a few in recent years) will be in a better position, because there is a hard limit to what can be achieved remotely without administrative authentication credentials.

Both Nessus and OpenVAS allow credentialed testing, but it’s clear this aspect was never part of the core design. Nessus has expanded its portfolio of credentialed tests, but in the time allocated I could not get it to work with SSH public key authentication. In any case, a CIS benchmark approach will always be not-so-great, for reasons outside the scope of this article. We also have to be careful about where authentication credentials are stored. In the case of SSH keys, this means storing a private key, and with some vendors the key will be stored in their cloud somewhere out there.
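
If you do go the credentialed route with SSH keys, a dedicated low-privilege scan account at least limits the blast radius if those keys leak; a sketch (the account name and key path are illustrative, not from either product’s documentation):

useradd -m -s /bin/bash vascan                                 # run on the target, as root
ssh-keygen -t ed25519 -f ~/.ssh/vascan_key -C 'va-scanner'     # run on the scanner host
ssh-copy-id -i ~/.ssh/vascan_key.pub vascan@<target-ip>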

Conclusion

This post focuses on one major aspect of VA scanning: grabbing banners and reporting on vulnerability based on what the banner says. This is better than nothing, but its futility is hopefully illustrated here, and this approach is core to most of what VA scanners do for us.

The market priority has always been towards unauthenticated scanning. Little focus was ever given to credentialed scanning. This has to change because the unauthenticated approach is like trying to diagnose a problem with your car without ever lifting the bonnet/hood, and moreover we could be moving into an era where accreditation bodies mandate credentialed scanning.

Netdelta – Install and Configure

Netdelta is a tool for monitoring networks and flagging alerts upon changes in advertised services. Now, I like Python and especially Django, and around 2014 or so I was asked to set up a facility for monitoring for changes in that organisation’s perimeter. After some considerable digging, I found nada, as in nothing, apart from a few half-baked student projects. So I went off and coded Netdelta, and the world has never been the same since.

I guess when I started with Netdelta I didn’t see it as a solution that would be widely popular, because I was under the impression you could just do some basic shell scripting with nmap, and ndiff is specifically designed for delta flagging. However, what became apparent at an early stage was that timeouts are a problem. A delta will be flagged when a host or service times out – and this happens a lot, even on a gigabit LAN, and it happens even more in public clouds. I built some analytics into Netdelta that look back over the scan history and make a call on the likelihood of a false positive (red, amber, green).
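
For illustration, the naive wrapper that paragraph refers to looks something like this (a sketch, and emphatically not how Netdelta works – every timeout shows up as a bogus delta):

nmap -sV -oX scan-new.xml <subnet>
ndiff scan-old.xml scan-new.xml || echo "delta detected"    # ndiff exits non-zero when the scans differ
mv scan-new.xml scan-old.xml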

Most organisations I worked with would benefit from this. One classic example I can think of: a trading house that had been on an aggressive M&A spree, maybe it was Black Friday or…? Anyway, they fired some network engineers and hired some new and cheaper ones, exacerbating what was already a poorly managed perimeter scenario. The CISO wanted to know what was going on with these Internet-facing subnets – enter Netdelta. Unauthorised changes are a problem! I am directly aware of no fewer than 6 incidents that occurred as a result of exposed SSH, SMB (WannaCry), and more recently RDP, and indirectly aware of many more.

Anyway, without further waffle, here’s how you get Netdelta up and running. Warning: there are a few moving parts, but if someone wants it in Docker, let me know.

I always go with Ubuntu. The differences between Linux distros are like the differences between mueslis. My build was on 18.04, but it’s highly likely 19 variants will be just fine.

apt-get update
apt-get install curl nmap apache2 python3 python3-pip python3-venv rabbitmq-server mysql-server libapache2-mod-wsgi-py3 git
apt-get -y install software-properties-common
add-apt-repository ppa:certbot/certbot
apt-get install -y python-certbot-apache
Clone the repository from GitHub into your <netdelta root>:
git clone https://github.com/SevenStones/netdelta.git

Filesystem

Create the user that will own <netdelta root>

useradd -s /bin/bash -d /home/<user> -m <user>

Create the directory that will host the Netdelta Django project if necessary

Add the user to a suitable group, and strip world permissions from netdelta directories

 groupadd <group>
 usermod -aG <group> <user>
 usermod -aG <group> www-data
 chown -R www-data:<group> /var/www
 chown -R <user>:<group> <netdelta root>

Make the logs dir, e.g. /logs; you will need to modify /nd/netdelta_logger.py to point to this location. Note that the celery monitor logs go to /var/log/celery/celery-monitor.log, which of course you can change.

Strip world permissions from all netdelta and apache root dirs:

chmod -Rv o-rwx <web root>
chmod -Rv o-rwx <netdelta root>

Virtualenv

The required packages are in requirements.txt, in the root of the git repo. Your virtualenv build with Python 3 goes approximately like this …

python3 -m venv /path/to/new/virtual/environment

You activate thusly: source /path/to/new/virtual/environment/bin/activate

Then suck in the requirements as root, remembering to fix permissions after you do this.

pip3 install wheel
pip3 install -r <netdelta root>/requirements.txt

You can use whatever supported database you like. MySQL is assumed here.

The Python framework mysqlclient was used with earlier versions of Django and MySQL, but with Python 3 and later Django versions, the word on the street is that PyMySQL is the way to go. With this though, it took some trickery to get the Django project up and running, in the form of editing the project’s __init__.py (<netdelta root>/netdelta/__init__.py) and adding a few lines …

import pymysql 
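# Make Django's MySQL backend use PyMySQL in place of MySQLdb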
pymysql.install_as_MySQLdb()

While in the virtualenv, and under your netdelta root, add a superuser for the Django app (python manage.py createsuperuser).

Patch libnmap

Two main mods to libnmap were necessary for use with Netdelta. First, with later versions of Celery (>3.1), there was an issue with “daemonic processes are not allowed to have children”, for which an alternative fork of libnmap fixed the problem. Second, we needed libnmap to return to Netdelta the process ID of the running nmap port scanner process.

cd /opt
git clone https://github.com/pyoner/python-libnmap.git
cp ./python-libnmap/libnmap/process.py <virtualenv root>/lib/python3.x/site-packages/libnmap/

Then patch libnmap to allow Netdelta to kill scanning processes:

<netdelta_root>/scripts/fix-libnmap.bash

Change the environment variables to match your install, and pass the virtualenv name as a parameter.

Database Setup

Create a database called netdelta and use whichever encoding and collation you like.

CREATE DATABASE netdelta CHARACTER SET utf8 COLLATE utf8_general_ci;
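
The Django settings will also need a database account with rights on this schema; the guide doesn’t create one, so this is a sketch (the user name and password are placeholders):

mysql -u root -p -e "CREATE USER 'netdelta'@'localhost' IDENTIFIED BY '<password>'; GRANT ALL PRIVILEGES ON netdelta.* TO 'netdelta'@'localhost';"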

Then from <netdelta root> with virtualenv engaged:

python manage.py makemigrations nd
python manage.py migrate

Web Server

I am assuming all you good security pros don’t want to use the development server? Well, as you’re only dealing with port scan data then…your call. I’m assuming Apache as a production web server.

You will need to give Apache a stub web root and enable the wsgi module. For the latter I added this to apache2.conf – this gives you some control over the exact version of Python loaded.

LoadModule wsgi_module "/usr/lib/python3.7/site-packages/mod_wsgi/server/mod_wsgi-py37.cpython-37m-i386-linux-gnu.so"
WSGIPythonHome "/usr"

Under the DocumentRoot line in your apache config file, give the pointers for WSGI.

WSGIDaemonProcess <site> python-home=<virtualenv root> python-path=<netdelta root>
WSGIProcessGroup <site>
WSGIScriptAlias / <netdelta root>/netdelta/wsgi.py
Alias /static/ <netdelta root>/netdelta/

Note also you will need to adjust your wsgi.py under <netdelta root>/netdelta/ –

import site
# Add the site-packages of the chosen virtualenv
site.addsitedir('<virtualenv root>/lib/python3.7/site-packages')
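
After the config changes, a syntax check and restart (standard Apache commands):

apachectl configtest && systemctl restart apache2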

Celery

From the current shell, with the virtualenv activated, and under <netdelta root>:

celery worker -E -A nd -n default -Q default --loglevel=info -B --logfile=<netdelta root>/logs/celery.log

Under systemd (you will almost certainly want to do this), running as the root user, the script pointed to by systemd for

systemctl start celery

can have all the environment checking you like (this isn’t intended to be a tutorial in bash scripting), but the core of it is…

cd <netdelta root>
nohup $VIRTUALENV_DIR/bin/celery worker -E -A nd -n ${SITE} -Q ${SITE} --loglevel=info -B --logfile=${SITE_LOGS}/celery.log >/dev/null 2>&1 &

And then you can put together your own scripts for status, stop, restart.
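
A sketch of a matching status/stop helper, reusing the same variable names as the start snippet above (these are assumptions about your environment, not part of Netdelta):

case "$1" in
  status) pgrep -af "celery worker .* -n ${SITE}" || echo "celery worker for ${SITE} not running" ;;
  stop)   pkill -f "celery worker .* -n ${SITE}" ;;
  *)      echo "usage: $0 {status|stop}" ;;
esac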

#WannaCry and The Rise and Fall of the Firewall

The now infamous WannaCry ransomware outbreak was the most widespread malware outbreak since the early 2000s. There was a very long gap between the early 2000s “worm” outbreaks (think Sasser, Blaster, etc.) and this latest 2017 WannaCry outbreak. The usage of the phrase “worm” was itself widespread, especially as it was included in CISSP exam syllabuses, but then it died out. Now it’s seeing a resurgence that started last weekend – but why? Why is the worm turning for the worm (I know – it’s bad – but it had to go in here somewhere)?

As far as WannaCry goes, there have been some interesting developments over the past few days. Contrary to popular belief, it did not affect Windows XP; the most commonly affected version was Windows 7, and according to some experts, the leading suspect in the case is the Lazarus Group, with ties to North Korea.

But this post is not about WannaCry. I’m going to say it: I used WannaCry to get attention (and with this statement I’m already more honest than the numerous others who jumped on the WannaCry bandwagon, including our beloved $VENDOR). But I have been meaning to cover the rise and fall of the firewall for some time now, and this instance of a widespread and damaging worm, one that spreads by exploiting poor firewall configurations, brought it forward by a few months.

A worm is malware that “uses a computer network to spread itself, relying on security failures on the target computer”. Think of malware delivery and propagation as two different things: lots of malware since 2004 used email (think phishing) as a delivery mechanism but spread using an exploit once inside a private network. Worms use network propagation to both deliver and spread, and that is the key difference. WannaCry is without doubt a worm. There is no evidence to suggest WannaCry was delivered on the back of successful phishing attacks – as illustrated by the lack of WannaCry home user victims (who sit behind the protection of NAT’ing home routers). Most of the early WannaCry posts covered phishing anyway, mostly because of a refusal to believe that Server Message Block ports would ever be exposed to the public Internet.

The infosec sector is really only 20 years old in terms of the widespread adoption of security controls in larger organisations, so we have only just started to have a usable, relatable history in infosec. Firewalls are still, in 2017, the security control that delivers the most value for investment, and they’ve been around since day one. But in the past 20 years I have seen firewall configurations go through a spectacular rise in the early 2000s, to a spectacular fall a decade later.

Late 90s Firewall

If we’re talking late 90s, even with some regional APAC banks, you would see huge swaths of open ports in port scan results. Indeed, a firewall to many late 90s organisations was as in the image to the left.

However – you can ask a firewall what it is, even a “Next Gen” firewall, and it will answer “I’m a firewall, I make decisions on accepting or rejecting packets based on source and destination addresses and services”. Next Gen firewall vendors tout the ability of firewalls to do layer 7 DPI stuff such as IDS, WAF, etc., but from what I am hearing, many organisations don’t use these features for one reason or another. Firewalls are quite a simple control to understand, and organisations got the whole firewall thing nailed quite early on in the game.

When we got to 2002 or so, you would scan a perimeter subnet and only see VPN and HTTP ports. Mind you, egress controls were still quite poor back then, and continue to be lacking to the present day, as is also the case with internal firewalls other than a DMZ (if there are any). 2002 was also the year when application security testing (OWASP type vulnerability testing) took off, and I doubt it would ever have evolved into a specialised area if organisations had not improved their firewalls. Ultimately organisations could improve their firewalls but they still had to expose web services to the planet. As Marcus Ranum said, when discussing the “ultimate firewall”, “You’ll notice there is a large hole sort of in the centre [of the ultimate firewall]. That represents TCP Port 80. Most firewalls have a big hole right about there, and so does mine.”

During testing engagements over the next decade, perimeter firewalls would be well configured in the majority of cases. But then we entered an “interesting” period. It started for me around 2012. I was conducting a vulnerability scan of a major private infrastructure facility in the UK…and “what the…”! RDP and SMB vulnerabilities! So the target organisation served a vital function in national infrastructure, and they exposed databases, SMB, and terminal services ports to the planet. In case there’s any doubt: that’s bad. And since 2012, firewall configs have fallen by the wayside.

WannaCry is delivered and spreads using an SMB vulnerability, much as Blaster and Sasser spread by exploiting exposed Windows services all those years ago. If we look at Shodan results for Internet exposure of SMB, we find 1.5 million cases. That’s a lot.

So how did we get here? Well, there are no answers born of questionnaires and research, but I have my suspicions:

  • All the talk of “Next Generation” firewalls and layer 7 has led to organisations taking their eye off the ball when it comes to layer 3 and 4.
  • All the talk of magic $VENDOR snake oil silver bullets in general has led organisations away from the basics. Think APT-Buster ™.
  • All the talk of outsourcing has led some organisations, as Dr Anton Chuvakin said, to outsource thinking.
  • Talk of “distortion” of the perimeter (as in “in this age of mobile workforces, where is our perimeter now?”). Well the perimeter is still the perimeter – the clue is in the name. The difference is now there are several perimeters. But WannaCry has reminded us that the old perimeter is still…yes – a perimeter.
  • There are even some who advocated losing the firewall as a control, but one of the rare success stories for infosec was the subsequent flaming of such opinions. BTW when was that post published? Yes – it was 2012.

So, some general guidelines:

  • The Internet is an ugly place with lots of BOTs and humans with bad intentions, along with those who don’t intend to be bad but just are (I bet there are lots of private org firewall logs which show connection attempts of WannaCry from other organisations).
  • Block incoming for all ports other than those needed as a strict business requirement. Default-deny is the easiest way to achieve this (see the sketch after this list).
  • Workstations and mobile devices can happily block all incoming connections in most cases.
  • Egress is important – also discussed very eloquently by Dave Piscitello. It’s not all about ingress.
  • Other pitfalls with firewalls involve poor usage of NAT and those pesky network dudes who like to bypass inner DMZ firewalls with dual homing.
  • Watch out for connections to critical infrastructure such as databases from any internal subnet from which human-used devices derive. Those can be blocked in most cases.
  • Don’t focus on WannaCry. Don’t focus on Ransomware. Don’t focus on malware. Focus on Vulnerability Management.
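
A minimal default-deny ingress sketch with ufw on Ubuntu (the allowed port is just an example – permit only what your business actually requires):

ufw default deny incoming
ufw default allow outgoing    # tighten egress separately, per the point above
ufw allow 443/tcp
ufw enable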

So then perimeter firewall configurations, it seems, go through the same cycles that economies and seasonal temperature variations go through. When will the winter pass for firewall configurations?

The Art Of The Security Delta: NetDelta

In this article we’ll be answering the following questions:

  • What is NetDelta?
  • Why should I be monitoring for changes in my network(s)?
  • Challenges we faced along the way
  • Other ways of detecting deltas in networks
  • About NetDelta

What Is NetDelta?

NetDelta allows users to configure groups of IP addresses (by department, subnet, etc) and perform one-off or periodic port scans against the configured group, and have NetDelta send alerts when a change is detected.

There’s lots of shiny new stuff out there. APT-buster ™, Silver Bullet ™, etc. It’s almost as though someone sits in a room and literally looks for combinations of words that haven’t been used yet and uses that as the driver for a new VC-sponsored effort. “OK, ‘threat’ is the most commonly used buzzword currently. Has ‘threat’ been combined with ‘cyber’ and ‘buster’ yet?”, “No”, [hellllooow Benjamin]. The most positive spin we can place on this is that we got so excited about the future that we ignored the present.

These new products are seen as serving the modern-day needs of information security, as though the old challenges, going back to day 0 in this sector, or “1998”, have been nailed. Well, how about the old stalwart of Vulnerability Management? The products do not “manage” anything; they produce lists of vulnerabilities – this is “assessment”, not “management”. And the lists they produce are riddled with noise (false positives), and what’s worse, there’s a much bigger false negatives problem, in that the tools do not cover whole swaths of corporate estates. It doesn’t sound like this is an area that is well served by open source or commercial offerings.

Why Do I Need To Monitor My Networks For Changes?

On the same theme of new products in infosec – how about firewalls (that’s almost as old as it gets)? Well, we now have “next-gen” firewalls, but does that mean old-gen firewalls were phased out, that we solved the Network Access Control problem and then moved on?

How about this: if there is a change in listening services, say, for example, in your perimeter DMZ (!), and you didn’t authorise it, that cannot be a good thing. It’s one of the following:

  • Hacker/malware activity, e.g. hacker’s connection service (e.g. root shell), or
  • Unauthorised change, e.g. network ops changed firewall or DMZ host configuration outside of change control
  • You imagined it – perhaps lack of sleep or too much caffeine

None of these can be good. They can only be bad. And do we have a way to detect such issues currently?

How does NetDelta help us solve these problems?

Users can configure scans either on a one-off basis or to be run periodically. So, for example, as a user I can tell NetDelta to scan my DMZ perimeter every night at 2 AM and alert me by email if something has changed from the previous night’s scan:

  • Previously unseen host comes online in a subnet, either as an unauthorised addition to the group, or unauthorised (rogue) firewall change or new host deployment (maybe an unsanctioned wifi access point or webcam, for example) – these concerns are becoming more valid in the age of Internet of Things (IoT) where devices are shipped with open telnets and so on.
  • Host goes offline – this could be something of interest from a service availability/DoS point of view, as well as the dreaded ‘unauthorised change’.
  • Change in the available services – e.g. hacker’s exploit is successful and manages to locally open a shell on an unfiltered higher port, or new service turned on outside of change control. NetDelta will alert if services are added or removed on a target host.

Host ‘state’ is maintained by NetDelta for as long as the retention period allows, and overall 10 status codes reflect the state of a host over successive periodic scans.

Challenges We Faced With NetDelta

The biggest and only major obstacle is the ‘noise’ that results from scan timeouts. With some of the earlier test scans we noticed that sporadic scan timeouts would occur frequently. This presented a problem (it’s sort of a false positive) in that a delta is alerted on, but really there hasn’t been a change in listening services or hosts. We increased the timeout options with nmap, but it didn’t help much and only added masses of time to the scans.

The aforementioned issue is one of the things holding back the nmap/ndiff shell script wrapper option, and ndiff also works with XML text files (messy). Shell scripts can work in corporate situations sometimes, but there are problems around the longevity and reliability of the solution. NetDelta is a web-based, database-driven solution (currently MySQL, but NoSQL is planned) with reports and statistics readily available, and the biggest problem with the ndiff option remains the scan timeout issue mentioned in the previous paragraph.

NetDelta records host “up” and “down” states and allows the user to configure the number of consecutive scans a host must be missing before it is considered really down. So if the user chooses 3, and a target host is down for 3 consecutive scans, it is considered actually down and a delta is flagged.

Overall the ‘state’ of a host is recorded in the backend database, and the state is a code that reflects a change in the availability or existence of a host. NetDelta has a total of 10 status codes.

Are There Other Ways To Detect NetDeltas?

Remember that we’re covering network services here, i.e. the ‘visibility’ of network services as they appear to hackers and your customers alike. This is not the same as local host configuration. I can run a netstat command locally to get a list of listening services, but this doesn’t tell me how well my firewall(s) protect me.
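
For comparison, the local view is trivial to get, but it says nothing about what your firewalls actually expose:

ss -tlnp    # or: netstat -tlnp on older systems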

  • The ndiff option was covered already
  • Firewall management suites. At least one of these can flag changes in firewall rules, but it still doesn’t give the user the actual “real” view of services. Firewalls can be misconfigured, and they can do unexpected things under certain conditions. The port scanner view of a network is effectively the holy grail – it’s the absolute/real view that leaves no further room for interpretation and does not require further processing.
  • IDS – neither HIDS (host-based intrusion detection) nor NIDS (network-based intrusion detection) can give a good representation of this view.
  • SIEM – these systems take in logs from other sources, so partly by extrapolating from the previous comments, and also given the capabilities of SIEM itself, it would seem a challenge to ask a SIEM to do acrobatics in this area. First of all, SIEM is not a cheap solution, and this conversation only applies where the organisation already owns a SIEM and can afford the added log storage space, management overhead, and so on. Of the SIEMs I know, none of them are sufficiently flexible to:
    • take in logs from a port scanning source – theoretically it’s possible if you could get nmap to speak rsyslogish, and I’m sure there are some other acrobatics that are feasible
    • perform delta analysis on those logs and raise an alert

About NetDelta

NetDelta is a Python/Django based project with a MySQL backend (we will migrate to MongoDB – watch this space). Currently at v1.0; you are most welcome to take part in a trial. We stand on the shoulders of giants:

  • nmap (https://nmap.org/)
  • Python (https://www.python.org/)
  • Django (https://www.djangoproject.com/)
  • Celery (http://www.celeryproject.org/)
  • RabbitMQ (https://www.rabbitmq.com/)
  • libnmap – a Python framework for nmap – (https://github.com/savon-noir/python-libnmap)

Contact us for more info!