AuditpolCIS – Automating Windows SIEM CIS Benchmarks Testing

In the previous post on the subject of Windows SIEM, we covered the CIS benchmarks for Windows Auditing Policy in a spreadsheet, which was provided freely (really, actually free).

This week we introduce an open-source Python tool we have developed to automate CIS Benchmark testing.

Download AuditpolCIS

Meeting Regulatory / Compliance / Audit Requirements

The automated assessment results from AuditpolCIS, being based on CIS Benchmarks, help support audit requirements for a number of programs, not least PCI-DSS:

  • Audit account logon events: Helps in monitoring and logging all attempts to authenticate user credentials (PCI-DSS Requirement 10.2.4).
  • Audit object access: Monitors access to objects like files, folders, and registry keys that store cardholder data (PCI-DSS Requirement 10.2.1).
  • Audit privilege use: Logs any event where a user exercises a user right or privilege (PCI-DSS Requirement 10.2.2).
  • Local log file sizes and retention policies are useful in assessing compliance with e.g. Requirements 5.3.4 and 10.5.1 (PCI-DSS 4). These appear as a block of text after the audit policy results in the script output.

Usage / Setup

First you will need to set up a Python virtual environment. Ensure that you have Python installed on your system (Python 3.10 was used in development). If not, download and install Python from the official website: https://www.python.org/downloads/

Open a Command Prompt or terminal window and navigate to the folder where you extracted the AuditpolCIS project.

Run the following command to create a new virtual environment:

python -m venv venv

Activate the virtual environment by running:

For Windows:

venv\Scripts\activate

For macOS/Linux:

source venv/bin/activate

Install the required Python packages from the requirements.txt file by running:

pip install -r requirements.txt

You will need a .env file in your project root. The contents relate to the target you wish to test:

HOSTNAME='<Windows box IP address or host name>'
USERNAME='<Windows user account name>'
PASSWORD='<account password>'


Make sure to assign the right ownership and permissions on .env. Usually the permissions should be 600.
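For illustration – not necessarily how AuditpolCIS itself reads the file – a minimal sketch of loading these variables with the python-dotenv package:

# Minimal sketch, assuming python-dotenv (pip install python-dotenv)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into the environment
hostname = os.getenv('HOSTNAME')
username = os.getenv('USERNAME')
password = os.getenv('PASSWORD')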

Once the virtualenv is enabled, you can run the code:

./auditpolcis.py


Feel free to branch or submit a PR.

Additional Points

The CIS benchmarks are based on Windows Server 2019, but they apply to other variants on a Windows theme. I know none of you will have EOL Windows versions. <Sarcasm engaged>I mean in 22 years of consulting, I’ve never seen any out-of-support warez in critical business usage</Sarcasm engaged>.

PowerShell is not required on the target, but use of PowerShell is also not a crime. Yes, that was a security person who said that.

Sustainability / Use of Regex

I had to use some fairly snazzy regex to pull out Categories (category_pattern = r'^(\w+.*?)(\r)?$') and Subcategories (subcategory_pattern = r'^( {2})([^ ]+.*?)(?=\s{3,})(.*\S)') from the auditpol command output. I did look at more sustainable ways of achieving the same goal, although admittedly I didn’t spend much time doing that. One thing has been clear for a long time with Windows – don’t go looking for registry keys, because that can be very painful. Not only is documentation for a key location somewhat thin and erroneous, the key location also often changes across Windows versions. ChatGPT‘s lack of knowledge of Windows registry keys bears testimony to the previous comments.
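To illustrate how those two patterns behave, here is a minimal sketch applying them line by line (the sample lines are illustrative, not verbatim auditpol output):

import re

category_pattern = r'^(\w+.*?)(\r)?$'
subcategory_pattern = r'^( {2})([^ ]+.*?)(?=\s{3,})(.*\S)'

sample = (
    'System\n'
    '  Security System Extension                    No Auditing\n'
    '  System Integrity                             Success and Failure'
)

for line in sample.split('\n'):
    sub = re.match(subcategory_pattern, line)
    if sub:
        # group(2) is the subcategory name, group(3) the audit setting
        print('Subcategory:', sub.group(2), '->', sub.group(3).strip())
    elif re.match(category_pattern, line):
        print('Category:', line.strip())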

So there are two sources of Subcategory names – cis-benchmarks.yaml and the output of the auditpol /get /category:* command. If there are entries in the YAML file which are not in the auditpol output, they are flagged in the script output, and the same is true vice versa. So if you make spelling mistakes in the Excel sheet or YAML file, they will be flagged. It can also happen that auditpol output subcategories do not match the CIS Benchmarks subcategories, perhaps with different Windows versions as targets. Any such subcategories will be flagged by the script and listed below the pass/fail results.
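The cross-check itself is simple set arithmetic, along these lines (toy data; the names are illustrative):

# Toy illustration of the YAML-versus-auditpol name cross-check
yaml_subcats = {'Security System Extension', 'System Integrity', 'Logon'}
auditpol_subcats = {'Security System Extension', 'System Integrity', 'Logoff'}

for name in sorted(yaml_subcats - auditpol_subcats):
    print('In cis-benchmarks.yaml but not in auditpol output:', name)
for name in sorted(auditpol_subcats - yaml_subcats):
    print('In auditpol output but not in cis-benchmarks.yaml:', name)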

If you want to change the verdicts or [Sub]Category names, you are of course free to do so. You can edit the cis-benchmarks.yaml file, or edit the included spreadsheet, followed by running the included genyaml.py.

Connection Method

The script works over SSH because other types of connection are a pain in the derriere and require you to radically increase your attack surface area, but if there’s a request for e.g. WinRM, please do let me know, or send a Pull Request. See Microsoft’s documentation for more information about enabling the built-in SSH server for Windows.

I know use of AutoAddPolicy with Paramiko in Python is not good form, but I also assume that, as an admin who performs daily tasks using administrative rights, you know your hosts. Sometimes security people do get in the way of progress when there are low-risk issues afoot. Use of RejectPolicy instead of auto-add would be one such case.
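For the curious, the connection amounts to something like this Paramiko sketch (AuditpolCIS’s actual code may differ; swap in RejectPolicy if you prefer strict host key checking):

import paramiko

client = paramiko.SSHClient()
# AutoAddPolicy is convenient but trusts unknown host keys
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('<Windows box>', username='<user>', password='<password>')
stdin, stdout, stderr = client.exec_command('auditpol /get /category:*')
print(stdout.read().decode())
client.close()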

Tests Rationalisation

Some of the tests included are not CIS Benchmarks (out of 59 tests, 32 are CIS Benchmarks, whereas 27 are not). It’s not clear why these subcategories were omitted by CIS, but anyway – in these cases we have made an assessment based on the logging event volume for the subcategory versus its security value. Most of these are just noise, and in many cases very high-volume noise, so we have advised “No Auditing”.

Customising Test Criteria

The testing template is formed from the YAML file cis-benchmarks.yaml. If you prefer to make changes to the testing template with Excel, the sheet is CIS-Audit-Reqs-Windows2019Server.xlsx in the code root. You can then use the Python script genyaml.py to generate a new YAML file (you will need to use the right virtualenv; see above for usage instructions).

What Is Your VA Scanner Really Doing?

It’s clear from social media and first-hand reports that the awareness of what VA (Vulnerability Assessment) scanners are really doing in testing scenarios is quite low. So I set up a test box with Ubuntu 18 and exposed some services which are well known to the hacker community and also still popular in production business use cases: Secure Shell (SSH) and an Apache web service.

This post isn’t an attack on VA products at all. It’s aimed at setting a more healthy expectation, and I will cover a test scenario with a packet sniffer (Wireshark), Nessus Professional, and OpenVAS, that illustrates the point.

I became aware 20 years ago, from validating VA scanner output, that a lot of what VA scanners barf out is alarmist (red flags, CRITICAL [fix NOW!]) and also based purely on guesswork – when the scanner “sees” a service, it grabs a service banner (e.g. “OpenSSH 7.6p1 Ubuntu 4ubuntu0.3”), looks in its database for publicly disclosed vulnerabilities for that version, and flags vulnerability if there are any associated CVEs. Contrary to popular belief, there is no actual interaction in the way of further investigating or validating vulnerability. All vulnerability reporting is based on the service banner. So if I change my banner to “hi OpenVAS”, nothing will be reported. And in security, we like to advise hiding product names and versions – this helps with drive-by style automated attacks, in a much more effective way than, for example, changing default service ports.
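A banner grab is trivially reproducible – this sketch does essentially what the scanner does for SSH (the target host is a placeholder):

import socket

# An SSH server sends its version banner immediately upon connection
with socket.create_connection(('<target host>', 22), timeout=5) as s:
    print(s.recv(256).decode(errors='replace').strip())
# e.g. SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3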

This article then demonstrates the VA scanner behaviour described above and covers developments over the past 20 years (did things improve?) with the two most commonly found scanners: Nessus and OpenVAS, which, even if not used directly, are used indirectly (vendors in this space do not reinvent the wheel; they take existing IP – all legal I’m sure – and create their own UI for it). It was fairly well known that Nessus was the basis of most commercial VA scanners in the 00s, and it seems unlikely that scenario has changed a great deal.

Test Setup

So if I look at my test box setup I see from port scan results (nmap):

PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.6p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)
25/tcp open smtp Postfix smtpd
80/tcp open http Apache httpd 2.4.29 ((Ubuntu))
139/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
445/tcp open netbios-ssn Samba smbd 3.X - 4.X (workgroup: WORKGROUP)
3000/tcp open http Apache httpd 2.4.29 ((Ubuntu))
5000/tcp open http Docker Registry (API: 2.0)
8000/tcp open http Apache httpd 2.4.29

So…naughty, naughty. Apache is not so old, but still I’d expect to see some CVEs flagged, and I can say the same for the SSH service. Samba is there too in a default configuration. Samba is Linux’s implementation of MS Windows SMB (Server Message Block) and is full of holes. The Postfix mail service is also quite old, and there’s a Docker API exposed! All this would get an attacker quite excited, and indeed there are plenty of automated attack scenarios which would work here.

There was also an EOL phpMyAdmin and an EOL jQuery wrapped up in the web service.

Developments in Two Decades

So there have been some changes. For want of a better word, there’s now more honesty. In the case of OpenVAS, for a vulnerability that involves grabbing a banner and assuming vulnerability based on it, there is a Quality of Detection (QoD) rating, with a default threshold of around 70%. This is a kind of probability rating for a finding not being a false positive. Interestingly, those findings that involve a banner grab are way down there under 50, and most are no longer flagged as “critical”.

Nessus, for its banner-grabbed vulnerabilities, is more explicit, and its report will state: “Note that Nessus has not tested for this issue but has instead relied only on the application’s self-reported version number.”

Even 7 years ago, there would be lots of issues reported for an outdated Apache or SSH service, many of which would be flagged wrongly as CRITICAL but weren’t necessarily exploitable, and the existence of the vulnerability was based only on a text banner. So these more recent VA versions are an improvement, but it’s clear the awareness out there of these issues is still quite low. The problem now is – we do want to see if services are downlevel, so please $VENDOR, don’t hide them (more on this later).

First Scan – Banners On Display

So using Wireshark, sniffing HTTP on port 80 (plain text) we have the following…

Wireshark window showing the OpenVAS interaction with the test box target

The packets highlighted in black are the only two of any interest, wherein OpenVAS has used the HTTP GET method to request “/”, and receives a response whose header shows the product (Apache) and version (2.4.29).

Note the Wireshark filter used (tcp.port == 80 and http). Other than the initial exchange where a banner was grabbed, there was no further interaction. This was the same for Nessus.

What was reported? Well, for OpenVAS, a handful of potential CVEs were reported, but I had to lower the QoD to see them! Which is interesting. If anything this is moving the bar too far in the opposite direction. I mean, as an owner of this system, I do want to know if I am running old warez!

For Nessus, 6 Apache CVEs were reported with either critical or “high” severity. Overall, I had a similar experience to that of OpenVAS, except that to even see the Apache issues reported I had to beg the scanner with the following scan configuration setup:

  • Settings –> Assessment –> Override normal accuracy and show potential false alarms
  • Settings –> Assessment –> perform thorough tests
  • Settings –> Advanced –> enable safe checks on (and I also tried the “off” option)
  • Settings –> Advanced –> plugins –> web servers –> enabled. This is the Apache vulnerability section

For the SSH service, OpenVAS reported 3 medium issues, which is roughly what I was expecting. Nessus did not report any at all! Answers on a postcard for that one.

Banners Concealed

What was interesting was that the Secure Shell service no longer presents an option to hide the banner, and on investigation, the majority community view is that the banner is needed in some cases.

Apache however did present a banner obfuscation option. For Ubuntu 18 and Apache 2.4.29, this involved:

  • apt install libapache2-mod-security2
  • a2enmod security2
  • edit /etc/apache2/conf-available/security.conf
  • ServerTokens set to “Prod”
  • systemctl restart apache2

This setup results in the following banner for Apache: Apache httpd – so no version number.
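You can verify the effect yourself: with ServerTokens Prod, the HTTP Server header shrinks to just “Apache”. A quick sketch using the requests package (the target host is a placeholder):

import requests

r = requests.get('http://<target host>/', timeout=5)
# 'Apache/2.4.29 (Ubuntu)' before the change, just 'Apache' after
print(r.headers.get('Server'))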

The outcome? As expected, all mention of Apache has now ended. Neither OpenVAS nor Nessus reported anything of any note to do with Apache.

What DID The Scanners Find?

Just to summarise the findings when the banners were fully on display…it wasn’t a blank slate. There were some findings. Here are the highlights – for OpenVAS:

  • All Critical issues detected were related to phpMyAdmin, plus one related to jQuery being EOL, but not stating any particular vulnerability. These version numbers are remotely queryable, and this is the basis on which these issues were reported.
  • The SSH and Apache issues.
  • Other lower criticality issues were around certificate ciphers.
  • Some CVSS 6 (medium) issues with Samba – again, these are banner-grabbed guesswork findings.

Nessus didn’t report anything outside of what OpenVAS flagged. OpenVAS reported significantly more issues.

It should be said that both scanners did a lot of querying for HTTP application layer issues that could be seen in the packet sniffer output. For example, queries were made for Python/Django settings.py (database password), and other HTTP gotchas.

Unauthenticated Versus Credentialed Testing

With VA scanners, the picture hasn’t really changed in 20 years. If anything the picture is worse now, because the balance with banner-grabbing guesswork has swung too far the other way, and we have to plead with the scanners to tell us about downlevel software versions. This is presumably an effort to reduce the number of false positives, but it’s not an advisable strategy. It’s perfectly ok to let us know we are running old warez, and if we want, we should be able to see the CVEs associated with our listening services, even if many of them are false positives (and I can say from 20 years of network penetration testing, there will be plenty).

With this type of unauthenticated VA scanning though, the real problem has always been false negatives (to the extent that an open Docker API wasn’t flagged as a problem by either scanner), but none of the other commercial tools out there (I have tried a few in recent years) will be in a better position, because there is a hard limit to what can be achieved remotely without administrative authentication credentials.
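For example, confirming the exposed Docker Registry is a one-liner against its well-documented /v2/ endpoint – the kind of active check neither scanner attempted (the target host is a placeholder; the port matches the nmap output above):

import requests

r = requests.get('http://<target host>:5000/v2/', timeout=5)
print(r.status_code)  # 200 from an open, unauthenticated registry API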

Both Nessus and OpenVAS allow use of credentialed testing, but it’s clear this aspect was never part of the core design. Nessus has expanded its portfolio of credentialed tests, but in the time allocated I could not get it to work with SSH public key authentication. In any case, a CIS benchmark approach will always be not-so-great, for reasons outside the scope of this article. We also have to be careful about where authentication credentials are stored. In the case of SSH keys, this means storing a private key, and with some vendors the key will be stored in their cloud somewhere out there.

Conclusion

This post focusses on one major aspect of VA scanning: grabbing banners and reporting on vulnerability based on the findings from the banner. This is better than nothing, but its futility is hopefully illustrated here, and this approach is core to most of what VA scanners do for us.

The market priority has always been towards unauthenticated scanning. Little focus was ever given to credentialed scanning. This has to change because the unauthenticated approach is like trying to diagnose a problem with your car without ever lifting the bonnet/hood, and moreover we could be moving into an era where accreditation bodies mandate credentialed scanning.

Netdelta – Install and Configure

Netdelta is a tool for monitoring networks and flagging alerts upon changes in advertised services. Now – I like Python and especially Django, and around 2014 or so I was asked to set up a facility for monitoring for changes in that organisation’s perimeter. After some considerable digging, I found nada, as in nothing, apart from a few half-baked student projects. So I went off and coded Netdelta, and the world has never been the same since.

I guess when I started with Netdelta I didn’t see it as a solution that would be widely popular, because I was under the impression you could just do some basic shell scripting with nmap, and ndiff is specifically designed for delta flagging. However, what became apparent at an early stage was that timeouts are a problem. A delta will be flagged when a host or service times out – and this happens a lot, even on a gigabit LAN, and it happens even more in public clouds. I built some analytics into Netdelta that look back over the scan history and make a call on the likelihood of a false positive (red, amber, green).

Most organisations I worked with would benefit from this. One classic example I can think of – a trading house that had been on an aggressive M&A spree, maybe it was Black Friday or…? Anyway – they fired some network engineers and hired some new and cheaper ones, exacerbating what was already a poorly managed perimeter scenario. The CISO wanted to know what was going on with these Internet-facing subnets – enter Netdelta. Unauthorised changes are a problem! I am directly aware of no fewer than 6 incidents that occurred as a result of exposed SSH, SMB (WannaCry), and more recently RDP, and indirectly aware of many more.

Anyway without further waffle, here’s how you get Netdelta up and running. Warning – there are a few moving parts, but if someone wants it in Docker, let me know.

I always go with Ubuntu. The differences between Linux distros are like the differences between mueslis. My build was on 18.04, but it’s highly likely the 19.x variants will be just fine.

apt-get update
apt-get install curl nmap apache2 python3 python3-pip python3-venv rabbitmq-server mysql-server libapache2-mod-wsgi-py3 git
apt-get -y install software-properties-common
add-apt-repository ppa:certbot/certbot
apt-get install -y python-certbot-apache

Clone the repository from GitHub into your <netdelta root>:

git clone https://github.com/SevenStones/netdelta.git

Filesystem

Create the user that will own <netdelta root>

useradd -s /bin/bash -d /home/<user> -m <user>

Create the directory that will host the Netdelta Django project if necessary

Add the user to a suitable group, and strip world permissions from netdelta directories

 groupadd <group>
 usermod -aG <group> <user>
 usermod -aG <group> www-data
 chown -R www-data:<group> /var/www
 chown -R <user>:<group> <netdelta root>

Make the logs dir, e.g. /logs, and you will need to modify /nd/netdelta_logger.py to point to this location. Note the celery monitor logs go to /var/log/celery/celery-monitor.log …which of course you can change.

Strip world permissions from all netdelta and apache root dirs:

chmod -Rv o-rwx <web root>
chmod -Rv o-rwx <netdelta root>

Virtualenv

The required packages are in requirements.txt, in the root of the git repo. Your virtualenv build with Python 3 goes approximately like this …

python3 -m venv /path/to/new/virtual/environment

You activate thusly: source </path/to/new/virtual/environment>/bin/activate

Then suck in the requirements as root, remembering to fix permissions after you do this.

pip3 install wheel
pip3 install -r <netdelta root>/requirements.txt

You can use whatever supported database you like. MySQL is assumed here.

The Python driver mysqlclient was used with earlier versions of Django and MySQL, but with Python 3 and later Django versions, the word on the street is PyMySQL is the way to go. With this though, it took some trickery to get the Django project up and running, in the form of the project’s __init__.py (<netdelta root>/netdelta/__init__.py) and adding a few lines …

import pymysql
pymysql.install_as_MySQLdb()  # lets Django's MySQL backend use PyMySQL as if it were MySQLdb

While in the virtualenv, and under your netdelta root, add a superuser for the DB.
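Django’s standard management command handles this:

python manage.py createsuperuser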

Patch libnmap

Two main mods to libnmap were necessary for use with Netdelta. First, with later versions of Celery (>3.1), there was an issue with “daemonic processes are not allowed to have children”, for which an alternative fork of libnmap fixed the problem. Then we needed to return to Netdelta the process ID of the running nmap port scanner process.

cd /opt
git clone https://github.com/pyoner/python-libnmap.git
cp ./python-libnmap/libnmap/process.py <virtualenv root>/lib/python3.x/site-packages/libnmap/

then patch libnmap to allow Netdelta to kill scanning processes

<netdelta_root>/scripts/fix-libnmap.bash

Change the environment variables to match your install and use the virtualenv name as a parameter

Database Setup

Create a database called netdelta and use whichever encoding and collation you like.

CREATE DATABASE netdelta CHARACTER SET utf8 COLLATE utf8_general_ci;

Then from <netdelta root> with virtualenv engaged:

python manage.py makemigrations nd
python manage.py migrate

Web Server

I am assuming all you good security pros don’t want to use the development server? Well, as you’re only dealing with port scan data then… your call. I’m assuming Apache as a production web server.

You will need to give Apache a stub web root and enable the wsgi module. For the latter I added this to apache2.conf – this gives you some control over the exact version of Python loaded.

LoadModule wsgi_module "/usr/lib/python3.7/site-packages/mod_wsgi/server/mod_wsgi-py37.cpython-37m-i386-linux-gnu.so"
WSGIPythonHome "/usr"

Under the DocumentRoot line in your apache config file, give the pointers for WSGI.

WSGIDaemonProcess <site> python-home=<virtualenv root> python-path=<netdelta root>
WSGIProcessGroup <site>
WSGIScriptAlias / <netdelta root>/netdelta/wsgi.py
Alias /static/ <netdelta root>/netdelta/

Note also you will need to adjust your wsgi.py under <netdelta root>/netdelta/ –

import site

# Add the site-packages of the chosen virtualenv to work with
site.addsitedir('<virtualenv root>/lib/python3.7/site-packages')

Celery

From the current shell, with the virtualenv activated, under <netdelta root>:

celery worker -E -A nd -n default -Q default --loglevel=info -B --logfile=<netdelta root>/logs/celery.log

Under systemd (you will almost certainly want to do this), run as the root user. The script pointed to by systemd for

systemctl start celery

can have all the environment checking (this isn’t intended to be a tutorial in BASH scripting), but the core of it…

cd <netdelta root>
nohup $VIRTUALENV_DIR/bin/celery worker -E -A nd -n ${SITE} -Q ${SITE} --loglevel=info -B --logfile=${SITE_LOGS}/celery.log >/dev/null 2>&1 &

And then you can put together your own scripts for status, stop, restart.

Fintechs and Security – Part Two

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Threat and Vulnerability Management (TVM) – Application Security

This part covers some high-level guiding points related to the design of the application security side of TVM (Threat and Vulnerability Management), and the more common pitfalls that plague lots of organisations, not just fintechs. I won’t be covering different tools in the SAST or DAST space, apart from one known-good. There are some decent SAST tools out there, but none really stand out. The market is ever-changing. When I ask vendors to explain what they mean by [new acronym], what usually results is nothing, or a blast of obfuscation. So I’m not here to talk about specific vendor offerings, especially as the SAST challenge is so hard to get even close to right.

With vulnerability management in general, ${VENDOR} has succeeded in fouling the waters by claiming to be able to automate vulnerability management. This is nonsense. Vulnerability assessment can to some limited degree be automated with decent results, but vulnerability management cannot be automated.

The vulnerability management cycle has also been made more complicated by GRC folk who will present a diagram representing a cycle with 100 steps, when really it’s just assess –> deduce risk –> treat risk –> GOTO 1. The process is endless, and in the beginning it will be painful, but if handled without redundant theory, acronyms-for-the-sake-of-acronyms-for-the-same-concept-that-already-has-lots-of-acronyms, rebadging older concepts with a new name to make them seem revolutionary, or other common obfuscation techniques, it can be integrated as an operational process fairly quickly.

The Dawn Of Application Security

If you go back to the heady days of the late 90s, application security was a thing, it just wasn’t called “application security”. It was called penetration testing. Around the early 2000s, firewall configurations improved to the extent that in a pen test, you would only “see” port 80 and/or 443 exposing a web service on Apache, Internet Information Server, or iPlanet (those were the days – buffer overflow nirvana). So with other attack channels being closed from the perimeter perspective, more scrutiny was given to web-based services.

Attackers realised you could subvert user input by intercepting it with a proxy, modifying some fields, perhaps injecting some SQL or HTML, and seeing output that perhaps you wouldn’t expect to see as part of the business goals of the online service.

At this point the “application security” world was formed and vulnerabilities were classified and given new names. The OWASP Top Ten was born, and the world has never been the same since.

SAST/DAST

More acronyms have been invented by ${VENDOR} since the early pre-Holocene days of appsec, supposedly representing “brand new” concepts such as SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing), which are the new equivalents of white box and black box testing respectively. The basic difference is about access to the source code: SAST is source code testing, while DAST is an approach that involves testing for OWASP-type vulnerabilities while the software is running and accepting client connection requests.

The SAST scene is one that has been adopted by fintechs in more recent times. If you go back 15 years, you would struggle to find any real commercial interest in doing SAST – so if anyone ever tells you they have “20” or even “10” years of SAST experience, suggest they improve their creativity skills. The general feeling, not unjustified, was that for a large, complex application, assessing thousands of lines of source code at a cost of roughly a vital organ per day couldn’t be justified.

SAST is more of a common requirement these days. Why is that? The rise of fintechs, whose business is solely about generation of applications, is one side of it, and fintechs can (and do) go bust if they suffer a breach. Also – ${VENDOR}s have responded to the changing Appsec landscape by offering “solutions”. To be fair, the offerings ARE better than 10 years ago, but it wouldn’t take much to better those Hello World scripts. No but seriously, SAST assessment tools are raved about by Gartner and other independent sources, and they ARE better than offerings from the Victorian era, but only in certain refined scenarios and with certain programming languages.

If it were possible to comprehensively assess lots of source code for vulnerability and get accurate results, then theoretically DAST would be harder to justify as a business undertaking. But as it is, SAST + DAST, despite the extensive resources required to do this effectively, can be justified in some cases. In other cases it can be perfectly fine to just go with DAST. It’s unlikely to ever be ok to just go with SAST, because of the scale of the task with complex apps.

Another point here – I see some fintechs using more than one SAST tool, from different vendors. There’s usually not much to gain from this. Some tools are better with some programming languages than others, but there is nothing cast in stone or any kind of majority view here. The cost of going with multiple vendors is likely to be harder and harder to justify as time goes on.

Does Automated Vulnerability Assessment Help?

The problem of appsec is still too complex for decent returns from automation. Anyone who has ever done any manual testing for issues such as XSS knows the vast myriad of ways in which such issues can manifest. The blackbox/blind/DAST scene is still not much more than Burp and Dirbuster, and even then it’s mostly still manual testing with proxies. Don’t expect to cover all OWASP Top 10 issues for a complex application that presents an admin plus a user interface, even in a two-week engagement with four analysts.

My preferred approach is still Fred Flintstone’y, but since the automation just isn’t there yet, maybe it’s the best approach? This needs to happen when an application is still in the conceptual whiteboard architecture design phase, not a fully grown [insert Hipster-given-name], and it goes something like this: whiteboard, application architect – zero in on the areas where data flows involve transactions with untrusted networks or users. Crypto/key management is another area to zoom in on.

Web Application Firewall

The best thing about WAFs is they only allow propagation of the most dangerous attacks. But seriously, a WAF can help, and in some respects, given the above-mentioned challenges of automating code testing, you need all the help you can get, but you need to spend time teaching the WAF about your expected URL patterns and tuning it – this can be costly. A “dumb” default-configured WAF can probably catch drive-by type attempts against publicly disclosed vulnerabilities, as long as you keep it updated. A lot depends on your risk profile, but note that you don’t need a security engineer to install a WAF and leave it in default config. Pretty much anyone can do this. You _do_ need an experienced security engineer or two to properly understand an application and configure a WAF accordingly.

Python and Ruby – Web Application Frameworks

Web application frameworks such as Ruby on Rails (RoR) and Django are in common usage in fintechs, and are at least in some cases developed with security in mind, in that they offer developers features that are on by default. For example, with Django, if you design an HTML form for user input, the server side will have been automagically configured with validation, depending on the model field type. So an email address will be validated both client- and server-side as an email address. Most OWASP issues are the result of failures to validate user input on the server side.
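A minimal sketch of that server-side behaviour (plain Django; runnable in a Django shell):

from django import forms

class SignupForm(forms.Form):
    email = forms.EmailField()   # validated server-side, whatever the client sends

form = SignupForm({'email': 'not-an-email'})
print(form.is_valid())           # False - the server-side check rejects it
print(form.errors.as_data())     # {'email': [ValidationError(...)]}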

Note also though that with Django you can still disable HTML tag filtering of user input with a “| safe” in the template. So it’s dangerous to assume that all user input is sanitised.

In Django Templates you will also see a CSRF token as a hidden form field if you include a Form object in your template.

The point here is – the root of all evil in appsec is missing server-side validation, and much of your server-side validation effort in development will be covered by default if you go with RoR or Django. That is not the end of the story though with appsec and Django/RoR apps. Vulnerability of the host OS and applications can be problematic, and it’s far from the case that use of either Django or RoR as a dev framework eliminates the need for DAST/SAST. However, the effort will be significantly reduced compared to the C/Java/PHP cases.

Wrap-up

Overall I don’t want to take too much time bleating about this topic, because the takeaway is clear – you CAN take steps to address application security assessment automation and include the testing as part of your CI/CD pipeline, but don’t expect to catch all vulnerabilities, or even half of what is likely an endless list.

Expect that you will be compromised and plan for it – this is cheaper than spending zillions (e.g. by going with multiple SAST tools, as I’ve seen plenty of times) on solving an unsolvable problem – just don’t let an incident result in a costly breach. This is the purpose of security architecture and engineering. It’s more about dealing with the consequences of an initial exploit of an application security fail than eliminating vulnerability.

The Art Of The Security Delta: NetDelta

In this article we’ll be answering the following questions:

  • What is NetDelta?
  • Why should I be monitoring for changes in my network(s)?
  • Challenges we faced along the way
  • Other ways of detecting deltas in networks
  • About NetDelta

What Is NetDelta?

NetDelta allows users to configure groups of IP addresses (by department, subnet, etc) and perform one-off or periodic port scans against the configured group, and have NetDelta send alerts when a change is detected.

There’s lots of shiny new stuff out there. APT-buster ™, Silver Bullet ™, etc. It’s almost as though someone sits in a room and literally looks for combinations of words that haven’t been used yet, and uses this as the driver for a new VC-sponsored effort. “Ok, ‘threat’ is the most commonly used buzzword currently. Has ‘threat’ been combined with ‘cyber’ and ‘buster’ yet?”, “No”, [hellllooow Benjamin]. The most positive spin we can place on this is that we got so excited about the future while ignoring the present.

These new products are seen as serving the modern-day needs of information security, as though the old challenges, going back to day 0 in this sector, or “1998”, have been nailed. Well, how about the old stalwart of Vulnerability Management? The products do not “manage” anything; they produce lists of vulnerability – this is “assessment”, not “management”. And the lists they produce are riddled with noise (false positives), and what’s worse is there’s a much bigger false negative problem, in that the tools do not cover whole swaths of corporate estates. It doesn’t sound like this is an area that is well served by open source or commercial offerings.

Why Do I Need To Monitor My Networks For Changes?

On the same theme of new products in infosec – how about firewalls (that’s almost as old as it gets)? Well we now have “next-gen” firewalls, but does that mean that old-gen firewalls were phased out, we solved the Network Access Control problem, and then moved on?

How about this: if there is a change in listening services – say, for example, in your perimeter DMZ (!) – and you didn’t authorise it, that cannot be a good thing. It’s one of:

  • Hacker/malware activity, e.g. hacker’s connection service (e.g. root shell), or
  • Unauthorised change, e.g. networks ops changed firewall or DMZ host configuration outside of change control
  • You imagined it – perhaps lack of sleep or too much caffeine

None of these can be good. They can only be bad. And do we have a way to detect such issues currently?

How does NetDelta help us solve these problems?

Users can configure scans either on a one-off basis, or to be run periodically. So for example, as a user I can tell NetDelta to scan my DMZ perimeter every night at 2 AM and alert me by email if something changed from the previous night’s scan (a sketch of such a schedule follows the list below):

  • Previously unseen host comes online in a subnet, either as an unauthorised addition to the group, or unauthorised (rogue) firewall change or new host deployment (maybe an unsanctioned wifi access point or webcam, for example) – these concerns are becoming more valid in the age of Internet of Things (IoT) where devices are shipped with open telnets and so on.
  • Host goes offline – this could be something of interest from a service availability/DoS point of view, as well as the dreaded ‘unauthorised change’.
  • Change in the available services – e.g. hacker’s exploit is successful and manages to locally open a shell on an unfiltered higher port, or new service turned on outside of change control. NetDelta will alert if services are added or removed on a target host.
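As a sketch only – the task and group names here are hypothetical, not NetDelta’s actual internals – a nightly 2 AM scan expressed as a Celery beat schedule looks something like:

from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'nightly-dmz-scan': {
        'task': 'nd.tasks.scan',             # hypothetical task name
        'schedule': crontab(hour=2, minute=0),
        'args': ('DMZ perimeter',),          # hypothetical scan group
    },
}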

Host ‘state’ is maintained by NetDelta for as long as the retention period allows, and overall 10 status codes reflect the state of a host over successive periodic scans.

Challenges We Faced With NetDelta

The biggest and only major obstacle is the ‘noise’ that results from scan timeouts. With some of the earlier test scans, we noticed that sporadic scan timeouts would occur frequently. This presented a problem (it’s sort of a false positive) in that a delta is alerted on, but really there hasn’t been a change in listening services or hosts. We increased the timeout options with nmap, but it didn’t help much and only added masses of time to the scans.

The aforementioned issue is one of the issues holding back the nmap ndiff shell script wrapper option, and ndiff also works with XML text files (messy). Shell scripts can work in corporate situations sometimes, but there are problems around the longevity and reliability of the solution. NetDelta is a web-based, database-driven solution (currently MySQL, but NoSQL is planned) with reports and statistics readily available, but the biggest problem with the ndiff option is the scan timeout issue mentioned in the previous paragraph.

NetDelta records host “up” and “down” states and allows the user to configure the number of consecutive scans a host must be down before being considered really down. So if the user chooses 3 as an option, and a target host is down for 3 consecutive scans, it is considered actually down, and a delta is flagged (see the sketch below).
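In illustrative pseudo-Python (not NetDelta’s actual code):

DOWN_THRESHOLD = 3  # user-configured consecutive 'down' scans

def update_down_state(consecutive_down, host_is_up):
    # Returns the new counter and whether a 'really down' delta fires
    if host_is_up:
        return 0, False                    # any 'up' sighting resets the count
    consecutive_down += 1
    # Fire exactly once, on the scan that reaches the threshold
    return consecutive_down, consecutive_down == DOWN_THRESHOLD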

Overall the ‘state’ of a host is recorded in the backend database, and the state is a code that reflects a change in the availability or existence of a host. NetDelta has a total of 10 status codes.

Are There Other Ways To Detect NetDeltas?

Remember that we’re covering network services here, i.e. the ‘visibility’ of network services, as they appear to hackers and your customers alike. This is not the same as local host configuration. I can run a netstat command locally to get a list of listening services, but this doesn’t tell me how well my firewall(s) protect me.

  • The ndiff option was covered already
  • Firewall management suites. At least one of these can flag changes in firewall rules, but it still doesn’t give the user the actual “real” view of services. Firewalls can be misconfigured, and they can do unexpected things under certain conditions. The port scanner view of a network is effectively the holy grail – it’s the absolute/real view that leaves no further room for interpretation and does not require further processing.
  • IDS – neither HIDS (host based intrusion detection) nor NIDS (network based intrusion detection) can give a good representation of the view.
  • SIEM – these systems take in logs from other sources, so partly by extrapolating from the previous comments, and also given the capabilities of SIEM itself, it would seem a challenge to ask a SIEM to do acrobatics in this area. First of all, SIEM is not a cheap solution, and this conversation only applies where the organisation already owns a SIEM and can afford the added log storage space and management overhead. And… of the SIEMs I know, none are sufficiently flexible to:
    • take in logs from a port scanning source – theoretically it’s possible if you could get nmap to speak rsyslogish, and I’m sure there’s some other acrobatics that are feasible
    • perform delta analysis on those logs and raise an alert

About NetDelta

NetDelta is a Python/Django-based project with a MySQL backend (we will migrate to MongoDB – watch this space). Currently at v1.0, you are most welcome to take part in a trial. We stand on the shoulders of giants:

  • nmap (https://nmap.org/)
  • Python (https://www.python.org/)
  • Django (https://www.djangoproject.com/)
  • Celery (http://www.celeryproject.org/)
  • RabbitMQ (https://www.rabbitmq.com/)
  • libnmap – a Python framework for nmap – (https://github.com/savon-noir/python-libnmap)

Contact us for more info!