Netdelta – Install and Configure

Netdelta is a tool for monitoring networks and flagging alerts upon changes in advertised services. Now – I like Python and especially Django, and around 2014 or so I was asked to set up a facility for monitoring changes in an organisation’s perimeter. After some considerable digging I found nada, as in nothing, apart from a few half-baked student projects. So I went off and coded Netdelta, and the world has never been the same since.

I guess when I started with Netdelta I didn’t see it as a solution that would be widely popular, because I was under the impression you could just do some basic shell scripting with nmap, and ndiff is specifically designed for delta flagging. However, what became apparent at an early stage was that timeouts are a problem. A delta will be flagged when a host or service times out – and this happens a lot, even on a gigabit LAN, and it happens even more in public clouds. I built some analytics into Netdelta that look back over the scan history and make a call on the likelihood of a false positive (red, amber, green).

Most organisations I worked with would benefit from this. One classic example I can think of – a trading house that had been on an aggressive M&A spree, maybe it was Black Friday or…? Anyway – they fired some network engineers and hired some new and cheaper ones, exacerbating what was already a poorly managed perimeter scenario. The CISO wanted to know what was going on with these Internet-facing subnets – enter Netdelta. Unauthorised changes are a problem! I am directly aware of no fewer than six incidents that occurred as a result of exposed SSH, SMB (WannaCry), and more recently RDP, and indirectly aware of many more.

Anyway, without further waffle, here’s how you get Netdelta up and running. Warning – there are a few moving parts, but if someone wants it in Docker, let me know.

I always go with Ubuntu. The differences between Linux distros are like the differences between mueslis. My build was on 18.04, but it’s highly likely the 19.x releases will be just fine.

apt-get update
apt-get install curl nmap apache2 python3 python3-pip python3-venv rabbitmq-server mysql-server libapache2-mod-wsgi-py3 git
apt-get -y install software-properties-common
add-apt-repository ppa:certbot/certbot
apt-get install -y python-certbot-apache
Clone the repository from GitHub into your <netdelta root>:
git clone https://github.com/SevenStones/netdelta.git

Filesystem

Create the user that will own <netdelta root>

useradd -s /bin/bash -d /home/<user> -m <user>

Create the directory that will host the Netdelta Django project if necessary
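If the git clone didn’t already create it, something like this will do (the path is just the <netdelta root> placeholder used throughout):

mkdir -p <netdelta root>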

Add the user to a suitable group, and strip world permissions from netdelta directories

groupadd <group>
usermod -aG <group> <user>
usermod -aG <group> www-data
chown -R www-data:<group> /var/www
chown -R <user>:<group> <netdelta root>

Make the logs dir, e.g. <netdelta root>/logs, and modify <netdelta root>/nd/netdelta_logger.py to point to this location. Note the Celery monitor logs go to /var/log/celery/celery-monitor.log, which of course you can change.
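For example, assuming the logs directory sits under the netdelta root (to match the Celery --logfile path used later) and that the same user and group should own both locations:

mkdir -p <netdelta root>/logs /var/log/celery
chown -R <user>:<group> <netdelta root>/logs /var/log/celery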

Strip world permissions from all netdelta and Apache root dirs:

chmod -Rv o-rwx <web root>
chmod -Rv o-rwx <netdelta root>

Virtualenv

The required packages are in requirements.txt, in the root of the git repo. Your virtualenv build with Python 3 goes approximately like this:

python3 -m venv /path/to/new/virtual/environment

You activate it thusly:

source /path/to/new/virtual/environment/bin/activate

Then suck in the requirements as root, remembering to fix permissions after you do this.

pip3 install wheel
pip3 install -r <netdelta root>/requirements.txt

You can use whatever supported database you like. MySQL is assumed here.

The Python driver mysqlclient was used with earlier versions of Django and MySQL, but with Python 3 and later Django versions, the word on the street is that PyMySQL is the way to go. With this, though, it took some trickery to get the Django project up and running, in the form of editing the project’s __init__.py (<netdelta root>/netdelta/__init__.py) and adding a few lines:

import pymysql 
pymysql.install_as_MySQLdb()

While in the virtualenv, and under your netdelta root, add a superuser for the Django app.
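Presumably this means the standard Django admin superuser, i.e.:

python manage.py createsuperuser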

Patch libnmap

Two main modifications to libnmap, as used with Netdelta, were necessary. First, with later versions of Celery (>3.1), there was an issue with “daemonic processes are not allowed to have children”, which an alternative fork of libnmap fixed. Second, we needed to return to Netdelta the process ID of the running nmap port scanner process.

cd /opt
git clone https://github.com/pyoner/python-libnmap.git
cp ./python-libnmap/libnmap/process.py <virtualenv root>/lib/python3.x/site-packages/libnmap/

Then patch libnmap to allow Netdelta to kill scanning processes:

<netdelta_root>/scripts/fix-libnmap.bash

Change the environment variables in the script to match your install, and pass the virtualenv name as a parameter.

Database Setup

Create a database called netdelta and use whichever encoding and collation you like.

CREATE DATABASE netdelta CHARACTER SET utf8 COLLATE utf8_general_ci;

Then from <netdelta root> with virtualenv engaged:

python manage.py makemigrations nd
python manage.py migrate

Web Server

I am assuming all you good security pros don’t want to use the development server? Well, as you’re only dealing with port scan data then… your call. I’m assuming Apache as a production web server.

You will need to give Apache a stub web root and enable the wsgi module. For the latter I added this to apache2.conf – this gives you some control over the exact version of Python loaded.

LoadModule wsgi_module "/usr/lib/python3.7/site-packages/mod_wsgi/server/mod_wsgi-py37.cpython-37m-i386-linux-gnu.so"
WSGIPythonHome "/usr"

Under the DocumentRoot line in your apache config file, give the pointers for WSGI.

WSGIDaemonProcess <site> python-home=<virtualenv root> python-path=<netdelta root>
WSGIProcessGroup <site>
WSGIScriptAlias / <netdelta root>/netdelta/wsgi.py
Alias /static/ <netdelta root>/netdelta/

Note also that you will need to adjust your wsgi.py under <netdelta root>/netdelta/:

# Add the site-packages of the chosen virtualenv to work with
import site
site.addsitedir('<virtualenv root>/lib/python3.7/site-packages')

Celery

From the current shell, with the virtualenv activated and under <netdelta root>:

celery worker -E -A nd -n default -Q default --loglevel=info -B --logfile=<netdelta root>/logs/celery.log

Under systemd (you will almost certainly want to do this), run it as the root user. The script pointed to by systemd for

systemctl start celery

can have all the environment checking you like (this isn’t intended to be a tutorial in Bash scripting), but the core of it is:

cd <netdelta root>
nohup $VIRTUALENV_DIR/bin/celery worker -E -A nd -n ${SITE} -Q ${SITE} --loglevel=info -B --logfile=${SITE_LOGS}/celery.log >/dev/null 2>&1 &

And then you can put together your own scripts for status, stop, restart.
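For completeness, a minimal sketch of a stop/status wrapper – it simply matches on the worker name given with -n above, which is my own shortcut rather than anything shipped with Netdelta:

#!/bin/bash
# crude stop/status helper for the celery worker named ${SITE}
case "$1" in
  status) pgrep -f "celery worker.*-n ${SITE}" > /dev/null && echo "worker running" || echo "worker not running" ;;
  stop)   pkill -TERM -f "celery worker.*-n ${SITE}" ;;
esac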

#WannaCry and The Rise and Fall of the Firewall

The now infamous WannaCry ransomware outbreak was the most widespread malware outbreak since the early 2000s. There was a very long gap between the early 2000s “worm” outbreaks (think Sasser, Blaster, etc.) and this latest 2017 WannaCry outbreak. The usage of the phrase “worm” was itself widespread, especially as it was included in CISSP exam syllabuses, but then it died out. Now it’s seeing a resurgence that started last weekend – but why? Why is the worm turning for the worm (I know – it’s bad – but it had to go in here somewhere)?

As far as WannaCry goes, there have been some interesting developments over the past few days – contrary to popular belief, it did not affect Windows XP; the most commonly affected version was Windows 7, and according to some experts, the leading suspect in the case is the Lazarus Group, with ties to North Korea.

But this post is not about WannaCry. I’m going to say it: I used WannaCry to get attention (and with this statement I’m already more honest than the numerous others who jumped on the WannaCry bandwagon, including our beloved $VENDOR). But I have been meaning to cover the rise and fall of the firewall for some time now, and this instance of a widespread and damaging worm, one that spreads by exploiting poor firewall configurations, brought it forward by a few months.

A worm is malware that “uses a computer network to spread itself, relying on security failures on the target computer”. Think of malware delivery and propagation as two different things: lots of malware since 2004 used email (think phishing) as a delivery mechanism but spread using an exploit once inside a private network. Worms use network propagation to both deliver and spread, and that is the key difference. WannaCry is without doubt a worm. There is no evidence to suggest WannaCry was delivered on the back of successful phishing attacks – as illustrated by the lack of WannaCry home user victims (who sit behind the protection of NAT’ing home routers). Most of the early WannaCry posts covered phishing anyway, mostly because of a refusal to believe that Server Message Block ports would ever be exposed to the public Internet.

The infosec sector is really only 20 years old in terms of the widespread adoption of security controls in larger organisations, so we have only just started to have a usable, relatable history in infosec. Firewalls are still, in 2017, the security control that delivers the most value for investment, and they’ve been around since day one. But in the past 20 years I have seen firewall configurations go through a spectacular rise in the early 2000s to a spectacular fall a decade later.

Late 90s Firewall

If we’re talking late 90s, even with some regional APAC banks, you would see huge swaths of open ports in port scan results. Indeed, a firewall to many late 90s organisations was as in the image to the left.

However – you can ask a firewall what it is, even a “Next Gen” firewall, and it will answer “I’m a firewall, I make decisions on accepting or rejecting packets based on source and destination addresses and services”. Next Gen firewall vendors tout the ability of firewalls to do layer 7 DPI stuff such as IDS, WAF, etc., but from what I am hearing, many organisations don’t use these features for one reason or another. Firewalls are quite a simple control to understand, and organisations got the whole firewall thing nailed quite early on in the game.

When we got to 2002 or so, you would scan a perimeter subnet and only see VPN and HTTP ports. Mind you, egress controls were still quite poor back then, and continue to be lacking to the present day, as is also the case with internal firewalls other than a DMZ (if there are any). 2002 was also the year when application security testing (OWASP type vulnerability testing) took off, and I doubt it would ever have evolved into a specialised area if organisations had not improved their firewalls. Ultimately organisations could improve their firewalls but they still had to expose web services to the planet. As Marcus Ranum said, when discussing the “ultimate firewall”, “You’ll notice there is a large hole sort of in the centre [of the ultimate firewall]. That represents TCP Port 80. Most firewalls have a big hole right about there, and so does mine.”

During testing engagements over the next decade, perimeter firewalls would be well configured in the majority of cases. But then we entered an “interesting” period. It started for me around 2012. I was conducting a vulnerability scan of a major private infrastructure facility in the UK…and “what the…”! RDP and SMB vulnerabilities! So the target organisation served a vital function in national infrastructure, and they exposed databases, SMB, and terminal services ports to the planet. In case there’s any doubt – that’s bad. And since 2012, firewall configs have fallen by the wayside.

WannaCry is delivered and spreads using an SMB vulnerability, much as Blaster and Sasser spread through exposed Windows services all those years ago. If we look at Shodan results for Internet exposure of SMB we find 1.5 million cases. That’s a lot.

So how did we get here? Well, there are no answers born out of questionnaires and research, but I have my suspicions:

  • All the talk of “Next Generation” firewalls and layer 7 has led to organisations taking their eye off the ball when it comes to layer 3 and 4.
  • All the talk of magic $VENDOR snake oil silver bullets in general has led organisations away from the basics. Think APT-Buster ™.
  • All the talk of outsourcing has led some organisations, as Dr Anton Chuvakin said, to outsource thinking.
  • Talk of “distortion” of the perimeter (as in “in this age of mobile workforces, where is our perimeter now?”). Well the perimeter is still the perimeter – the clue is in the name. The difference is now there are several perimeters. But WannaCry has reminded us that the old perimeter is still…yes – a perimeter.
  • There are even some who advocated losing the firewall as a control, but one of the rare success stories for infosec was the subsequent flaming of such opinions. BTW when was that post published? Yes – it was 2012.

So general guidelines:

  • The Internet is an ugly place with lots of BOTs and humans with bad intentions, along with those who don’t intend to be bad but just are (I bet there are lots of private org firewall logs which show connection attempts of WannaCry from other organisations).
  • Block incoming traffic on all ports other than those needed as a strict business requirement. Default-deny is the easiest way to achieve this (a minimal iptables illustration follows this list).
  • Workstations and mobile devices can happily block all incoming connections in most cases.
  • Egress is important – also discussed very eloquently by Dave Piscitello. It’s not all about ingress.
  • Other pitfalls with firewalls involve poor usage of NAT and those pesky network dudes who like to bypass inner DMZ firewalls with dual homing.
  • Watch out for connections to critical infrastructure such as databases from any internal subnet that hosts human-used devices. Those can be blocked in most cases.
  • Don’t focus on WannaCry. Don’t focus on Ransomware. Don’t focus on malware. Focus on Vulnerability Management.
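As referenced in the list above, here is a minimal iptables sketch of the default-deny idea – purely illustrative, with HTTPS standing in for whatever your strict business requirement happens to be:

# drop inbound by default; allow loopback, replies to established sessions, and HTTPS
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT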

So then perimeter firewall configurations, it seems, go through the same cycles that economies and seasonal temperature variations go through. When will the winter pass for firewall configurations?

Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one, but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with certain core skill groups whose abilities complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill”, or a product, or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP Tippingpoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 a skill, whereas the previous version is another skill? So if a person has experience with 7.5, they are not qualified to apply for a shop where the previous version is used? OK, this is taking it to the extreme, but I dare say there have been cases where this analogy is applicable.

How long does it take a person to get “skilled up” with HP Arcsight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again, is Symantec CCS a skill? No – it’s a product. It’s not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – then they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell before, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment I need to supply the tool with IP addresses of targets, and I need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And I don’t want to configure the same test every time I run it, so maybe this “templates” tab might be able to help me? Do I need a $4000 two-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A product such as a Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI. It has been tailored to be intuitive to use. It’s the thin layer on top of the Vulnerability Management solution. The solution itself is much bigger than this. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help too much with the bigger picture.

Vulnerability management as a practice has been around for decades. Then along came commercial products, which basically just slapped a GUI on processes and practices that had existed for 20+ years, after which the jobs market decided to call the product the solution. The product is the skill now, whereas it’s really vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself but it also is a skill that should be built-in by default: a skill that should be inherent with all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas that hold large volumes of intellectual capital. These are core technologies. Say a person has 5+ years’ experience of working in Unix environments as a system administrator and has shown interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic idea. HP uses “flex connectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders” and so on. But guess what? I understand English, but if I don’t know what the words mean, I can check in an online dictionary. I know what a SIEM is designed to do and I get the data flows and architecture concept. Network aggregation of syslog and Windows Events is not an alien concept to me, and neither are all layers of the TCP/IP stack (a really basic requirement for all Analysts – or it should be). Therefore I can adapt very easily to new tools in this space.

IPS/IDS and firewalls? Well, they’re not even very functional devices. If you have ever set up Snort or iptables you’ll be fine with whatever product is out there. Recently another consultant and I were asked to configure a Tippingpoint device. We were up and running in 10 minutes. There were a few small items that we needed to check against the product documentation. I have 15+ years’ experience in the field but the other guy is new. Nonetheless he had configured another IPS product before. He was immediately up and running with the product – no problem. Of course, what to configure in the rule base – that is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security – it’s not specific to a product.

I’ve seen some truly crazy job specifications. One I saw was Websense Specialist!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would be filled by a Websense “Olympian” probably. But what sort of job is that? Carpe diem my friends, carpe diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, I don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resource and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire human resource able to cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.

Migrating South: The Devolution Of Security From Security

Devolution might seem a strong word to use. In this article I will be discussing the pros and cons of the migration of some of the more technical elements of information security to IT operations teams.

By the dictionary definition of the word, “devolution” implies a downgrade of security – but suffice it to say my point does not even remotely imply that operations teams are subordinate to security. In fact in many cases, security has been marginalized such that a security manager (if such a function even exists) reports to a CIO, or some other managerial entity within IT operations. Whether this is right or wrong is subjective, and also not the subject here.

Of course there are other department names that have metamorphosed out of the primordial soup …”Security Operations” or SecOps, DevOps, SecDev, SecOpsDev, SecOpsOps, DevSecOps, SecSecOps and so on. The discussion here is really about security knowledge, and the intellectual capital that needs to exist in a large-sized organisation. Where this intellectual capital resides doesn’t bother me at all – the name on the sign is irrelevant. Terms such as Security and Operations are the more traditional labels on the boxes and no, this is not something “from the 90s”. These two names are probably the more common names in business usage these days, and so these are the references I will use.

Examples of functions that have already been, and continue to be, farmed out to Ops are functions such as Vulnerability Management, SIEM, firewalls, IDS/IPS, and Identity Management. In detail…which aspects of these functions are teflonned (non-stick) off? How about all of them? All aspects of the implementation project, including management, are handled by ops teams. And then in production, ops will handle all aspects of monitoring, problem resolution, incident handling…ad infinitum.

A further pre-qualification is about ideal and actual security skills that are commonly present. Make no mistake…in some cases a shift of tech functions to ops teams will actually result in improvements, but this is only because the self-constructed mandate of the security department is completely non-tech, and some tech at a moderate cost will usually be better than zero tech, checklists, and so on.

We need to talk about typical ops skills. Of course there will be occasional operations team members who are well versed in security matters, and also have a handle on the business aspects, but this is extra-curricular and rare. Ops team members are usually system administrators. If we take Unix security as an example, they will be familiar with at least filesystem permissions and umask settings, so there is a level of security knowledge. Cisco engineers will have a concept of SNMP community strings and ACLs. Oracle DBAs will know about profiles and roles.

But is the typical security portfolio of system administrators wide enough to form the foundations of an effective information security program? Not really. In fact it’s some way short. Security Analysts need to have a grasp not only of, for example, file system permissions; they also need to know how attackers actually elevate privileges and compromise, for example, a critical database host. They need to know attack vectors and how to defend against them. This kind of knowledge isn’t a typical component of a system administrator’s training schedule. It’s one thing to know the effect of a world-write permission bit on a directory, but what is the actual security impact? With some directories this can be relatively harmless; with others, it can present considerable business risk.

The stance from ops will be to patch and protect. While this is [sometimes] better than nothing, there are other ways to compromise servers, other than exploiting known vulnerabilities. There are zero days (i.e. undeclared vulnerabilities for which no patch has been released), and also means of installing back doors and trojans that do not involve exploiting local bugs.

So without the kind of knowledge I have been discussing, how would ops handle a case where a project team blocks the install of a patch because it breaks some aspect of their business-critical application? In most cases they will just agree to not install the patch. In considering the business risk, several variables come into play: network architecture, the qualitative technical risk to the host, the value of information assets…and also, is there a work-around? Is a work-around or compromise even worth the time and effort? Do the developers need to re-work their app at a cost of $15000?

A lack of security input in network operations leads to cases where over-redundancy is deployed. Literally every switch and router will have a hot-swap spare. So take the budget for a core network infrastructure and just double it – in most cases this is excessive expenditure.

With firewall rules, ops teams have a concept of blocking incoming connections, but it’s not unusual that egress will be overlooked, with all the “bad netizen”, malware / private data harvesting, reverse telnet implications. Do we really want our corporate domain name being blacklisted?

Another common symptom of a devolved security model is the excessive usage of automated scanners in vulnerability assessment, without any awareness of the shortcomings of this family of products. The result is to “just run a scanner against it” for critical production servers and miss the kind of LHF (Low Hanging Fruit) false negatives that bad guys and malware writers just love to see.

The results of devolution will be many and varied in nature. What I have described here is only a small sampling. Whatever department is responsible for security analysis is irrelevant, but the knowledge has to be there. I cover this topic more thoroughly in Chapter 5 of Security De-engineering, with more details on the utopic skills in Chapter 11.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 2)

In the first part of my coverage on firewalls I mentioned the usefulness of firewalls, and apart from being one of the few commercial offerings to actually deliver in security, the firewall really does do a great deal for our information security posture when it’s configured well.

Some in the field have advocated that the firewall has seen its day and it’s time for the knacker’s yard, but these opinions are borne from a considerable distance from the coal face in this business. Firewalls, when seen as something as in the movies – “breaking through”, “punching through” the firewall – can be seen as useless when bad folk have compromised networks seemingly effortlessly. One doesn’t “break through” a firewall. Your profile is assessed. If you fit a certain profile you are allowed through. If not, you absolutely shall not pass.

There have been counters to these arguments in support of firewalls, but the extent of the efficacy of well-configured firewalls has only been covered with some distance from the nuts and bolts, and so is not fully appreciated. What about segmentation for example? Are there any other security controls and products that can undisputedly be linked with cost savings? Segmentation allows us to devote more resources to more critical subnets, rather than blanket measures across a whole network. As a contractor with a logistics multinational in Prague, I was questioned a few times as to why I was testing all internal Linux resources, on a standard issue UK contract rate. The answer? Because they had a flat, wide open internal network with only hot swap redundant firewalls on the perimeter. Regional offices connecting into the data centre had frequent malware problems with routable access to critical infrastructure.

Back in the late 90s and early noughties, some service providers offered a firewall assessment service, but the engagements lacked focus and direction, and then this service disappeared altogether…partly because of the lack of thought that went into preparation and also because many in the market really did believe they had nailed firewall configuration. These engagements were delivered in a way that was something like “why do you leave these ports open?”, “because this application X needs those ports open”…and that would be the end of that, because the service providers didn’t know application X, or where its IT assets were located, or the business importance of application X. Thirty minutes into the engagement there were already “why are we here?” faces in the room.

As a roaming consultant, I would always ask to see firewall configurations as part of a wider engagement – usually an architecture workshop whiteboard session, or larger scale risk assessment. Under this guise, there is license to use firewall rulebases to tell us a great deal about the organisation, rather than querying each micro-issue.

Firewall rulebases reveal a large part of the true “face” of an organisation. Political divisions are revealed, along with the old classic: opening social networks, betting sites (and such-like) only for senior management subnets, and oftentimes some interesting ports are opened only for managers’ secretaries.

Nine times out of ten, when you ask to see firewall rules, faces will change in the room from “this is a nice time-wasting meeting, but maybe I’ll learn something about security” to mild-to-severe discomfort. Discomfort – because there is no hiding place any more. Network and IT ops will often be aware that there are some shortcomings, but if we don’t see their firewall rules, they can hide and deflect the conversation in subtle ways. Firewall rulebases reveal all manner of architectural and application-related issues.

To illustrate some firewall configuration and data flow/architectural issues, here are some examples of common issues:

– Internal private resources 1-to-1 NAT’d to public IP addresses: an internal device with a private RFC 1918 address (something like 10. or 192.168. …) has been allocated a public IP address that is routable from the public Internet and clearly “visible” on the perimeter. Why is this a problem? If this device is compromised, the attacker has compromised an internal device and therefore has access to the internal network. What they “see” (can port scan) from there depends on internal network segmentation, but if they upload and run their own tools and warez on the compromised device, it won’t take long to learn a great deal about the internal network make-up. This NAT’ing problem would be a severe problem for most businesses.

– A listening service was phased out, but the firewall still considers the port to be open: this is a problem, the severity of which is usually quite high, but just like everything else in security, it depends on a lot of factors. Usually, even in default configurations, firewalls “silently drop” packets when they are denied. So there is no answer to a TCP SYN request from a port scanner trying to fire up some small talk of a long winter evening. However, when there is no TCP service listening on a higher port (for example) but the firewall also doesn’t block access to this port, there will be a quick response to the effect of “I don’t want to talk, I don’t know how to answer you, or maybe you’re just too boring” – this is bad, but at least there’s a response. Let’s say port 10000 TCP was left unfiltered. A port scanner like nmap will report other ports as “filtered” but 10000 as “closed” (see the illustrative scan after this list). “Closed” sounds bad, but the attacker’s eyes light up when seeing this…because they have a port with which to bind their shell – a port that will be accessible remotely. If all ports other than listening services are filtered, this presents a problem for the attacker and slows them down, and that is what we’re trying to achieve ultimately.

– Dual-homed issues: sometimes you will see internal firewalls with rules for source addresses that look out of place. For example, most of the rules are defined with 10.30.x.x and then in amidst them you see a 172.16.x.x. Uh oh. Turns out this is a source address for a dual-homed host: one NIC has an address for a subnet on one side of a firewall, plus one other NIC on the other side of the firewall. So effectively the dual-homed device is bypassing firewall controls. If this device is compromised, the firewall is rendered ineffective. Nine times out of ten, this dual homing is only set up as a shortcut for admins to make their lives easier. I did see this once for a DMZ, where the internal network NIC address was on the same subnet as a critical Oracle database.

– VPN gateways in inappropriate places: VPN services should usually be listening on a perimeter firewall. This enables firewalls to control what a VPN user can “see” and cannot see once they are authenticated. Generally, the resources made available to remote users should be in a VPN DMZ – at least give it some consideration. It is surprising (or perhaps not) how often you will see VPN services on internal network devices. So on firewalls such as the inner firewall of a DMZ, you will see classic VPN TCP services permitted to pass inbound! So the VPN client authenticates and then has direct access to the internal network – a nice encrypted tunnel for syphoning off sensitive data.
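To make the “filtered” versus “closed” distinction above a little more concrete, here is an illustrative scan – the target and the port selection are hypothetical:

nmap -sS -p 80,443,10000 <target IP>
# a well-filtered perimeter reports non-listening ports as "filtered" (no response at all);
# a gap in the rulebase reports them as "closed" (the host answered with a RST), telling the
# attacker that the port is reachable end-to-end and usable for binding a shell later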

Outbound Rules

Outbound filtering is often ignored, usually because the business is unaware of the nature of attacks and technical risks. Inbound filtering is usually quite decent, but it’s still the case as of 2012 that many businesses do not filter any outbound traffic – as in none whatsoever. There are several major concerns when it comes to egress considerations:

– Good netizen: if there is no outbound filtering, your site can be broadcasting all kinds of traffic to all networks everywhere. Sometimes there is nothing malicious in this…it’s just seen as incompetence by others. But then of course there is the possibility of internal staff hacking other sites, or your site can be used as a base from which to launch other attacks – with a source IP address registered under your organisation’s ownership – and this is no small matter.

– Your own firewall can be DoS’d: border firewalls NAT outgoing traffic, with address translation from private to public space. With some malware outbreaks that involve a lot of traffic generation, the NAT pool can fill quickly and the firewall’s NAT’ing can fail to service legitimate requests. This wouldn’t happen if these packets were just dropped.

– It will be an essential function of most malware and manual attacks to be able to dial home once “inside” the target – for botnets for example, this is essential. Plus, some publicly available exploits initiate outbound connections rather than fire up listening shells.

Generally, as with ingress, take the standard approach: start with deny-all, then figure out which internal DNS and SMTP servers need to talk to which external devices, and take the same approach with other services. Needless to say, this has to be backed by corporate security standards, and made into a living process.
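A hedged sketch of that approach in iptables terms on the border firewall – the addresses are placeholders for your internal DNS forwarder and mail relay, not anything from a real config:

# default-deny for forwarded traffic, then allow replies to established sessions
iptables -P FORWARD DROP
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# only the internal DNS forwarder may query external resolvers (UDP and TCP 53)
iptables -A FORWARD -s <internal DNS server IP> -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -s <internal DNS server IP> -p tcp --dport 53 -j ACCEPT
# only the internal mail relay may deliver to external SMTP services
iptables -A FORWARD -s <internal mail relay IP> -p tcp --dport 25 -j ACCEPT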

Some specifics on egress:

– NetBIOS broadcasts reveal a great deal about internal resources – block them. In fact, for any type of broadcast – what possible reason can there be for allowing them outside your network? There are other legacy protocols which broadcast nice information for interested parties – Cisco Discovery Protocol for example.

– Related to the previous point: be as specific as possible with subnet masks. Make these as “micro” as possible.

– There is a general principle around proxies for web access and other services. The proxy is the only device that needs access to the Internet, others can be blocked.

– DNS: Usually there will be an internal DNS server in private space which forwards queries to a public Internet DNS service. Make sure the DNS server is the only device “allowed out”. Direct connections from other devices to public Internet services should be blocked.

– SMTP: Access to mail services is important for many malware variants, or there is mail client functionality in the malware. Internal mail servers should be the only devices permitted to connect to external SMTP services.

As a final note, for those wishing to find more detail, the book I mentioned in part 1 of this diatribe, “Building Internet Firewalls” illustrates some different ways to set up services such as FTP and mail, and explains very well the principles of segregated subnets and DMZs.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 1)

In my previous article I covered OS and database security in terms of the neglect shown to this area by the information security industry. In the same vein I now take a look at another blast from the past – firewalls. The buzz topics these days are cloud, big data, APT, “cyber” and BYOD. Firewall was a buzz topic a very long time ago, but the fact that we moved on from that buzz topic doesn’t mean we nailed it. And guess what? The newer buzz topics all depend heavily on the older ones. There is no cloud security without properly configured firewalls (and moving assets off-campus means even more thought has to be put into this area), and there shouldn’t be any BYOD if there are no firewalls between workstation subnets and critical infrastructure. Good OS/DB security, plus thoughtful firewall configs, set the stage on which the new short-sighted strategies are played out and retrenched.

We have a lot of bleeding edge software and hardware products in security backed by fierce marketing engines which set unrealistic expectations, advertised with 5-gold-star ratings in infosec publications, coincidentally next to a full page ad for the vendor. Out of all these products, the oldest delivers the biggest bang for our buck – the firewall. In fact the firewall is one of the few that actually gives us what we expect to get – network access control – and by and large, as a technology it’s mature and it works. At least when we buy a firewall looking for packet filtering, we get packet filtering, unlike another example where we buy a product which allegedly manages vulnerability, but doesn’t even detect vulnerability, let alone “manage” it.

Passwords, crypto, filesystem permissions – these are old concepts. The firewall arrived on the scene some considerable number of years after the aforementioned, but before some of the more recent marketing ideas such as IdM, SIEM, UTM etc. The firewall, along with anti-virus, formed the basis of the earliest corporate information security strategies.

Given the nature of TCP/IP, the next step on from this creation was quite an intuitive one to take. Network access control – not a bad idea! But the fact that firewalls have been around corporate networks for two decades doesn’t mean we have perfected our approach to configuration and deployment of firewalls – far from it.

What this article is not..

“I’m a firewall, I decide which packets are dropped or passed based on source and destination addresses and services”.

Let’s be clear, this article is not about which firewall is the best. New firewall, new muesli. How does one muesli differ from another? By the definition of muesli, not much, or it’s not muesli any more.

Some firewalls have exotic features – even going back 10 years, Checkpoint Firewall-1 had application layer trackers such as FTP passive mode trackers, earlier versions of which crashed the firewall if enabled – thereby introducing DoS as an innovative add-on. In most cases firewalls need to be able to track conversations and deny/pass packets based on unqualified TCP flags (for example) – but these days they all do this. Firewalls are not so CPU-intensive, but they can be memory-intensive if conversations are being monitored and we’re being DoS’d – although being a firewall doesn’t make a node uniquely vulnerable to SYN floods and so on. The list of considerations in firewall design goes on and on, but by 2012 we have covered off most of the more important ones, and you will find the must-haves and the most useful features in any modern commercial firewall…although I wouldn’t be so sure that this covers some of the UTM all-in-one matchbox-sized offerings.

Matters such as throughput and bandwidth are matters for network ops in reality. Our concern in security should be more about configuration and placement.

On the matter of which firewall to use, we can go back to the basic tenet of a firewall as in the first paragraph of this subsection – sometimes it is perfectly fine to cobble together an old PC, install Linux on it, and use iptables – but probably not for a perimeter choke point firewall that has to handle considerable throughput. Likewise, do you want the latest bright flashing lights, bridge-of-the-Starship-Enterprise enterprise box for the firewall which separates a 10-node development subnet from the commercial business production subnets? Again, probably not – let’s just keep an open mind. Sometimes cheap does what we need. I didn’t mention the term “open source” here because it does tend to evoke quite emotional responses – OK, well I did mention it actually, sorry, just couldn’t help myself there. There are the usual issues with open source such as lack of support, but apart from bandwidth, open source is absolutely fine in many cases.

Are firewalls still important?

All attack efforts will be successful given sufficient resources. What we need to do is slow down these efforts such that the resources required outweigh the potential gains from owning the network. Effective firewall configuration helps a great deal in this respect. I still meet analysts who underestimate the effect of a firewall on the security posture.

Taking the classic segregated subnet as in a DMZ type configuration: by now most of us are aware at least that a DMZ is in most cases advisable, and most analysts can draw a DMZ network diagram on a whiteboard. But why a DMZ? Chiefly we do this to prevent direct connections from untrusted networks to our most valuable information assets. When an outsider port scans us, we want them to “see” only the services we intend the outside world to see, which usually will be the regular candidates: HTTPS, VPN, etc. So the external firewall blocks access to all services apart from those required, and more importantly, it only allows access to very specific DMZ hosts; certainly no internal addresses should be directly accessible.

Take the classic example of a DMZ web server application that connects to an internal database. Using firewalls and sensible OS and database configuration, we can create a situation where we add considerable time to an attack effort aimed at compromising the database. Having compromised the DMZ webserver, port scanning should then reveal only one or two services on the internal database server, and no other IP addresses need to be visible (usually). The internal firewall limits access from the source address of the DMZ webserver to only the listening database service and the IP address of the destination database server. This is a considerably more challenging situation for attackers, as compared with a scenario where the internal private IP space is fully accessible…perhaps one where DMZ servers are not at all segregated and their “real” IP addresses are private RFC 1918 addresses, NAT’d to public Internet addresses to make them routable for clients.
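An iptables-flavoured sketch of that internal firewall rule, with placeholder addresses and a placeholder database port (3306, 1521, or whatever the database in question listens on):

# only the DMZ web server may reach the database service; everything else is dropped
iptables -P FORWARD DROP
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s <DMZ web server IP> -d <database server IP> -p tcp --dport <database port> -j ACCEPT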

Firewalls are not a panacea, especially with so many zero days in circulation, but in an era where even automated attacks can lead to our most financially critical assets disappearing via the upstream link, they can, and regularly do, make all the difference.

We All “Get” Firewalls…right?

There is no judgment being passed here, but it often is the case that security departments don’t have much to offer when it comes to firewall configuration and placement. Network and IT operations teams will try perhaps a couple of times to get some direction with firewalls, but usually what comes back is a check list of “best practices” and “deny all services that are not needed”, some will even take the extraordinary measure of reminding their colleagues about the default-deny, “catch all” rule. But very few security departments will get more involved than this.

IT and network ops teams, by the year 2012 AD, are quite well versed in the wily ways of the firewall, and without any further guidance they will do a reasonable job of firewall configuration – but nine times out of ten there will be shortcomings. Ops peeps are rarely schooled in the art of technical risks. It’s not part of their training. If they do understand the tech risk aspects of network access control, it will have been self-taught. Even if they have attended a course by a vendor, the course will cover the usage aspects, as in navigating GUIs and so on, and little of any significance to keeping bad guys out.

Ops teams generally configure fairly robust ingress filtering, but rarely is there any attention given to egress (more on that in part 2 of this offering), or to the importance of other aspects such as whether services are UDP or TCP (with the result that one or the other is left open needlessly).

Generally, up to now, there are still some gaps and areas where businesses fall short in their configuration efforts, whereas I am convinced that in many cases attention moved away from firewalls many years ago – as if it’s an area that we have aced and so we can move on to other things.

So where next?

I would like to bring this diatribe to a close for now, until part 2. In the interim I would also like to point budding, enthusiastic analysts, SMEs, Senior *, and Evangelists in the direction of some rather nice reads. Try out TCP/IP Illustrated, at least Volume 1. Then O’Reilly’s “Building Internet Firewalls”. The latter covers the ins and outs of network architecture and how to firewall specific commonly used application layer protocols. This is a good starting point. Also, try some hands-on demo work (sorry – this involves using command shells) with iptables – you’ll love it (I swear by this), and pay some attention to packet logging.
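On the packet logging point, a minimal iptables taster – the rate limit and log prefix are my own choices, not gospel:

# log (rate-limited) and then drop anything that didn't match an earlier ACCEPT rule
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables dropped: "
iptables -A INPUT -j DROP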

In Part 2 I will go over some of my experiences as a consultant with a roaming disposition, related to firewall configuration analysis, and I will cover some guiders related to classic misconfigurations – some of which may not be so obvious to the reader.