On Hiring For DevSecOps

Based on personal experience and second-hand reports, there's still some confusion out there that results in lots of wasted time for job seekers, hiring organisations, and recruitment agents.

There is a tendency to blame recruiters for any hiring difficulties, but we need to stop that. Some try to do the right thing but are limited by a lack of sector experience. Others appear to have been inspired by The Wolf Of Wall Street while trying to sound like Simon Cowell.

Is it on the hiring organisation? Well, yes, but let's take responsibility for the problem as a sector for a change. Infosec likes to shift responsibility rather than take ownership. We blame CEOs, users, vendors, recruiters, dogs, cats, "Russia", "China" – anyone but ourselves. Could it be that we failed as a sector to raise awareness, both internally and externally?

So What Are Common Understandings Of Security Roles?

After 25+ years we still don't have universally accepted role descriptions, but at least we can say that some patterns are emerging. Security roles involve looking at risk holistically, and sometimes advising on how to deal with risk:

  • Security Engineers assess risk, design controls, and sometimes also implement them. BTW some sectors, legal in particular, still struggle with this. Someone who installs security products is in an IT ops role. Someone who upgrades and maintains a firewall is also in an IT ops role. The fact that a firewall is a security control doesn't make either of these a security engineering function.
  • Security Architects take risk and compliance goals into account when they formulate requirements for engineers.
  • Security Analysts are usually level 2 SOC analysts, who make risk assessments in response to an alert or vulnerability, and act accordingly.

This subject evokes as much emotion as CISSP. There are lots of opinions out there. We owe it to ourselves to be objective. There are plenty of sources of information on these role definitions.

No Aspect Of Risk Assessment != Security. This Is DevOps.

If there is no aspect of risk involved with a role, you shouldn't be looking for a security professional. You are looking for DEVOPS peeps. Not security peeps.

If you want a resource to install and configure tools in cloud – that is DEVOPS. It is not DevSecOps. It is not Security Engineering or Architecture. It is not Landscape Architecture or Accounting. It is not Professional Dog Walker. It is DEVOPS. And you should hire a DEVOPS person. If you want a resource to install and configure appsec tools for CI/CD – that is also DEVOPS. If you want a resource to advise on findings from appsec tools, that is a Security Analyst; if you want a resource to address those findings, that is DEVSECOPS. In the latter case you can hire a security bod with coding experience – they do exist.

Ok Then So What Does A DevSecOps Beast Look Like?

DevSecOps peeps have an attack mindset from their time served in appsec/pen testing, and are able to take on board the holistic view of risk across multiple technologies. They are also coders, and can easily adapt to and learn multiple different devops tools. This is not a role for newly graduated peeps.

Doing Security With Non-Security Professionals Is At Best Highly Expensive

Another important point: what usually happens because of the skills gap in infosec:

  • Cloud: devops fills the gap.
  • On-premise: Network Engineers fill the gap.

Why doesn't this work? I've met lots of folk who wear the aforementioned badges. Lots of them understand what security controls are for. Lots of them understand what XSS is. But what none of them understand is risk. That only comes from having an attack mindset. The result is usually overspend – every security control ever conceived by humans gets deployed, while the infrastructure remains full of holes (e.g. a default-install IDS or WAF is generally fairly useless and comes with a high price tag).

Vulnerability assessment is heavily impacted by not engaging security peeps. Devops peeps can deploy code testing tools and interpret the output. But a lack of a holistic view or an attack mindset will result in either no response to a vulnerability, or an excessive one. Basically, the Threat and Vulnerability Management capability is broken under these circumstances – a sadly very common scenario.

SIEM/Logging is heavily impacted – what will happen is either nothing (default logging – "we have Stackdriver, we're ok"), or a SIEM tool will be provisioned which becomes a black hole for events and budgets alike. All possible events get configured from every log source – not so great. No custom use cases are developed. The capability costs zillions while failing to alert when something bad is going down.

Identity Management is not a case of deploying ForgeRock (please know what you're getting into with this – it's a fork of Sun Microsystems/Oracle's identity management suite) or Azure AD and declaring the job done. If you just deploy a tool with no thought for the problem you're trying to solve in identity management, you will be fired.

One of the classic risk problems that emerges when no security input is taken: "there is no personally identifiable information in development Virtual Private Clouds, so there is no need for security controls". Well – development environments still leak intelligence, such as database schemas, and attackers love this. And don't you want your code to be safe and available?

You see a pattern here. It's all or nothing, and either way it ends up being very expensive, or worse. But come to think of it, expensive is the goal in some cases. Hold that thought, maybe.

A Final Word

So – if the word risk doesn't appear anywhere in the job description, it has nothing to do with security; you are looking for devops peeps in this case. And remember – security is an important consideration for cloud migrations.


Exorcising Dark Reading’s Cloud Demons

Dark Reading recently covered what it says are Cloud "blind spots". Really, there is some considerable FUD here. Is there anything unique to cloud in the issues called out?

I'm neither pro- nor anti-cloud. It is "just another organisation's computer platform" as they say, and whereas this phrase was coined as a warning about cloud, it also serves as a reminder that the differences with on-premise aren't as great as many would have you believe. In some areas cloud makes things easier, and it's a chance to go green field and fix the architecture. Some fairly new concepts such as "immutable architecture" will change security architecture radically, but there is nothing carved in stone which says these can only be implemented in cloud. In case it needed calling out – you can implement microservices in your data centre if microservices are what does it for you!

Migrating to cloud – there's a lot to talk about as regards infosec, and I can't attempt to cover it comprehensively here. But here are some points based on what seem to be popular misconceptions:

  • Remember you will never get access to the Cloud Service Provider's (CSP) hypervisors. Your hardware is in their hands. Check your SLAs and contract terms.
  • SaaS – in many cases it can be bad to hand over your operating systems to CSPs, and not just from a security perspective. In the case of on-premise, it was deemed a good business choice to have skilled staff administer complex resources that present many configuration options – options that can be used and abused for fun and profit. So why does moving to cloud change that?
  • SaaS and SIEM/VA: remember this now means you lose most of your Vulnerability Assessment coverage. And SaaS SIEM is getting more popular. Given the critical nature of a SIEM manager, with trust relationships across the board, personally I want access to the OS of such a critical device, but that's just me.

So picking out the areas covered briefly….

  • Multi-cloud purchasing – "The problem for security professionals is that security models and controls vary widely across providers" – if it only takes a few weeks to move from on-premise to cloud, then switching between, for example, Azure, Google, or AWS is no great leap; anyone struggling with that was struggling with on-premise too. Sorry, but there's no "ignorance" here. A machine is now a VM but it still speaks TCP/IP, and a firewall is now not something in the basement… OK, I'll leave it there.
  • Hybrid Architecture – tracking assets is easier in cloud than it is on-premise. If they find it hard to track assets in cloud, they certainly find it hard on-premise. Same for “monitoring activity”.
  • Likewise with Cloud Misconfiguration – they are also finding it hard on-premise if they struggle with Cloud.
  • Business Managed IT – not a cloud specific problem.
  • Containers – "new platforms like Kubernetes are introducing new classes of misconfigurations and vulnerabilities to cloud environments faster than security teams can even wrap their arms around how container technology works." – WHAT!!? So does the world hold back on these devops initiatives while infosec plays catch-up? There is "devsecops", which is supposed to be the area of security specialising in devops. If they also struggle, then what does that say about security? I have to say that on a recent banking project, most of the security team would certainly not know what Docker is. This is not a problem with Cloud, it's a problem with security professionals coming into the field with the wrong background.
  • Dark Data – now you’re just taking the proverbial.
  • Forensics and Threat-Hunting Telemetry – show yourself, Satan! Don't lurk in the shadows! Ignore the fancy title – this is about logging. "Not only do organizations struggle to get the right information fed from all of their different cloud resources" – this is a logging problem, not a cloud problem. Even SaaS SIEM solutions do not present customers with issues getting hold of log data.

What happened, Dark Reading? You used to be good. Why dost thou vex us so? Cloud is just someone else's computer, and just someone else's network – there are a few unique challenges with cloud migrations, but none of those are mentioned here.


“Cybersecurity Is About To Explode” – But in What Way?

I recently had the fortune to stumble across an interesting article: http://thetechnews.com/2019/08/17/cybersecurity-is-about-to-explode-heres-why/

The article was probably aimed at generating revenue for the likes of ISC2 (CISSP exam revenue) and so on, but I am open-minded to the possibility that it was genuinely aimed at helping the sector. It is, however, hopelessly misleading. I would hate to think that this article was the thing that led a budding security wannabe to finally sign up. Certainly a more realistic outlook is needed.

Some comments on some of the points in said article:

  • "exciting headlines about data breaches" – exciting for whom? The victims?
  • "organizations have more resources to fight back" – no, they don't. They spend lots but still cannot fight back.
  • "It's become big enough that thought leaders, lawyers, and even academics are weighing in" – who are the thought leaders who are weighing in? If they are leading thought, I would like to know who they are.
  • "today's cybercriminals are much more sophisticated than they were twenty years ago" – do they need to be? I mean, WannaCry exploited a basic firewall config problem. Actually, firewall configs were better 20 years ago than they are today.
  • "employing the services of ethical hackers" – I'm glad the hackers are ethical. They wouldn't have the job if they had a criminal record. So what is the 'ethical' qualifier for? Does it mean the hackers are "nice" or… ?
  • "Include the use of new security technology like the blockchain and using psychology to trick, mislead, and confuse hackers before they ever reach sensitive data." – psychology isn't a defence method, it's an attack method. Blockchain – there are no viable blue team use cases.
  • "313,735 job openings in the cybersecurity field" – all of them are filled if this number is real (unlikely).
  • "since the need for security experts isn't likely to drop anytime soon" – see Brexit. It's dropping now. Today. Elsewhere it's flat-lining.
  • "You can take your pick of which industry you want to work in because just about every company needs to be concerned about the safety and security of their networks." – "needing" to be concerned isn't the same as being concerned. No. All sectors are still in the basic mode of just getting compliance.
  • "Industries like healthcare, government, and fintech offer extensive opportunities for those who want to work in cybersecurity" – no, they do not.
  • "90% of payment companies plan to switch over to blockchain technology by 2020" – can you tell your audience the source of this information?


A Desperate Call For More Effective Information Security Accreditation

CISSP has to be the most covered topic in the world of infosec. Why is that? The discussions are mostly, of course, aimed at self-promotion (both by folk condemning the accreditation and by the same in the defensive responses) and justifying getting the accreditation. How many petabytes are there covering this subject? If you think about it, the sheer volume of the commentary on CISSP is proportional to the level of insecurity felt by infosec peeps. It's a symptom of a sector that is really very ill indeed, of how ineffective CISSP is as an accreditation, and of the frustration felt by people who know we can do better.

We need _something_. We do need some kind of accreditation, and right now CISSP is the only recognised one. But if you design an accreditation that attempts to cover the whole of infosec in one exam, what do you think the result will be? There is no room for any argument or discussion on this. It's time to cut the defensiveness and come clean.

The first stage of solving a problem is acknowledging its existence. And we're not there yet. There are still thousands in this field who cling to CISSP like a lifebuoy out on the open ocean. There is a direct correlation between the buoy-clingers and the claim that "security is not about IT"… stop that!! You're not fooling anybody. Nobody believes it! All it does is make the whole sector look even more like a circus to our customers than it already does. The lack of courage to admit the truth here is having a negative impact on society in general.

It seems to me that the "mandatory" label for CISSP in job qualifications is now rarely seen. But CISSP is still alive and is better than nothing. Just stop pretending that it's anything other than an inch thick and a mile wide.

Really we need an entry-level accreditation that tests a baseline level of technical skills and the possession of an attack mindset. We can't attack or defend, or make calls on risk, without an attack mindset. GRC is a thing in security, and it's a relevant thing – but it doesn't take up much intellectual space, so it needs to be a small part of the requirements. Level 2 SOC Analysts need to understand risk and the importance of application availability, and the value of electronic information to the business, but this doesn't require them to go and get a dedicated accreditation. Information Security Manage-ment is really an area for Manage-ers – the clue is in the name.

What are the two biggest challenges in terms of intellectual capital investment? They’re still operating systems (and ill-advised PaaS and SaaS initiatives haven’t changed this) and applications. So let’s focus on these two as the biggest chunks of stuff that an infosec team has to cover, and test entry-level skills in these areas.


Musang – Oracle DB Security Testing – v1.8

Musang is a Docker-supplied Python/Django project for authenticated security testing of Oracle Database, whether on-premise or in cloud SaaS, PaaS, or IaaS stacks.

The last update of Musang (1.4) was in 2013! So this latest update (1.8) brings Musang into the modern age of old software by introducing support for 12c databases – after all, I'm sure there are lots who upgraded from 9i. Right?

No, but seriously: Oracle Database 12c introduces the concept of Containers and Pluggable Databases (PDBs). It also stores password hashes in a dramatically different way (the mighty Pete Finnegan explains these hashes so I don't have to). The user doesn't have to lose sleep worrying about such things – Musang auto-detects database versions and selects the corresponding library of tests. However, the connection string passed to the database has to use the right SID, of course – do you want to connect to the container or to a PDB?
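By way of illustration, here's a minimal sketch of that choice using the cx_Oracle driver. This is not Musang's own code, and the host, port, and database names are hypothetical:

```python
# A minimal sketch (not Musang's actual code) of the container-vs-PDB
# connection choice with cx_Oracle. Host, port, and names are hypothetical.
import cx_Oracle

# The container (CDB root) is typically addressed by its SID:
cdb_dsn = cx_Oracle.makedsn("db.example.local", 1521, sid="ORCL")

# Pluggable databases are addressed by service name:
pdb_dsn = cx_Oracle.makedsn("db.example.local", 1521, service_name="PDB1")

# SYS/SYSTEM hashes live in the container, so connect there "as SYSDBA":
conn = cx_Oracle.connect("sys", "password", cdb_dsn, mode=cx_Oracle.SYSDBA)
```

Connect with the PDB's service name instead when you want the tests to run against that pluggable database.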

Other major changes centred around Python 2.x going EOL – and with it the shift to Django 2.x.

An account privileges dump (for non-SYS/SYSTEM users) has also been added, and the output links each account to its status (locked, expired, open, etc) – see below …
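For the curious, the query behind such a dump looks roughly like the following – an assumed sketch rather than Musang's exact SQL, joining the DBA_USERS and DBA_SYS_PRIVS views:

```python
# Rough sketch of the kind of query behind an account privileges dump
# (assumed, not Musang's exact SQL): map each non-SYS/SYSTEM account to
# its status and any directly granted system privileges.
query = """
    SELECT u.username, u.account_status, p.privilege
      FROM dba_users u
      LEFT JOIN dba_sys_privs p ON p.grantee = u.username
     WHERE u.username NOT IN ('SYS', 'SYSTEM')
     ORDER BY u.username
"""
cursor = conn.cursor()  # 'conn' as opened in the earlier connection sketch
for username, status, privilege in cursor.execute(query):
    print(f"{username:30} {status:20} {privilege}")
```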

A rundown of the major changes…

  • Now covers 12c.
  • For the connection test, when Musang detects 12c it dumps the configuration of containers and pluggable databases – if you connect to the container instance, you see a very different output in some tests – notably the password hashes.
  • To get hashes for SYS and SYSTEM you need to connect to the container, with SYS and "as SYSDBA". Musang now has the classic "as SYSDBA" option.
  • Added a privileges dump for non SYS and SYSTEM DBAs – see above screen dump.
  • Phased out django-dajaxice and replaced those calls with native AJAX – basically Dajaxice is terrible and was not maintained for years.
  • A few other changes related to 12c, and lots related to the above upgrades.
  • Celery is now 4.2.1.
  • Multiple other changes related to stability and 12c intricacies


Infosec in APAC – A Very Summarised View

I spent a total of 16 years working in infosec in APAC – across the region as a whole, except for India and mainland China. I was based initially in a pen test/research lab in Thailand with regional customers, then later spent some time with a big-4 firm in Thailand, before moving base to Jakarta for what will probably be my final stint in the region. As well as the aforementioned places, I spent lots of time in Singapore, Taiwan, and HK; less so in Malaysia, and I never worked in Vietnam, Cambodia, Laos, Myanmar, or the Philippines.

I was in APAC for most of the period between 1999 and 2013. My time with the consultancy which was based in Bangkok (although there was only one client account in Thailand) made up the formative, simulated-attack experience of my career – not a bad place to start. There were some brief spells away in the UK and Czech Republic (the best blue team experience one can hope to find). Overall I was lucky with the places I worked in, and especially the people I worked with – some of whom quit infosec not long after the Great Early Noughties Infosec Brain Drain.

Appetite for risk is high in APAC – just look at the stats for insurance sales in the region. What results in infosec, though, even in banking and finance, is exactly the same as in the west – base compliance only. The difference is something like this: western CEOs showed interest and worried about cyber at some point in time, but when they went looking for answers they didn't find any, other than buzzwords from CISSPs – result: base compliance, aka let's just get through the audit. In Asia the CEOs didn't go looking for answers – it's just base compliance, do not pass go. But before you pass judgment on this statement – read on.

Where APAC countries were better was the lack of any pretence around GRC. You will never hear anything along the lines of "security is not about IT" – i.e. there is no community of self-serving non-technical GRC folk spouting acronyms. Western countries blow billions down the dunny on this nonsense.

So both regions have poor security. Both face a significant threat. But if you measure security performance in terms of how much is spent versus the results, there's a clear winner, and that is APAC – both have poor security, but the west pays far more for it.


#WannaCry and The Rise and Fall of the Firewall

The now infamous WannaCry ransomware outbreak was the most widespread malware outbreak since the early 2000s. There was a very long gap between the early 2000s "worm" outbreaks (think Sasser, Blaster, etc) and this latest 2017 WannaCry outbreak. The usage of the phrase "worm" was itself widespread, especially as it was included in CISSP exam syllabuses, but then it died out. Now it's seeing a resurgence that started last weekend – but why? Why is the worm turning for the worm (I know – it's bad – but it had to go in here somewhere)?

As far as WannaCry goes, there have been some interesting developments over the past few days – contrary to popular belief, it did not affect Windows XP; the most commonly affected version was Windows 7, and according to some experts, the leading suspect in the case is the Lazarus group, with ties to North Korea.

But this post is not about WannaCry. I'm going to say it: I used WannaCry to get attention (and with this statement I'm already more honest than the numerous others who jumped on the WannaCry bandwagon, including our beloved $VENDOR). But I have been meaning to cover the rise and fall of the firewall for some time now, and this instance of a widespread and damaging worm that spreads by exploiting poor firewall configurations brought it forward by a few months.

A worm is malware that "uses a computer network to spread itself, relying on security failures on the target computer". It helps to think of malware delivery and propagation as two different things: lots of malware since 2004 used email (think Phishing) as a delivery mechanism but spread using an exploit once inside a private network. Worms use network propagation to both deliver and spread, and that is the key difference. WannaCry is without doubt a worm. There is no evidence to suggest WannaCry was delivered on the back of successful Phishing attacks – as illustrated by the lack of WannaCry home user victims (who sit behind the protection of NAT'ing home routers). Most of the early WannaCry posts were covering Phishing, mostly out of disbelief that Server Message Block ports would ever be exposed to the public Internet.

The infosec sector is really only 20 years old in terms of the widespread adoption of security controls in larger organisations. So we have only just started to have a usable, relatable history in infosec. Firewalls are still, in 2017, the security control that delivers the most value for investment, and they've been around since day one. But in the past 20 years I have seen firewall configurations go through a spectacular rise in the early 2000s, to a spectacular fall a decade later.

Late 90s Firewall

If we’re talking late 90s, even with some regional APAC banks, you would see huge swaths of open ports in port scan results. Indeed, a firewall to many late 90s organisations was as in the image to the left.

However – you can ask a firewall what it is, even a "Next Gen" firewall, and it will answer "I'm a firewall, I make decisions on accepting or rejecting packets based on source and destination addresses and services". Next Gen firewall vendors tout the ability of firewalls to do layer 7 DPI stuff such as IDS, WAF, etc, but from what I am hearing, many organisations don't use these features for one reason or another. Firewalls are quite a simple control to understand, and organisations got the whole firewall thing nailed quite early on in the game.

When we got to 2002 or so, you would scan a perimeter subnet and only see VPN and HTTP ports. Mind you, egress controls were still quite poor back then, and continue to be lacking to the present day, as is also the case with internal firewalls other than a DMZ (if there are any). 2002 was also the year when application security testing (OWASP type vulnerability testing) took off, and I doubt it would ever have evolved into a specialised area if organisations had not improved their firewalls. Ultimately organisations could improve their firewalls but they still had to expose web services to the planet. As Marcus Ranum said, when discussing the “ultimate firewall”, “You’ll notice there is a large hole sort of in the centre [of the ultimate firewall]. That represents TCP Port 80. Most firewalls have a big hole right about there, and so does mine.”

During testing engagements over the next decade, perimeter firewalls were well configured in the majority of cases. But then we entered an "interesting" period. It started for me around 2012. I was conducting a vulnerability scan of a major private infrastructure facility in the UK… and "what the…"! RDP and SMB vulnerabilities! So the target organisation served a vital function in national infrastructure, and they exposed databases, SMB, and terminal services ports to the planet. In case there's any doubt – that's bad. And since 2012, firewall configs have fallen by the wayside.

WannaCry is delivered and spreads using an SMB vulnerability, as did Blaster and Sasser all those years ago. If we look at Shodan results for Internet exposure of SMB, we find 1.5 million cases. That's a lot.
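If you want to sanity-check your own perimeter for the same exposure, a few lines of standard-library Python will do it. A minimal sketch, with a hypothetical host list – and obviously, only scan addresses you are authorised to test:

```python
# Minimal sketch: check whether TCP 445 (SMB) is reachable on perimeter
# hosts. The host list is hypothetical (documentation-range addresses);
# only probe what you're authorised to test.
import socket

PERIMETER_HOSTS = ["203.0.113.10", "203.0.113.11"]

def port_open(host, port=445, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in PERIMETER_HOSTS:
    if port_open(host):
        print(f"WARNING: {host} exposes TCP 445 (SMB) - close this hole")
```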

So how did we get here? Well, there are no answers born out of questionnaires and research, but I have my suspicions:

  • All the talk of “Next Generation” firewalls and layer 7 has led to organisations taking their eye off the ball when it comes to layer 3 and 4.
  • All the talk of magic $VENDOR snake oil silver bullets in general has led organisations away from the basics. Think APT-Buster ™.
  • All the talk of outsourcing has led some organisations, as Dr Anton Chuvakin said, to outsource thinking.
  • Talk of “distortion” of the perimeter (as in “in this age of mobile workforces, where is our perimeter now?”). Well the perimeter is still the perimeter – the clue is in the name. The difference is now there are several perimeters. But WannaCry has reminded us that the old perimeter is still…yes – a perimeter.
  • There are even some who advocated losing the firewall as a control, but one of the rare success stories for infosec was the subsequent flaming of such opinions. BTW when was that post published? Yes – it was 2012.

So general guidelines:

  • The Internet is an ugly place with lots of BOTs and humans with bad intentions, along with those who don’t intend to be bad but just are (I bet there are lots of private org firewall logs which show connection attempts of WannaCry from other organisations).
  • Block incoming for all ports other than those needed as a strict business requirement. Default-deny is the easiest way to achieve this.
  • Workstations and mobile devices can happily block all incoming connections in most cases.
  • Egress is important – also discussed very eloquently by Dave Piscitello. It's not all about ingress.
  • Other pitfalls with firewalls involve poor usage of NAT and those pesky network dudes who like to bypass inner DMZ firewalls with dual homing.
  • Watch out for connections to critical infrastructure such as databases from any internal subnet that hosts human-used devices. Those can be blocked in most cases.
  • Don’t focus on WannaCry. Don’t focus on Ransomware. Don’t focus on malware. Focus on Vulnerability Management.

So then perimeter firewall configurations, it seems, go through the same cycles that economies and seasonal temperature variations go through. When will the winter pass for firewall configurations?


Clouds and Vulnerability Management

In the world of Clouds and Vulnerability Management, based on observations, it seems like a critical issue has slipped under the radar: if you’re running with PaaS and SaaS VMs, you cannot deliver anything close to a respectable level of vulnerability management with these platforms. This is because to do effective vulnerability management, the first part of that process – the vulnerability assessment – needs to be performed with administrative access (over SSH/SMB), and with PaaS and SaaS, you do not, as a customer, have such access (this is part of your agreement with the cloud provider). The rest of this article explains this issue in more detail.

The main reason for the clouding (sorry) of this issue is what is still, after 20+ years, a fairly widespread lack of awareness of the ineffectiveness of unauthenticated vulnerability scanning. More and more security managers are becoming aware that credentialed scans are the only way to go. However, with a lack of objective survey data available, I can only draw on my own experiences. See – I'm one of those disgraceful contracting/consultant types, been doing security for almost 20 years, and been intimate with a good number of large organisations, and with each year that passes I can say that more organisations are waking up to the limitations of unauthenticated scanning. But there are still plenty who don't see those limitations clearly.

The original Nessus from the late 90s, now with Tenable, is a great product in terms of doing what it was intended to do. But false negatives were never a concern in the design of Nessus. OpenVAS is still open source and available, and it is also a great tool from the point of view of doing what it was intended to do. But if these tools are your sole source of vulnerability data, you are effectively running blind.

By the way, Tenable do offer a product that covers credentialed scans for enterprises, but I have not had any hands-on experience with it. I do have hands-on experience with the other market leaders' products. By and large they all fall some way short, but that's a subject for another day.

Unauthenticated scanners all do the same thing:

  • port scan to find open ports
  • grab service banners – this is the equivalent of nmap -sV, and in fact, as most of these tools use nmap libraries, it is _exactly_ that
  • let's say our tool finds Apache httpd 2.4.x; it looks in its database of publicly disclosed vulnerabilities for that version of Apache, and spews out everything it finds. The tools generally do little in the way of actually probing with HTTP methods, for example, and they certainly were not designed to try, say, a buffer overflow exploit attempt. They report lots of 'noise' in the way of false positives, but false negatives are the real concern (see the sketch of this workflow below).
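To make that concrete, here is a minimal sketch of the banner-grab workflow using the python-libnmap wrapper – an illustration of the general approach rather than any particular vendor's implementation, and the target range is hypothetical:

```python
# Minimal sketch of what unauthenticated scanners boil down to: port scan,
# banner grab (nmap -sV), then match versions against a vulnerability list.
# The target range is hypothetical - only scan what you're authorised to.
from libnmap.process import NmapProcess
from libnmap.parser import NmapParser

nmproc = NmapProcess(targets="192.0.2.0/24", options="-sV")
nmproc.run()  # blocks until the scan completes

report = NmapParser.parse(nmproc.stdout)
for host in report.hosts:
    for service in host.services:
        # A scanner looks up (service, banner) in its CVE database here;
        # it never probes the service much beyond this banner.
        print(host.address, service.port, service.service, service.banner)
```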

So really the tools are doing a port scan, and then telling you you're running old warez. Conficker is still very widespread and is the ultimate player in the 'Pee' arena (the 'Pee' in APT). An unauthenticated scanner doesn't have enough visibility 'under the hood' to tell you if you are going to be the next Conficker victim, or the next ransomware victim. Of the Linux-related vulnerabilities reported in the past few years – e.g. Heartbleed, Ghost, DirtyCOW – very few can be detected with an unauthenticated scanner, and none of these three can.

Credentialed scanning really is the only way to go. Credential-based scanners are configured with root/administrative access to targets and are therefore in a position to 'see' everything.
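To illustrate the difference, a credentialed check can be as simple as logging in and asking the package manager directly. A minimal sketch, assuming a Debian-flavoured target, key-based SSH, and the paramiko library – the host, user, and key path are hypothetical:

```python
# Minimal sketch of a credentialed check: SSH in and query the package
# manager directly, rather than guessing from a banner. Host, user, and
# key path are hypothetical; assumes a Debian/Ubuntu target.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("198.51.100.5", username="scanner", key_filename="/path/to/key")

# Ask dpkg for the exact installed OpenSSL version - no banner guesswork:
stdin, stdout, stderr = client.exec_command(
    "dpkg-query -W -f='${Version}' openssl")
print("installed openssl:", stdout.read().decode().strip())
client.close()
```

The same principle extends to file permissions, local services, scheduled jobs, and all the other configuration aspects a banner can never reveal.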

The Connection With PaaS and SaaS

So how does this all relate to Cloud? Well, there are two of the three cloud types where a lack of access to the operating system command shell becomes a problem – and from the description above it's fairly clear these are PaaS and SaaS.

Two common delusions abound in this area:

  • [Cloud maker] handles platform configuration, and therefore vulnerability management, for me, so that's OK, no need to worry:
    • Cloud makers like AWS and Azure will deal with patches, but concerns in security are much wider and operating systems are big and complex. No patches exist for 0days, and in space, nobody can hear you scream.
    • Many vulnerabilities arise from OS configuration aspects that cannot be removed with a patch – e.g. Conficker was mentioned above: some Conficker versions (yes, it's managed very professionally) use 'at' job scheduling to remain present even after MS08-067 is patched. If, for example, you use Azure, Microsoft manage your PaaS and SaaS, but they don't know whether you want to use 'at' or not. It's safer for them to assume that you do, so they leave it enabled (when you sign up for PaaS or SaaS you are removed from the decision making here). The same applies to many other local services and file system permissions that are very popular with the dark side (a trivial check for this kind of thing is sketched after this list).
  • 'Unauthenticated scanning gets me some of the way, it's good enough' – how much of the way does it get you? Less than halfway? It's more like 5% really. Remember it's little more than a port scan, and you shouldn't need a scanner to tell you you're running old software. Certainly for critical cloud VMs, this is a problem.
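As a trivial illustration of that kind of configuration check – a sketch, not a Conficker detector, and it assumes a Linux host with atq installed:

```python
# Trivial hygiene check along the lines above (a sketch, not a Conficker
# detector): list pending 'at' jobs so unexpected ones stand out. Assumes
# a Linux host with 'atq'; on Windows, 'schtasks /query' plays this role.
import subprocess

result = subprocess.run(["atq"], capture_output=True, text=True)
jobs = [line for line in result.stdout.splitlines() if line.strip()]
if jobs:
    print("Pending 'at' jobs - verify each one is expected:")
    for job in jobs:
        print(" ", job)
else:
    print("No pending 'at' jobs.")
```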

With PaaS and SaaS, you are handing over the management of large and complex operating systems to cloud providers, who are perfectly justified, and also in many cases perfectly wise, in leaving open large security holes in your platforms, and as part of your agreement with them, there’s not a thing you can do about it (other than switch to IaaS or on-premise).


Information Security And A Pale Blue Dot

This article is about the place of ego and pride in information security.

Earth From 6 Billion Miles – Thanks To Voyager 1 – Courtesy Of NASA

At the request of the late Carl Sagan, as the Voyager 1 space probe was leaving the solar system, at a record distance of approx 6 billion miles from Earth, NASA instructed Voyager 1 to turn its camera back toward Earth.

Yes – the circled pixel-sized dot in the image on the right – this is Earth.

But that dot is also a good representation of what you know about security, compared to the whole. The same goes for what I know, compared with what there is to know.

One thing I have been right about – security, in theory at least, is a fantastic world to be a part of. I left IBM in the late 90s because I had heard about a world that covered all bases in IT. And I wasn't wrong about this. Things did get ugly in the early 2000s – basically IT folk and engineers weren't welcome any more. This is why we're in the mess we're in now. But security, relative to other fields in IT, is still by far the best place to be, at least from where I'm standing.

Security is such a vast field, and whichever turn you take, you find another world within a world, and within that world, the more you discover, the more you realise what there is left to discover. So in other words – the more you know about security – the more you know you don’t know.

So given all this – do ego and pride have a place in this field? And how do you assess your knowledge compared to others'? If you think about it in the context of the image above, showing excessive ego, holding grudges, or getting into regular arguments with others in the field really demonstrates a lack of awareness of security and how vast it is. Given the vastness of the field, if you're taking a mocking attitude (99% of the time this will not be communicated to the target of the mockery), I hope you can see now how ludicrous that attitude is. It's diabolical, actually. If an Analyst comes from a different background, and spent all their time in a certain part of the universe, why on earth (pardon the pun) would you be critical or judgmental of them for not knowing your neighbourhood as well as you do?

Many believe that excessive pride is mainly the territory of hacking conference speakers, and that it's here where things get out of control, because of the attention one can get just from doing something as simple as a wifi "evil twin" attack. But no, not based on what I've seen. There are security folk from all walks of the sector, and not just the self-proclaimed 'evangelists', whose level of self-importance goes as far as taking patronage over the whole sector.

From the outside looking in, we in security are viewed in a fairly dim light in many cases. While working in a small consultancy here in the UK, I heard it said while management was assessing a candidate's suitability for a Consultant role: "is he weird enough?". Security seniors in that firm regularly got into impassioned exchanges with C-levels, because of the aforementioned issue of taking patronage over security. Disagreements would spiral out of control.

C-levels really just want to have the same understandable conversations, and see the same reporting, from security folk as they do from others. The whole security show does seem like a circus to outsiders, especially to folk in other IT departments. And yet many in this field blame the board (“they’re clueless”) when security is pushed further away from the board, rather than looking at themselves.

And as long as we do not have a trustworthy means of proving our ability or experience in this field, there will be lots of issues. Many try to compensate with self-proclaimed titles and other little nuances. Many develop a whole persona around trying to show the world how great they are.

We're renowned for being different, and we are, but we can be more careful about how we show our uniqueness. It should be enough to just keep a lower profile and do our jobs. If we have confidently given our advice, and got it in black and white somewhere, that's all we can do. If, after that, others still don't agree with us, leave it at that.

Having an out-of-control ego also prevents us from being team players. We need to be open minded to the idea that others can learn from us and benefit us in return – we will always be stronger as a team rather than as an individual, and no one acting as a lone gun slinger ever helped an organisation to improve its stance in information risk management.

Here’s what the Dalai Lama said about ego: “The foundation of the Buddha’s teachings lies in compassion, and the reason for practicing the teachings is to wipe out the persistence of ego, the number-one enemy of compassion.”

Certainly – at least among ourselves – remember how vast a field security is, and don't lose perspective.

As I mentioned above: the more you know about security, the more you know you don't know. So try not to demonstrate a lack of knowledge by attempting to demonstrate lots of knowledge.


The Art Of The Security Delta: NetDelta

In this article we’ll be answering the following questions:

  • What is NetDelta?
  • Why should i be monitoring for changes in my network(s)?
  • Challenges we faced along the way
  • Other ways of detecting deltas in networks
  • About NetDelta

What Is NetDelta?

NetDelta allows users to configure groups of IP addresses (by department, subnet, etc) and perform one-off or periodic port scans against the configured group, and have NetDelta send alerts when a change is detected.

There's lots of shiny new stuff out there. APT-Buster ™, Silver Bullet ™, etc. It's almost as though someone sits in a room and literally looks for combinations of words that haven't been used yet, and uses this as the driver for a new VC-sponsored effort. "OK, 'threat' is the most commonly used buzzword currently. Has 'threat' been combined with 'cyber' and 'buster' yet?", "No", [hellllooow Benjamin]. The most positive spin we can place on this is that we got so excited about the future while ignoring the present.

These new products are seen as serving the modern-day needs of information security, as though the old challenges, going back to day 0 in this sector (or "1998"), have been nailed. Well, how about the old stalwart of Vulnerability Management? The products do not "Manage" anything – they produce lists of vulnerabilities. This is "assessment", not "management". And the lists they produce are riddled with noise (false positives), and what's worse, there's a much bigger false-negative problem in that the tools do not cover whole swaths of corporate estates. It doesn't sound like this is an area that is well served by open source or commercial offerings.

Why Do I Need To Monitor My Networks For Changes?

On the same theme of new products in infosec – how about firewalls (that’s almost as old as it gets)? Well we now have “next-gen” firewalls, but does that mean that old-gen firewalls were phased out, we solved the Network Access Control problem, and then moved on?

How about this: if there is a change in listening services – say, for example, in your perimeter DMZ (!) – and you didn't authorise it, that cannot be a good thing. It's one of the following:

  • Hacker/malware activity, e.g. hacker’s connection service (e.g. root shell), or
  • Unauthorised change, e.g. network ops changed firewall or DMZ host configuration outside of change control
  • You imagined it – perhaps lack of sleep or too much caffeine

Neither of the first two can be good. They can only be bad. And do we have a way to detect such issues currently?

How does NetDelta help us solve these problems?

Users can configure scans either on a one-off basis, or to be run periodically. So, for example, as a user I can tell NetDelta to scan my DMZ perimeter every night at 2 AM and alert me by email if something changed from the previous night's scan:

  • Previously unseen host comes online in a subnet, either as an unauthorised addition to the group, or unauthorised (rogue) firewall change or new host deployment (maybe an unsanctioned wifi access point or webcam, for example) – these concerns are becoming more valid in the age of Internet of Things (IoT) where devices are shipped with open telnets and so on.
  • Host goes offline – this could be something of interest from a service availability/DoS point of view, as well as the dreaded ‘unauthorised change’.
  • Change in the available services – e.g. a hacker's exploit is successful and manages to locally open a shell on an unfiltered higher port, or a new service is turned on outside of change control. NetDelta will alert if services are added or removed on a target host (the core delta logic is sketched after this list).
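That delta logic is simple enough to sketch in a few lines. The following is an illustration rather than NetDelta's actual code, again using python-libnmap, with a hypothetical target group:

```python
# Illustration of the delta idea (not NetDelta's actual code): scan a
# group, compare the set of open (host, port) pairs against the previous
# run, and alert on any difference. The target group is hypothetical.
from libnmap.process import NmapProcess
from libnmap.parser import NmapParser

def scan_group(targets):
    """Port-scan the group; return the set of open (host, port) pairs."""
    proc = NmapProcess(targets=targets, options="-sT")
    proc.run()
    report = NmapParser.parse(proc.stdout)
    return {(h.address, s.port) for h in report.hosts
            for s in h.services if s.state == "open"}

baseline = scan_group("192.0.2.0/28")  # e.g. last night's 2 AM scan
current = scan_group("192.0.2.0/28")   # tonight's scan

appeared = current - baseline  # new host/service: rogue change or exploit?
vanished = baseline - current  # gone: DoS, or the dreaded unauthorised change
if appeared or vanished:
    print("DELTA:", appeared, vanished)  # NetDelta would email an alert here
```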

Host ‘state’ is maintained by NetDelta for as long as the retention period allows, and overall 10 status codes reflect the state of a host over successive periodic scans.

Challenges We Faced With NetDelta

The biggest and only major obstacle is the 'noise' that results from scan timeouts. With some of the earlier test scans we noticed that sporadic scan time-outs would occur frequently. This presented a problem (it's sort of a false positive) in that a delta is alerted on, but really there hasn't been a change in listening services or hosts. We increased the timeout options with nmap, but it didn't help much and only added masses of time to the scans.

The aforementioned issue is one of the problems holding back the option of wrapping nmap's ndiff in a shell script – and ndiff works with XML text files (messy). Shell scripts can work in corporate situations sometimes, but there are problems around the longevity and reliability of such a solution. NetDelta is a web-based, database-driven (currently MySQL, but NoSQL is planned) solution with reports and statistics readily available, but the biggest problem with the ndiff option is the scan timeout issue mentioned in the previous paragraph.

NetDelta records host "up" and "down" states and allows the user to configure the number of consecutive scans before a host is considered really down. So if the user chooses 3, and a target host is down for 3 consecutive scans, it is considered actually down, and a delta is flagged.
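In miniature, that counter looks something like the following – an illustration, not NetDelta's actual code:

```python
# The consecutive-down counter in miniature (an illustration, not
# NetDelta's actual code): only flag a host as down - and raise a delta -
# once it has missed DOWN_THRESHOLD scans in a row, absorbing the
# sporadic timeout noise described above.
DOWN_THRESHOLD = 3  # user-configurable in NetDelta

def update_host_state(consecutive_misses, seen_this_scan):
    """Return (new_miss_count, flag_delta) after one periodic scan."""
    if seen_this_scan:
        return 0, False                      # host is up; reset the counter
    misses = consecutive_misses + 1
    return misses, misses == DOWN_THRESHOLD  # flag exactly once

# A host that times out twice and then reappears never raises a delta:
state = 0
for seen in [False, False, True]:
    state, alert = update_host_state(state, seen)
    print(state, alert)  # -> 1 False, 2 False, 0 False
```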

Overall the ‘state’ of a host is recorded in the backend database, and the state is a code that reflects a change in the availability or existence of a host. NetDelta has a total of 10 status codes.

Are There Other Ways To Detect NetDeltas?

Remember that we’re covering network services here, i.e. the ‘visibility’ of network services, as they appear to hackers and your customers alike. This is not the same as local host configuration. I can run a netstat command locally to get a list of listening services, but this doesn’t tell me how well my firewall(s) protect me.

  • The ndiff option was covered already
  • Firewall management suites. At least one of these can flag changes in firewall rules, but it still doesn't give the user the actual "real" view of services. Firewalls can be misconfigured, and they can do unexpected things under certain conditions. The port scanner view of a network is effectively the holy grail – it's the absolute/real view that leaves no further room for interpretation and does not require further processing
  • IDS – neither HIDS (host based intrusion detection) nor NIDS (network based intrusion detection) can give a good representation of the view.
  • SIEM – these systems take in logs from other sources, so partly by extrapolating from the previous comments, and also given the capability of SIEM itself, it would seem to be a challenge to ask a SIEM to do acrobatics in this area. First of all SIEM is not a cheap solution, and of course this conversation only applies where the organisation already owns a SIEM and can afford the added log storage space, and management overhead, and… of the SIEMs I know, none of them are sufficiently flexible to:
    • take in logs from a port scanning source – theoretically it's possible if you could get nmap to speak rsyslogish, though, and I'm sure there are some other acrobatics that are feasible
    • perform delta analysis on those logs and raise an alert

About NetDelta

NetDelta is a Python/Django-based project with a MySQL backend (we will migrate to MongoDB – watch this space). It is currently at v1.0, and you are most welcome to take part in a trial. We stand on the shoulders of giants:

  • nmap (https://nmap.org/)
  • Python (https://www.python.org/)
  • Django (https://www.djangoproject.com/)
  • Celery (http://www.celeryproject.org/)
  • RabbitMQ (https://www.rabbitmq.com/)
  • libnmap – a Python framework for nmap – (https://github.com/savon-noir/python-libnmap)

Contact us for more info!
