Windows SIEM – Optimizing Events Volume with CIS Benchmarks and AuditpolCIS

In our 2021 blog post, we focused on identifying quick wins for optimizing Windows Events, and provided a free spreadsheet (really free, not even a regwall) that flagged Windows Events that can be safely ignored – some of which are expensive for SIEM engines to ingest. This post takes a broader Windows Audit Policy view, and offers another free resource: a spreadsheet that compares your Windows Audit Policy setup against the venerable CIS Benchmark for Windows Server 2019.

If there’s sufficient interest I’ll follow up with a development effort for a Python tool (also freely available, on GitHub) that connects to your Windows server and performs the CIS Benchmark assessment as indicated in the spreadsheet.
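
To give a flavour of what such a tool might do, here’s a minimal sketch – assuming local execution rather than a remote connection, and assuming auditpol’s CSV report output (the /r switch). The expected-settings dict is a made-up subset; the spreadsheet is the source of truth:

import csv
import io
import subprocess

# Hypothetical subset of the spreadsheet's recommended settings
CIS_EXPECTED = {
    "Credential Validation": "Success and Failure",
    "Security Group Management": "Success",
    "Directory Service Access": "No Auditing",
}

def current_audit_policy():
    # 'auditpol /get /category:* /r' emits one CSV row per subcategory
    out = subprocess.run(
        ["auditpol", "/get", "/category:*", "/r"],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = csv.DictReader(io.StringIO(out.strip()))
    return {row["Subcategory"].strip(): row["Inclusion Setting"].strip()
            for row in rows}

actual = current_audit_policy()
for subcategory, expected in CIS_EXPECTED.items():
    got = actual.get(subcategory, "<not reported>")
    verdict = "OK   " if got == expected else "DRIFT"
    print(f"{verdict} {subcategory}: expected '{expected}', got '{got}'")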

SIEM Nightmares

Based on many first-hand observations and second-hand accounts, it’s not a stretch to say that many organisations are suffering from SIEM configuration issues, the result of which is a low signal-to-noise ratio. Your SIEM ingests lots of events, many of which are not at all helpful, and with most vendors charging by volume, it gets expensive. At the same time, the false negative problem is all too common: forensic investigations reveal all too often that the expensive SIEM recorded no events that even remotely relate to the incident. I hope you are never in this scenario. The short-term impact is never good.

Taking SIEM as a capability, if one is to advise on how to improve things, it is rarely ever about the technology. When one asks Analysts (and, based on job postings, also hiring managers) about SIEM, it’s clear the first thing that comes to mind is Splunk, ELK, Sentinel, etc. I would estimate the technology-only focus with SIEM to be the norm rather than the exception, and it comes hand-in-hand with a failure to detect privilege elevation and lateral movement, for example.

There is some advice we can give out that is independent of your architecture, but many questions about SIEM configuration can only be answered by you, using your knowledge of the IT landscape in your organisation. The advisories in the referenced spreadsheet cover the “noise” part of the signal-to-noise ratio. These are events that, from a security perspective, are sure to be noise to at least a 90% level of assurance.

Additional Context on the Spreadsheet

Some context around the spreadsheet: where there is a CIS Benchmark metric for a specific Audit Subcategory, the spreadsheet follows the CIS recommended setting exactly. But there are some subcategories (e.g. DS Access –> Directory Service Access) that CIS does not cover. In these cases, an assessment is made based on our real-world observations of logging volumes, versus the security value (not the IT diagnostic or other value) of the Audit Subcategory. In the case of Directory Service Access, it can be turned off from a security perspective.

There is limited information available regarding actual experiences with specific event ID volumes. In 2018, I had the opportunity to track Windows events in a Splunk architecture for a government department. During this time, I recorded the occurrences of events over a 24-hour period on a network of approximately 150 Windows servers of various versions, some of which were quite exotic. This information has been valuable in supporting decisions related to whether or not to disable auditing.

SIEM Forwarder Filtering

There is another option offered by some SIEM vendors, and that is to filter events by Event ID. Overall, the more resource-friendly approach is to prevent the events being generated at source, but in many cases this may not be feasible. Splunk, for example, allows you to filter at forwarders, via the inputs.conf file on the Splunk forwarder, usually located in the $SPLUNK_HOME/etc/system/local/ directory. Incidentally, it looks like Splunk agrees with us on event 4662 (the Directory Service Access example above). Yay!
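
As a sketch of what that looks like in practice – the event IDs and regex are illustrative only, and you should check Splunk’s current inputs.conf documentation before deploying anything:

# $SPLUNK_HOME/etc/system/local/inputs.conf on the universal forwarder
[WinEventLog://Security]
disabled = 0
# Drop 4662 ("an operation was performed on an object") outright...
blacklist1 = EventCode="4662"
# ...or keep only interesting object types, e.g. Group Policy objects:
# blacklist2 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"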

Credits and Disclaimers

Windows Events are sometimes tricky to understand, both with respect to what the developers intended with those events, and the conditions under which they are generated. Sometimes with Windows Events we are completely in unknown territory, even where Microsoft documentation covers them. Here’s one example from Microsoft’s documentation to fill us with confidence: “This auditing subcategory should not have any events in it, but for some reason Success auditing will enable the generation of event 4985”.

Ultimately only you can decide what’s best for the health of your SOC/SIEM. Only you know your network and your applications. The document supplied here is intended only as a guide, to aid decision making – not to make decisions for you.

The cybersecurity landscape often focuses on the more sensational aspects, such as high-profile hacks or fake influencers, which can overshadow the essential work done by countless professionals in the background. These unsung heroes are dedicated to ensuring the stability and security of our digital infrastructure, and their contributions should not be underestimated. Among them is the likes of Randy Franklin Smith (founder of Ultimate Windows Security), who has put together an “encyclopedia” of Windows Event IDs. The experiences shared there were used in part to form a view on whether to reject or accept certain Windows Events.

Fintechs and Security – Part 4

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Logging

Notice “Logging” is used here, not “SIEM”. With “SIEM”, there is often a mental leap, or stumble, towards a commercial solution – but there doesn’t necessarily need to be one. This post invites the reader to take a step back from the precipice of engaging with vendors, and check first whether that journey is one you want to make.

Unfortunately, in 2020, it is still the case that many fintechs are doing one of two things:

  • Procuring a commercial solution without thinking about what is going to be logged, or about the actual business goals that a logging solution is intended to achieve.
  • Just going with the Cloud Service Provider’s (CSP) SaaS offering – e.g. Stackdriver (now called “Operations”) for Google Cloud, or Security Center for Azure.

Design Process

The HLD takes risks from threat modelling (and maybe other sources) as one input, and compliance requirements (security standards and legal requirements, perhaps) as another; the requirements from the HLD then drive the LLD. The LLD will call out the use cases and volume requirements that satisfy the HLD requirements – but importantly, it does not cover the technological solution. That comes later.

The diagram above calls out Splunk but of course it doesn’t have to be Splunk.

Security Operations

The end goal of the design process is heavily weighted towards a security operations or protective monitoring capability. Alerts will be specified, which will then be configured into the technological solution (if it supports this). Run-books are developed as part of on-going continuous improvement – this “tuning” is mainly a matter of adjusting for false positives, adding further alerts, and modifying existing alerts.

The decision making on how to respond to alerts requires intimate knowledge of networks and applications, trust relationships, data flows, and the business criticality of information assets. This is not a role for fresh graduates. Risk assessment drives the response to an alert, and the decision on whether or not to engage an incident response process.

General IT monitoring can form the first level response, and then Security Operations consumes events from this first level that are related to potential security incidents.

Two main points relating to this SecOps function:

  • Outsourcing doesn’t typically work when it comes to the second level. Outsourcing of the first level is more likely to be cost effective. Dr Anton Chuvakin’s post on what can, and cannot, be outsourced in security is the most well-rounded and realistic that I’ve seen. Generally, anything that requires in-house knowledge and intimacy with how events relate to business risks cannot be outsourced effectively.
  • The maturity of SecOps doesn’t happen overnight. Expect it to take more than 12 months for a larger fintech with a complex cloud footprint.

The logging capability is the bedrock of SecOps, and how it relates to other security capabilities can be simplified as in the diagram below. The boxes on the left are self-explanatory, with the possible exception of Active Trust Management – this is heavily network-oriented and at the engineering end of the rainbow: it’s about firewalls, and reverse and forward proxies, mainly:

Custom Use Cases

For the vast majority of cases, custom use cases will need to be formulated. This involves building a picture of “normal”, so as to enable alerting on abnormal. Taking the networking example: what are my data flows? Take my most critical applications – what are the source and destination IP addresses, and what is the port on the server side of the client-server relationship? A possible custom use case could then be: raise an alert when a connection is aimed at the server from anywhere other than the known client(s).
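
As a minimal sketch in Splunk’s search language – the index name, field names, and addresses are all assumptions that will differ per environment:

index=firewall action=allowed dest_ip=10.10.20.5 dest_port=5432
    NOT (src_ip=10.10.10.21 OR src_ip=10.10.10.22)
| stats count by src_ip

Saved as an alert, this fires whenever anything other than the two known application clients connects to the database listener.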

Generic use cases are no-brainers. Examples are brute force attempts and technology or user behaviour-specific use cases. Some good examples are here. Custom use cases require an understanding of how applications, networks, and operating systems are knitted together. But both custom and generic use cases require a log source to be called out. For network events, a firewall is the best candidate – it generally makes very little sense to deploy network IDS nodes in cloud.

So for each application, generate a table of custom use cases, and identify a log source for each. Generic use cases are those configured auto-tragically in Splunk Enterprise Security, for example. But even Splunk cannot magically give you custom use cases, or even ensure that all devices are covered by the generic use cases. No – humans still have a monopoly over custom use cases and, really, most of SIEM configuration. AI and Cyberdyne Systems won’t get near custom use cases in our lifetimes, or ever, outside the fantasy world of vendor PowerPoint slides.

Don’t forget to test custom use case alerting. For network events, spin up a VM in a centrally trusted area, such as a management VNet/VPC, and port scan from there to see if alerts are triggered. Netcat is handy here too for generating arbitrary connections, and nmap can spoof source addresses.
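
A rough test sequence might look like this – the addresses and interface name are made up, and obviously only test against systems you’re authorised to touch:

# From the management VM: should trigger the port scan use case
nmap -sS -p 1-1024 10.10.20.5

# A single connection from a non-client source: should trigger the custom use case above
nc -vz 10.10.20.5 5432

# Spoofed source address (replies go to the spoofed host; you're only testing detection)
nmap -sS -S 10.99.99.99 -e eth0 -Pn 10.10.20.5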

Correlation

Correlation was the buzzword used by vendors in the heady days of the 00s. The premise was something like this: event A, event B, and event C – taken in isolation (topical), each seems innocuous. But bake them together and you have a clear indicator that skullduggery is afoot.

I suggest you park correlation in the early stages of a logging capability deployment. Consider it for down the road, once a decent level of maturity has been reached in SecOps – and consider also that any attempt to get too clever can result in your SIEM frying circuit boards. The aim initially should be to reduce complexity as much as possible, and nothing adds complexity like correlation. Really – basic alerting on generic and custom use cases gives you most of the coverage you need for now, and in any case, you can’t expect to get anywhere near an ideal state with logging.

SaaS

Operating system logs are important in many cases. When you decide to SaaS a solution, note that you lose control over operating system events. You cannot turn off events that you’re not interested in (e.g. Windows Object auditing events, which have had a few too many pizzas). This can be a problem if you go with a COTS product whose licensing costs are based on volume of events. You also cannot turn on OS events that you might be interested in. The way CSPs play here is to assume everything is interesting, which can get expensive. Very expensive.

Note – it’s also, in most cases, not such a great idea to use a SaaS-based SIEM. Why? Because this function has connectivity with everything. It has trust relationships with dev/test, pre-prod, and production. You really want full control over this platform (i.e. to be able to log in with admin credentials and take control of the OS), especially as it hosts lots of information that would be very interesting to attackers, and is potentially their main target, because of the trust relationships just mentioned.

So with SaaS, it’s probably not the case that you are missing critical events – you just get flooded. The same applies to third-party applications; for custom, in-house developed applications, you still have control of the application layer, of course.

Custom, In-house Developed Applications

You have your debugging stream and you have your application stream. You can assign criticality levels to events in your code (these are the classic syslog severity levels). The application events stream is critical. From an application security perspective, many events are not intuitively of interest, but by using knowledge of how attackers work in practice, security can offer some surprises here, pleasant or otherwise.

If you’re a developer, you can ease the strain on your infosec colleagues by using consistent JSON logging keys across the board. For example, don’t start with ‘userid’ and then flip to ‘user_id’ later, because it makes the configuration of alerting more of a challenge than it needs to be. To some extent this is unavoidable, because different vendors use different keys, but every bit helps. Note also that if search patterns for alerting have to cater for multiple different keys in JSON documents, the load on the SIEM will be unnecessarily high.
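
A minimal sketch of the idea in Python – the key names here are illustrative:

import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    # One consistent set of keys for every application event
    def format(self, record):
        return json.dumps({
            "severity": record.levelname,
            "event": record.getMessage(),
            "user_id": getattr(record, "user_id", None),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Always 'user_id' - never 'userid' in one place and 'user_id' in another
log.warning("failed login", extra={"user_id": "u1234"})

The point isn’t this particular formatter – it’s that one agreed set of keys means one search pattern per alert on the SIEM side.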

It goes without saying also: think about where your application and debug logs are being transmitted and stored. These are a source of extremely valuable intelligence for an attacker.

The Technology

The technological side of the logging capability isn’t the biggest part. The technology is there to fulfil a logging requirement; it is not in itself the logging capability. There are also people and processes around logging, but it’s worth talking about the technology.

What’s more common than many would think: an organisation acquires a COTS SIEM tool, but the security engineers hate it. It’s slow and doesn’t do much of any use. So they find their own way of aggregating network-centralised events with a syslog bucket of some description. Performance is very often the reason why engineers end up grep’ing over syslog text files.

While the aforementioned sounds ineffective, it is sadly more effective than botched SIEM deployments with poorly designed tech. It also ticks the “network centralised logging” box for auditors.

The open-source tools solution can work for lots of organisations, but what you don’t get so easily is real-time alerting. The main cost will be storage. No license fees. Just take a step back and think what it is you really want to achieve in logging (see the design process above). The features of an open source logging solution can look something like this:

  • Rsyslog covers TCP transport and authentication of hosts. Rsyslog is popular because it enables TCP-layer transmission from most log source types (one exception being some Cisco network devices and firewalls), and also encryption of data in transit, which is strongly recommended in a wide open, “flat” network architecture where eavesdropping is a prevalent risk.
  • Even Windows can “speak” rsyslog with the aid of a local agent such as nxlog.
  • There are plenty of Host-based Intrusion Detection System (HIDS) agents for Linux and Windows – OSSEC, Wazuh, etc.
  • Intermediate rsyslog servers can aggregate logs for network zones/subnets. These are the equivalent of Splunk forwarders or Alienvault Sensors. A cron job runs an rsync over Secure Shell (SSH), which uploads the batches of event data periodically to a Syslog Lake, for want of a better phrase – a config sketch follows this list.
  • The folder structure on the Syslog server can reflect dates – years, months, days – and distinct files are named to indicate the log source or intermediate server.
  • Good open source logging tools are getting harder to find. Once a tool gets a reputation, it ain’t free any mo’. There are still some things you can do with ELK for free (but not alerting). Graylog is widely touted: at the time of writing you can still log e.g. 100 GB/day without paying, if you forego support and the other Enterprise features.
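
A sketch of the intermediate aggregator described above – the port, paths, and zone name are all assumptions, and the TLS setup is omitted for brevity:

# /etc/rsyslog.d/aggregate.conf on an intermediate rsyslog server
module(load="imtcp")
input(type="imtcp" port="514")

# One file per source host, under a dated folder structure
template(name="ZoneStore" type="string"
         string="/var/log/zone-dmz/%$now%/%hostname%.log")
*.* action(type="omfile" dynaFile="ZoneStore")

# And in crontab on the same box - nightly batch upload to the syslog lake over SSH:
0 2 * * * rsync -az -e ssh /var/log/zone-dmz/ loguser@syslog-lake:/lake/zone-dmz/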

Splunk

Splunk sales people have a dart board with my picture on it. To be fair, the official Splunk line is that they want to help their customers save on events indexing costs because it benefits them in the longer term – and they’re right, this does work for Splunk and their customers. But many of the resellers either lack the skills to help, or are just interested in a quick and dirty install. “Live for today, don’t worry about tomorrow”.

Splunk really is a Lamborghini, and in the few SIEM bidding beauty parades I’ve been involved in, Splunk has often come out cheaper, believe it or not. Splunk was made for logging and was engineered as such. Some of the other SIEM engines are poorly coded – connecting to a MySQL database, for example – whereas Splunk effectively has its own database. The difference in performance is extraordinary. A Splunk search involving a complex regex with busy indexers and search heads takes a fraction of the time to complete, compared with a similar scenario in other tools on the same hardware.

Three main ways to reduce events indexing costs with Splunk:

  • Root out useless events. Windows is the main culprit here, in particular Auditing of Objects. Do you need, for example, all that performance monitoring data? Debug events? Firewall AND NIDS events? Denied AND accepted packets from firewalls?
  • Develop your use cases (see above) and turn off all other logging. You can use filters to achieve this – see the sketch after this list.
  • You can be highly selective about which events are forwarded to the Splunk indexer. One conceptual model just to illustrate the point is given below:
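
On the second and third points, here is a sketch of dropping events at the indexing layer with props.conf and transforms.conf – the sourcetype stanza and event IDs are illustrative only:

# props.conf
[WinEventLog:Security]
TRANSFORMS-drop_noise = drop_noisy_eventcodes

# transforms.conf
[drop_noisy_eventcodes]
REGEX = (?m)^EventCode=(5156|5158)$
DEST_KEY = queue
FORMAT = nullQueue

Events matching the regex are routed to Splunk’s nullQueue and are never indexed, so they don’t count against the license.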

Threat Hunting

Threat Hunting is kind of the sexy offering for the world of defence. Offence has had more than its fair share of glamour offerings over the years; now it’s defence’s turn. Or is it? I mean, I get it – it’s a good thing to put on your profile, and in some cases there are dramatic lines such as “be the hunter or the hunted”.

However, a rational view of “hunting” is that it requires LOTS of resources and LOTS of skill – two commodities that are very scarce. Threat hunting in most cases is the worst kind of resource sinkhole. If you take threat and vulnerability management (TVM) and the kind of basic detection discussed thus far in this article, you have a defence capability that in most cases fits the risk management needs of the organisation. So then there are two questions to ask:

  • How much does threat hunting offer on top of a suitably configured logging and TVM capability? Not much in the best of cases. Especially with credentialed scanning with TVM – there is very little of your attack surface that you cannot cover.
  • How much does threat hunting offer in isolation (i.e. threat hunting with no TVM or logging)? This is the worst case scenario that will end up getting us all fired in security. Don’t do it!!! Just don’t. You will be wide open to attack. This is similar to a TVM program that consists only of one-week penetration tests every 6 months.

Threat Intelligence (TI)

Ok, so here’s a funny story. At a trading house client here in London, around 2016, they were paying a large yellow vendor lots of fazools every month for “threat intelligence”. I couldn’t help noticing a similarity between the output displayed in the portal and what I had seen from the client’s Alienvault. There was a good reason for this: it WAS Alienvault. The feeds were coming from switches and firewalls inside the client network, and clearly $VENDOR was using Alienvault too. So they were paying heaps to see a duplication of the data they already had in their own Alienvault.

That is an extremely bad case, of course – the worst of the worst. But can you expect more value from other threat intelligence feeds? Well… remember what I was saying about the value of an effective TVM and detection program? Ok, I’ll summarise the two main problems with TI:

  • You can achieve LOTS in defence with a good credentialed TVM program plus even a half-decent logging program. I speak as someone with lots of experience in unrestricted penetration testing – believe me, you are well covered with a good TVM and detection SecOps function. You don’t need to be looking at threats, apart from a few caveats… see later.
  • TI from commercial feeds isn’t about your network. It’s about the whole planet. It’s like picking up a newspaper to find out what’s happening in the world, and seeing on the front cover that a butterfly in China has flapped its wings recently.

Where TI can be useful: macro developments and sector-specific developments. For example, a new approach to phishing, a new class of vulnerability in software that you host, or – if you’re in the public sector – your friendly national spy agency picking up on hostile intentions towards you. But I don’t want to know that a new malware payload has been doing the rounds: in the time taken to read the briefing, 2000 new payloads have been released into the wild.

Summary

  • Start out with a design process that takes input from compliance and risk (perhaps threat modelling), and use the resulting requirements to drive the LLD, which may or may not result in a decision to procure tech that meets its requirements.
  • An effective logging capability can only be designed with intimate knowledge of the estate – databases, crown jewels, data flows – for each application. Without such knowledge, it isn’t possible to build even a barely useful logging capability. Call out your generic and custom use cases in your LLD, independent of technology.
  • Get your basic alerting first, correlation can come later, if ever.
  • Outsourcing is a waste of resources for second level SecOps.
  • With SaaS, your SIEM itself is dangerously exposed, and you have no control over what is logged from SaaS log sources.
  • You are not mandated to get a COTS. Think about what it is that you want to achieve. It could be that open source tools across the board work for you.
  • Splunk really is the Lamborghini of SIEMs and the “expensive” tag is unjustified. If you carefully design custom and generic use cases, and remove everything else from indexing, you suddenly don’t have such an expensive logger. You can also aggregate everything in a Syslog pool before it hits Splunk indexers, and be more selective about what gets forwarded.
  • I speak as someone with lots of experience in unrestricted penetration testing: Threat Hunting and Threat Intelligence aren’t worth the effort in most cases.

Fintechs and Security – Part One

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Recruiting and Interviews

In the prologue of this series, I set the scene for what may come to pass in my attempt to relate my experiences with fintechs, based on what I am hearing on the street and what I’ve seen myself. In this next instalment, I look at how fintechs are approaching the conundrum of hiring security specialists, and how, based on typical requirements, things could maybe be improved.

The most common fintech setup is one of public cloud (AWS, Azure, GCP, etc.). They’re developing, or have developed, software for deployment in cloud, with a mobile/web front end. They use devops tools to deploy code, manage and scale (e.g. Kubernetes), collaborate (Git variants), and manage infrastructure (Ansible, Terraform, etc.), and perhaps they do some SAST. Sometimes they even have different Virtual Private Clouds (VPCs) for different levels of code maturity – one for testing, and one for management. And third-party connections with APIs are not uncommon.

Common Pitfalls

  • Fintechs adopt the stance: “we don’t need outside help because we have hipsters. They use acronyms and seem quite confident, and they’re telling me they can handle it”. While it’s not impossible that this can work, it is unlikely that a few devops peeps can give a fintech the help they need – this will become apparent later.
  • Using devops staff to interview security engineers. More on this problem later.
  • Testing security engineers with a list of pre-prepared questions. This is unlikely to end well for the fintech. Security is too wide and deep an area for this approach, and fintechs will reject a lot of good candidates by doing this. Just have a chat! For example, ask the candidate their opinion on the usefulness of VA scanners. The length of the response is as important as its technical accuracy: a long response gives an indication of passion for the field.
  • Getting on the security bandwagon too late (such as when you’re already in production!). Here you are looking at two choices: engage an experienced security hand and ignore their advice, or take their advice and face downtime and massive disruption. Most will choose the first option and run the project at massive business risk.

The Security Challenge

Infosec is important, just as checking to see if cars are approaching before crossing the road is important. And the complexity of infosec mandates architecture. Civil engineering projects use architecture. There’s a good reason for that – which doesn’t need elaborating on.

Whenever you are trying to build something complex with lots of moving parts, architecture is used to reduce the problem down to a manageable size, and to help build good practices in risk management. The end goal is protective monitoring of an infrastructure that is built with requirements for meeting both risk and compliance challenges.

Because of the complexity of the challenge, it’s good to split the challenge into manageable parts. This doesn’t require talking endlessly about frameworks such as SABSA. But the following six capabilities (people, process, technology) approach is sleek and low-footprint enough for fintechs:

  • Threat and Vulnerability Management (TVM)
  • Logging – not “telemetry” or Threat intelligence, or threat hunting. Just logging. Not even necessarily SIEM.
  • Cryptography and Key Management
  • Identity Management
  • Business Continuity Management
  • Trust (network segmentation, firewalls, proxies).

I will cover these six areas in more detail in the next two articles.

The above-mentioned capabilities each have an engineering and an architecture component, which map broadly onto the roles of security engineers and architects. A SABSA-based approach without the SABSA theory can work. So an architect takes risk (maybe with a threat modelling approach) and compliance goals into account in a High Level Design (HLD), and generates requirements for the Low Level Design (LLD), which will be compiled by a security engineer. The LLD gives a breakdown of security controls to meet the requirements of the HLD, and how to configure the controls.

Security Engineers and Devops Tools

What happens when a devops peep interviews a security peep? Well – they only have their own frame of reference to go by, so they will of course ask questions about devops tools. How useful is this approach? Not very. Is this a good test of a security engineer? Based on the security requirements for fintechs, the answer is clear.

Security engineers can use devops tools, and they do, and it doesn’t take a 2-week training course to learn Ansible. There is no great mystery in Kubernetes. If you hire a security engineer with the right background (see the previous post in this series) they will adapt easily. The word on the street is that Terraform config isn’t the greatest mystery in the world, and as long as you know Linux and understand the purpose of the tool (how it fits in, what the expected result is), the time taken to get productive is one day or less.

The point is: if I’m a security engineer and I need to, for example, set up a cloud SIEM collector – some fintechs will use one Infrastructure as Code (IaC) tool, others another; one will use Chef, another Ansible, and there are other permutations. Is a lack of familiarity with the tool a barrier to progress? No. So why would you test a security engineer’s suitability for a fintech role by asking questions about, e.g., stanzas in Ansible config? You need to ask them questions about the six capabilities mentioned above – i.e. security questions for a security professional.

Security Engineers and Clouds

Again – what was the transition period from on-premise to cloud? Let’s take an example: I know how networking works on-premise; how does it work in cloud? There is this thing called a firewall on-premise. In Azure it’s called a Network Security Group. In AWS it’s called a… drum roll… firewall. In Google Cloud it’s called a… firewall. From the web-based admin portal UI, these appear to filter by source and destination addresses and services, just like an actual non-virtual firewall. They can also filter by service account (GCP), or VM tag.
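
As an example of how thin the translation layer is, here’s a sketch of a GCP firewall rule (the names and ranges are assumed):

gcloud compute firewall-rules create allow-app-to-db \
    --network=prod-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:5432 --source-ranges=10.10.10.0/24 \
    --target-service-accounts=db-sa@my-project.iam.gserviceaccount.com

Source, destination, port: the same decisions a firewall engineer has always made, in new clothes.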

There is another thing called a VPN, and another thing called a virtual router. In the world of on-premise, a VPN is a… VPN. A virtual router is a… router. There might be a connection here!

Cloud Service Providers (CSPs) in general don’t re-write IT from the ground up. They still use TCP/IP. They host virtual machines (VMs) instead of real machines, but as VMs have operating systems that security engineers (with the right background) are familiar with, where is the complication here?

The areas that are quite new compared to anything on-premise are those where the CSP has provided some technology for a security capability such as SIEM, secrets management, or identity management. But these are usually sub-standard for the purpose they were designed for – deliberately so: the CSPs want to work with Commercial Off The Shelf (COTS) vendors such as Splunk and Qualys, who will provide an IaaS or SaaS solution.

There is also the subject of different clouds. I see some organisations being fussy about this – e.g. a security engineer who has worked a lot with Azure but not AWS is, apparently, not suitable for a fintech that uses AWS. Well, given that the transition from on-premise to cloud was relatively painless, how painful can the transition from Azure to AWS be? I was on a project last summer where the fintech used Google Cloud Platform. It was my first date with GCP, but I had worked with AWS and Azure before. Was it a problem? No. Do I have an IQ of 160? Hell no!

The Wrap-up

The problems we see in fintech infosec hiring most likely represent a lack of understanding of how fintechs can best manage risk with a budget considerably smaller than a large MNC’s. But we in security haven’t been particularly helpful for fintechs – the problem is on us.

The security challenge for fintechs is not just about SAST/DAST of their code. The challenge is wider, and can be represented as six security capabilities that need to be designed with an architecture and engineering view. This sounds expensive, but it’s a one-off design process that can be covered in a few weeks. The on-going security challenge, whereby capabilities are pushed through into the final security operations stage, can be realised with one or two security engineers.

The lack of understanding of requirements in security leads to some poor hiring practices, the most common of which is to interview a security engineer with a devops guru. The fintech will be rejecting lots of good security engineers with this approach.

In so many ways, the growth of small to medium development houses has exposed the weaknesses in the infosec sector more than they were ever exposed with large organisations. The sector’s inability to help fintechs exposes a fundamental lack of skilled personnel, particularly at the strategic/advisory level.

Fintechs and Security – Prologue

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Fintechs and Security – A Match Made In Heaven?

Well, no. Far from it actually. But again, as I’ve been repeating for 20 years now, it’s not on the fintechs. It’s on us in infosec, and infosec has to take responsibility for these problems in order to change. If I were the CTO of a fintech, I would be confused by the array of opinions and advice, which vary radically from one expert to another.

But there shouldn’t be such confusion with fintech challenges. Confusion only reigns where there’s FUD. FUD manifests itself in the form of over-lengthy coverage of, and excessive focus on, “controls” (the archetypal shopping list of controls to be applied regardless of risk – expensive), GRC, and “hacking”/“[red, blue, purple, yellow, magenta, teal, slate grey] team”/“appsec”.

Really what’s needed is something like this (in order):

  • Threat modelling lite – a one-off, reviewed periodically.
  • Architecture lite – a one-off, reviewed periodically.
  • Engineering lite – a one-off, reviewed periodically.
  • Secops lite – the result of the previous 3 – an on-going protective monitoring capability, the first level of monitoring and response for which can be outsourced to a Managed Service Provider.

I will cover these areas in more detail in later episodes, but what’s needed is, for example, a security design that only answers “What is the problem? How are we going to solve it?” – a SIEM capability design, for example – not more than 20 pages. No theory. Not even any justifications. And one that can be consumed by non-security folk (i.e. written in the language of business and IT).

Fintechs and SMBs – How Is The Infosec Challenge Unique?

With a lower budget, there is less room for error. Poor security advice can co-exist with business almost seamlessly in the case of larger organisations. Not so with fintechs and Small and Medium Businesses (SMBs). There have been cases of SMBs going under as a result of a security incident, whereas larger businesses don’t even see a hit on their share price.

Look For A Generalist – They Do Exist!

The term “generalist” is seen as a four-letter word in some infosec circles. But it is possible for one or two generalists to cover the needs of a fintech at green-field, and then, going forward into operations, it’s not unrealistic to work with one in-house security engineer of the right background, the key ingredients of which are:

  • Spent at least 5 years in IT, in a complex production environment, and outgrew the role.
  • Has flexibility – the old example still applies today: a Unix fan who has tinkered with Windows. In other words, a technology lover. One who has shown interest in networking even though they’re not a network engineer by trade, or who sought to improve efficiency by automating a task with shell scripting.
  • Has an attack mindset – without this, how can they evaluate risk or confidently justify a safeguard?

I have seen some crazy specialisations in larger organisations, e.g. “Websense Security Engineer”! If fintechs approached security staffing in the same way as larger organisations, they would have more security staff than developers, which is of course ridiculous.

So What’s Next?

In “On Hiring For DevSecOps” I covered some common pitfalls in hiring and explained the role of a security engineer and architect.

There are “fallback” or “retreat” positions in larger organisations and fintechs alike, wherein executive decisions are made to reduce the effort down to a less-than-advisable position:

  • Larger organisations: a compliance-driven strategy as opposed to a risk-based strategy. Because of a lack of trustworthy security input, execs end up saying “OK, I give up – what’s the bottom line of what’s absolutely needed?”
  • Fintechs: application security. The connection is made between application development and application security – which is quite valid, but the challenge is wider. Again, the only blame I would attribute here is with infosec. Having said that, I noticed this year that “threat modelling” has started to creep into job descriptions for Security Engineers.

So for later episodes – of course the areas to cover in security are wider than appsec, but again there is no great complication or drama or arm-waving:

  • Part One – Hiring and Interviews – I expand on “On Hiring For DevSecOps“. I noticed some disturbing trends in 2019 and I cover these in more detail.
  • Part Two – Security Architecture and Engineering I – Threat and Vulnerability Management (TVM)
  • Part Three – Security Architecture and Engineering II – Logging (not necessarily SIEM). No Threat Hunting, Telemetry, or Threat “Intelligence”. No. Just logging. This is as sexy as it needs to be. Any more sexy than this should be illegal.
  • Part Four – Security Architecture and Engineering III – Identity Management (IDAM) and Cryptography and Key Management (CKM).
  • Part Five – Security Architecture and Engineering IV – Trust (network trust boundary controls – e.g. firewalls and forward proxies), and Business Resilience Management (BRM).

I will try to get the first episode, on hiring and interviewing, out before 2020 hits us, but I can’t make any promises!

On Hiring For DevSecOps

Based on personal experience, and second hand reports, there’s still some confusion out there that results in lots of wasted time for job seekers, hiring organisations, and recruitment agents.

There is a tendency to blame recruiters for any hiring difficulties, but we need to stop that. There are some who try to do the right thing but are limited by a lack of sector experience. Others have been inspired by Wolf Of Wall Street while trying to sound like Simon Cowell.

It’s on the hiring organisation? Well, it is, but let’s take responsibility for the problem as a sector for a change. Infosec likes to shift responsibility and not take ownership of the problem. We blame CEOs, users, vendors, recruiters, dogs, cats, “Russia“, “China” – anyone but ourselves. Could it be we failed as a sector to raise awareness, both internally and externally?

So What Are Common Understandings Of Security Roles?

After 25+ years we still don’t have universally accepted role descriptions, but at least we can say that some patterns are emerging. Security roles involve looking at risk holistically, and sometimes advising on how to deal with risk:

  • Security Engineers assess risk, and design and sometimes also implement controls. BTW some sectors, legal in particular, still struggle with this. Someone who installs security products is in an IT ops role. Someone who upgrades and maintains a firewall is in an IT ops role. The fact that a firewall is a security control doesn’t make these security engineering functions.
  • Security Architects take risk and compliance goals into account when they formulate requirements for engineers.
  • Security Analysts are usually level 2 SOC analysts, who make risk assessments in response to an alert or vulnerability, and act accordingly.

This subject evokes as much emotion as CISSP. There are lots of opinions out there. We owe it to ourselves to be objective. There are plenty of sources of information on these role definitions.

No Aspect Of Risk Assessment != Security. This is Devops.

If there is no aspect of risk involved with a role, you shouldn’t be looking for a security professional. You are looking for DEVOPS peeps, not security peeps.

If you want a resource to install and configure tools in cloud – that is DEVOPS. It is not Devsecops. It is not Security Engineering or Architecture. It is not Landscape Architecture or Accounting. It is not Professional Dog Walker. It is DEVOPS, and you should hire a DEVOPS person. If you want a resource to install and configure appsec tools for CI/CD – that is DEVOPS. If you want a resource to advise on or address findings from appsec tools, that is a Security Analyst in the first case, and DEVSECOPS in the second. In the second case you can hire a security bod with coding experience – they do exist.

Ok Then So What Does A DevSecOps Beast Look Like?

DevSecOps peeps have an attack mindset from their time served in appsec/pen testing, and are able to take on board the holistic view of risk across multiple technologies. They are also coders, and can easily adapt to and learn multiple different devops tools. This is not a role for newly graduated peeps.

Doing Security With Non-Security Professionals Is At Best Highly Expensive

Another important point: what usually happens because of the skills gap in infosec:

  • Cloud: devops fills the gap.
  • On-premise: Network Engineers fill the gap.

Why doesn’t this work? I’ve met lots of folk who wear the aforementioned badges. Lots of them understand what security controls are for. Lots of them understand what XSS is. But what none of them understand is risk – that only comes from having an attack mindset. The result will usually be overspend: every security control ever conceived by humans will be deployed, while the infrastructure remains full of holes (e.g. a default-install IDS and WAF is generally fairly useless and comes with a high price tag).

Vulnerability assessment is heavily impacted by not engaging security peeps. Devops peeps can deploy code testing tools and interpret the output, but the lack of a holistic view or an attack mindset will result in either no response to a vulnerability, or an excessive response. Basically, the Threat and Vulnerability Management capability is broken under these circumstances – a sadly very common scenario.

SIEM/logging is heavily impacted. What happens is either nothing (default logging – “we have Stackdriver, we’re ok”), or a SIEM tool is provisioned which becomes a black hole for events and also budgets: all possible events are configured from every log source. Not so great. No custom use cases will be developed. The capability will cost zillions while also not alerting when something bad is going down.

Identity management is not a matter of deploying a ForgeRock (please know what you’re getting into with this – it’s a fork of Sun Microsystems/Oracle’s identity management show) or an Azure AD and that’s it, job done. If you just deploy this with no thought about the problem you’re trying to solve in identity management, you will be fired.

One of the classic risk problems that emerges when no security input is taken: “there is no personally identifiable information in development Virtual Private Clouds, so there is no need for security controls”. Well – development environments leak intelligence, such as database schemas – attackers love this. And don’t you want your code to be safe and available?

You see a pattern here. It’s all or nothing. Either of which ends up being very expensive or worse. But actually come to think of it, expensive is the goal in some cases. Hold that thought maybe.

A Final Word

So – if the word risk doesn’t appear anywhere in the job description, it is nothing to do with security; you are looking for devops peeps in this case. And – security is an important consideration for cloud migrations.

Prevalent DNS Attacks – is DNSSEC The Answer?

Recently the venerable Brian Krebs covered a mass-DNS hijacking attack wherein suspected Iranian attackers intercepted highly sensitive traffic from public and private organisations. Over the course of the last decade, DNS issues such as cache poisoning and response/request hijacking have caused financial headaches for many organisations.

Wired does occasionally dip into the world of infosec when there’s something major to cover, as they did here, and Arstechnica published an article in January this year that quotes warnings about DNS issues from Federal authorities and private researchers. Interestingly DNSSEC isn’t covered in either of these.

The eggheads behind the Domain Name System Security Extensions (obvious really – you could have worked that out from ‘DNSSEC’) are keeping out of the limelight, and it’s unknown exactly how DNSSEC was conceived, although if you like RFCs (and who doesn’t?) there is a strong clue in RFC 3833 – 2004 was a fine year for RFCs.

The idea that responses from DNS servers may be untrustworthy goes way back – indeed, the Council of Elrond behind RFC 3833 called out 1993 as the year the discussion was introduced, but the idea was quashed: the threats were not clearly seen in the early 90s. An even more exploitable issue then was the lack of access control within networks, as the concept of private networks with firewalls at choke points was far from widespread.

DNSSEC Summarised

For a well-balanced look at DNSSEC, check Cloudflare’s version. Here’s the headline paragraph, which serves as a decent summary: “DNSSEC creates a secure domain name system by adding cryptographic signatures to existing DNS records. These digital signatures are stored in DNS name servers alongside common record types like A, AAAA, MX, CNAME, etc. By checking its associated signature, you can verify that a requested DNS record comes from its authoritative name server and wasn’t altered en-route, opposed to a fake record injected in a man-in-the-middle attack.”

DNSSEC Gripes

There is no such thing as a “quick look” at a technical coverage of DNSSEC. There is no “bird’s eye view” aside from “it’s used for DNS authentication”. It is complex – so much so that it’s amazing it works at all. It is PKI-like in its complexity, but PKIs do not generally live almost entirely on the public Internet – the place where nothing bad ever happened and everything is always available.

The resources required to make DNSSEC work, with key rotation, are not negligible. A common scenario: architecture designs call out a requirement for authentication of DNS responses in the HLD, then the LLD speaks of DNSSEC. But you have to ask yourself – how do client-side resolvers know what good looks like? If you’re verifying digital signatures, doesn’t the client need to know what a good signature is? There’s considerable work needed to get, for example, a Windows 10/Server 2012 environment DNSSEC-ready: client-side configuration.
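
You can poke at this yourself with dig (8.8.8.8, Google’s public resolver, validates DNSSEC; the domain is just an example of a signed zone):

# Request DNSSEC records: a signed zone returns RRSIG records in the answer
dig +dnssec cloudflare.com A

# A validating resolver sets the 'ad' (authenticated data) flag for signed zones
dig @8.8.8.8 +dnssec cloudflare.com A | grep flags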

DNSSEC is far from ubiquitous. Indeed – here’s a glaring example of that:


iantibble$ dig update.microsoft.com dnskey


So, maybe I’m missing something, but I’m not seeing any Resource Records for DNSSEC here. And that’s bad – especially when threat modelling tells us that in some architectures, controls can mitigate the risk of most attack vectors, but if WSUS can’t make a call on whether it’s pulling patches from an authentic source, the door is open for attackers to introduce bad stuff into the network. DNSSEC isn’t going to help in this case.

Overall, the provision of DNSSEC RRs for .com domains is less than 10%, and there are some interesting stats here that show that the most commonly used domain name registrars do not allow users to add DNSSEC records even if they want to.

Don’t forget key rotation – DNSSEC is subject to key management. The main problem with cryptography in the business world has been less about brute-forcing keys and exploiting algorithm weaknesses than it has been about key management weaknesses – keys need to be stored, rotated, and transported securely. Here’s an example of an epic fail in this area, in this case with the NSA’s IAD site. The page linked to by that tweet has gone missing.

For an organisation wishing to authenticate DNS responses, DNSSEC really does have to be ubiquitous – and that can be a challenge with mobile/remote workers. In the article linked above from Brian Krebs, the point was made that the two organisations involved are both vocal proponents and adopters of DNSSEC, but quoting from Brian’s article: “On Jan. 2, 2019 — the same day the DNSpionage hackers went after Netnod’s internal email system — they also targeted PCH directly, obtaining SSL certificates from Comodo for two PCH domains that handle internal email for the company. Woodcock said PCH’s reliance on DNSSEC almost completely blocked that attack, but that it managed to snare email credentials for two employees who were traveling at the time. Those employees’ mobile devices were downloading company email via hotel wireless networks that — as a prerequisite for using the wireless service — forced their devices to use the hotel’s DNS servers, not PCH’s DNSSEC-enabled systems.”

Conclusion

Organisations do need to take DNS security more seriously – based on what I’ve seen, most are not even logging DNS queries and answers, and occasionally even OS and app layer logs are AWOL on the servers that handle these requests (servers that are typically serving AD to the organisation, in a MS Windows world!).

But we do need DNS. The alternative is manually configuring IP addresses in a load-balanced and forward-proxied world where the origin IP address of web services isn’t at all clear. We are really back in pen and paper territory without DNS. And there’s also no real, planet-earth alternative to DNSSEC.

DNSSEC does actually work as intended, and it’s a technically sound concept; as in Brian’s article, it has thwarted or delayed attacks. It comes with the management costs of any key management system, and relies on private and public organisations to DNSSEC-ize themselves (as well as manage their keys).

While I regard myself as an advocate of DNSSEC deployment, it’s clear there are legitimate criticisms of DNSSEC. But we need some way of authenticating the answers we receive from public DNS servers, and DNSSEC is a key management system that works in principle.

If the private sector applies enough pressure, we won’t be seeing so many articles about either DNS attacks or DNSSEC, because it will be one of those aspects of engineering that has been addressed and seen as a mandatory aspect of security architecture.

Clouds and Vulnerability Management

In the world of clouds and vulnerability management, based on observations, it seems a critical issue has slipped under the radar: if you’re running with PaaS and SaaS VMs, you cannot deliver anything close to a respectable level of vulnerability management on these platforms. This is because to do effective vulnerability management, the first part of the process – the vulnerability assessment – needs to be performed with administrative access (over SSH/SMB), and with PaaS and SaaS you, as a customer, do not have such access (this is part of your agreement with the cloud provider). The rest of this article explains this issue in more detail.

The main reason for the clouding (sorry) of this issue is what is still, after 20+ years, a fairly widespread lack of awareness of the ineffectiveness of unauthenticated vulnerability scanning. More and more security managers are becoming aware that credentialed scans are the only way to go, but with a lack of objective survey data available, I can only draw on my own experiences. See – I’m one of those disgraceful contracting/consultant types; I’ve been doing security for almost 20 years, and been intimate with a good number of large organisations, and with each year that passes I can say that more organisations are waking up to the limitations of unauthenticated scanning. But there are still lots who don’t clearly see those limitations.

The original Nessus from the late 90s, now with Tenable, is a great product in terms of doing what it was intended to do. But false negatives were never a concern in the design of Nessus. OpenVAS is still open source and available, and it is also a great tool from the point of view of doing what it was intended to do. But if these tools are your sole source of vulnerability data, you are effectively running blind.

By the way, Tenable do offer a product that covers credentialed scans for enterprises, but I have not had any hands-on experience with it. I do have hands-on experience with the other market leaders’ products. By and large they all fall some way short, but that’s a subject for another day.

Unauthenticated scanners all do the same thing:

  • port scan to find open ports
  • grab service banners – this is the equivalent of nmap -sV, and in fact, as most of these tools use nmap libraries, it is _exactly_ that
  • let’s say our tool finds Apache HTTP Server 2.4.x: it looks in its database of publicly disclosed vulnerabilities for that version of Apache, and spews out everything it finds. The tools generally do little in the way of actually probing with HTTP methods, for example, and they certainly were not designed to try, say, a buffer overflow exploit attempt. They report lots of ‘noise’ in the way of false positives, but false negatives are the real concern – a sketch of the whole approach follows this list.
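
The whole approach boils down to something like this sketch (the version-to-CVE mapping is illustrative, not real scanner data):

import socket

# What an unauthenticated scanner effectively does: connect, read a
# banner, and match the version string against a vulnerability database.
KNOWN_ISSUES = {
    "Apache/2.4.49": "CVE-2021-41773 (path traversal)",
}

def grab_http_server_header(host, port=80, timeout=3):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        reply = s.recv(4096).decode(errors="replace")
    for line in reply.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

banner = grab_http_server_header("192.0.2.10")
match = "no match in database"
if banner:
    for version, cve in KNOWN_ISSUES.items():
        if banner.startswith(version):
            match = cve
print(banner, "->", match)

Note what it never sees: whether the package was back-patched, what’s scheduled to run locally, how the file system is permissioned – everything that drives the false negative problem.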

So really the tools are doing a port scan, and then telling you you’re running old warez. Conficker is still very widespread and is the ultimate player in the ‘Pee’ arena (the ‘Pee’ in APT – persistence). An unauthenticated scanner doesn’t have enough visibility ‘under the hood’ to tell you whether you are going to be the next Conficker victim, or the next ransomware victim. Of the Linux vulnerabilities reported in the past few years – e.g. Heartbleed, Ghost, DirtyCOW – very few can be detected with an unauthenticated scanner, and none of these three can.

Credentialed scanning really is the only way to go. Credentialed scanners are configured with root/administrative access to targets and are therefore in a position to ‘see’ everything.

The Connection With PaaS and SaaS

So how does this all relate to cloud? Well, there are two of the three cloud service models where a lack of access to the operating system command shell becomes a problem – and from this description it’s fairly clear these are PaaS and SaaS.

Two common delusions abound in this area:

  • [Cloud maker] handles platform configuration, and therefore vulnerability management, for me – so that’s ok, no need to worry:
    • Cloud makers like AWS and Azure will deal with patches, but security concerns are much wider, and operating systems are big and complex. No patches exist for 0days, and in space, nobody can hear you scream.
    • Many vulnerabilities arise from OS configuration aspects that cannot be removed with a patch. Conficker was mentioned above: some Conficker versions (yes, it’s managed very professionally) use ‘at’ job scheduling to remain present even after MS08-067 is patched. If, for example, you use Azure, Microsoft manage your PaaS and SaaS, but they don’t know whether you want to use ‘at’ or not. It’s safer for them to assume you do, so they leave it enabled (when you sign up for PaaS or SaaS, you are removed from the decision making here). The same applies to many other local services and file system permissions that are very popular with the dark side – see the sketch after this list.
  • ‘Unauthenticated scanning gets me some of the way, it’s good enough’ – how much of the way does it get you? Less than half way? It’s more like 5% really. Remember, it’s little more than a port scan, and you shouldn’t need a scanner to tell you you’re running old software. Certainly for critical cloud VMs, this is a problem.
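
For contrast, the kind of check only a credentialed assessment can make, sketched in Python and run on the target itself (the string matched is illustrative only):

import subprocess

# Enumerate scheduled tasks - visibility an unauthenticated scan never has
out = subprocess.run(
    ["schtasks", "/query", "/fo", "csv", "/v"],
    capture_output=True, text=True,
).stdout

for line in out.splitlines():
    if "rundll32" in line.lower():
        print("worth a look:", line)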

With PaaS and SaaS, you are handing over the management of large and complex operating systems to cloud providers, who are perfectly justified, and also in many cases perfectly wise, in leaving open large security holes in your platforms, and as part of your agreement with them, there’s not a thing you can do about it (other than switch to IaaS or on-premise).

Addressing The Information Security Skills Gap

We are told there is a skills gap in information security. I agree – there is, but recent suggestions to address the gap take us to dangerous places that are great for recruitment agencies, but not so great for the business world.

I want to steer away from use of the phrase ‘skills’ in this article because it’s too micro and the phrase has been violated by modern hiring practices. We are not looking for ‘Websense’ skills, ‘DLP’ skills, or as i saw recently, ‘HSM’ skills. These requirements are silly unless it is the plan for organisations to spend 10 to 50 times more than they need on human resource, and have a security team of 300. It’s healthier for organisations to look at ‘habits’ or ‘backgrounds’, and along those lines, in information security we’re looking for the following:

  • At least 5 years in an IT discipline: sys admin, DBA, devops bod, programmer for example
  • Evidence of having excelled in those positions and sort of grown out of them
  • Flexibility: for example, the crusty Radagast BSD-derivative disciple who has no fundamentalist views of other operating systems (think ‘Windows’), and who not only can happily work with something like Active Directory, but actually loves working with it
  • A good-to-have-but-not-critical is past evidence of breaking or making things, but this should be seen as a nice bonus. In its own right, it is insufficient – recruiting from hacker confz is far from guaranteed to work – too much to cover here

So really it should be seen that a career in infosec is a sort of ‘graduation’ on from other IT vocations. There should be an entrance exam based on core technologies and penetration testing. The career progression path goes something like: Analyst (5 years) -> Consultant -> Architect/Manager. Managers and architects cannot be effective if they do not have a solid IT background. An architect who doesn’t know her way around a Cisco router, cannot implement a new SIEM correlation rule, or cannot run or interpret the output from a packet sniffer is not an architect.

Analysts and Consultants should be skilled with the core building blocks to the level of being confidently handed administrative access to production systems. As it is, security pros find it hard to even get read-only access to firewall management suites. And fast access to information on firewall rules can be critical.

Some may believe that individuals fitting the above profile are hard to find, and they’d be right. However, with the aforementioned model, the workforce will change from lots of people with micro-skills or product-based pseudo skills, to fewer people who are just fast learners and whose core areas complement each other. If you consider that a team of 300 could be reduced to 6 – the game has changed beyond recognition.

Quoting a recent article: “The most in demand cyber security certifications were Security+, Ethical Hacking, Network+, CISSP, and A+. The most in demand skills were Ethical Hacking, Computer Forensics, CISSP, Malware Analysis, and Advanced Penetration Testing”. There are more problems with this than can be covered in a reasonable time frame, but none of these should ever be called ‘skills’. Of these, Penetration Testing (leave out the ‘ethics’ qualifier because it adds a distasteful layer of judgment on top of the law) is the only one that should be called a specialisation in its own right.

And yes, Governance, Risk and Control (GRC) is an area that needs addressing, but this must be the role of the Information Security Manager. There is a connection between Information Security Manage-ment and Information Security Manage-er.  Some organisations have separate GRC functions, the UK public sector usually has dedicated “assurance” functions, and as i’ve seen with some law firms, they are separated from the rest of security and IT.  Decision making on risk acceptance or mitigation, and areas such as Information Classification, MUST have an IT input and this is the role of the Information Security Manager. There must be one holistic security team consisting of a few individuals and one Information Security Manager.

In security we should not be leaving the impression that one can leave higher education, take a course in forensics, get accreditation, and then go and get a job in forensics. This is not bridging the security skills gap – it’s adding security costs with scant return. If you know something about forensics (usually this will be seen as ‘Encase‘ by the uninitiated) but don’t even have the IT background, let alone the security background, you will not know where to look in an investigation, or have a picture of risk. You will not have an inkling of how systems are compromised or the macro-techniques used by malware authors. So you may know how to use Encase and take an integral disk image for example, but that will be the limit of your contribution. Doesn’t sound like a particularly rewarding way to spend 200 business days per year? You’d be right.

Sticking with the forensics theme: an Analyst with the right mindset can contribute effectively in an incident investigation from day one. There are some brief aspects of incident response for them to consider, but it is not advisable to view forensics/incident response as a deep area. We can call it a specialisation, just as an involuntary action such as breathing is a specialisation, but if we do, we are saying that it takes more than one person to change a light bulb.

Incident response from the organisational / Incident Response Plan (IRP) formation point of view is a one-day training course or a few hours of reading. The tech aspects are 99% not distinct from the core areas of IT and network security. This is not a specialisation.

Other areas such as DLP, Threat Intelligence, SIEM, Cryptography and Key Management – these can be easily adopted by the right security minds. And with regard to security products – it should be seen that security professionals are picking up new tools on-the-fly and don’t need 2-week training courses that cost $4000. Some of the tools in the VM and proxy space are GUIs for older open source efforts such as Nessus, OpenVAS, and Squid with which they will be well-versed, and if they’re not, it will take an hour to pick up the essentials.

There’s been a lot of talk of Operating Systems (OS) thus far. Operating Systems are not ‘a thing from 1998’. Take an old idea that has been labelled ‘modern’ as an example: ok, let’s go with ‘Cloud’. Clouds have operating systems. VMs deployed to clouds have operating systems. When we deploy a critical service to a cloud, we cannot ignore the OS even if it’s a PaaS deployment. So in security we need people who can view an OS in the same way that a hacker views an OS – we need to think about Kill Chains and local privilege elevations. The Threat and Vulnerability Management (TVM) challenge does not disappear just because you have PaaS’d everything. Moreover, if you have PaaS’d everything, you have immediately lost the TVM battle. As Beaker famously said in his cloud presentation – “Platforms Bitches”. Popular OS like Windows and the *nix family, and popular applications such as Oracle Database, are going to be around for some time yet, and it’s at the OS where the front lines are drawn.

Also, here is a common misconception that does not work: a secops/network engineer going straight into security with no evidence of interest in other areas. ‘Secops’ is not good preparation for a security career, mainly because secops is a sort of purgatory. Just as “there is no Dana, only Zuul“, so “there is no secops, only ops”. There is only a security element to these roles because the role covers operational processes with security products. That is anti-security.

All Analyst roles should have an element of penetration testing and appsec, and when I say penetration testing, i do mean unrestricted testing as in an actual simulation. That means no restrictions on exploit usage or source address – because attackers do not have such restrictions. Why spend on this type of testing if it’s not an actual simulation?

Usage of Cisco Discovery Protocol (CDP) offers a good example of how a lack of penetration testing experience can impede a security team. If security is being done even marginally professionally in an organisation, there will exist a security standard for Cisco network devices that mandates the disabling of CDP.  But once asked to disable CDP, network ops teams will want justification. Any experienced penetration tester knows the value of intelligence in expediting the attack effort and CDP is a relative gold mine of intelligence that is blasted multicast around networks. It can, and often does, reveal the identity and IP address of a core switch. But without the testing experience or knowledge of how attacks actually go down, the point will be lost, and the confidence missing from the advisory.
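
For the sceptical network ops team, the attacker’s view is easy to demonstrate. Here is a rough sketch – assuming scapy with its CDP contrib layer (the CDPv2_HDR, CDPMsgDeviceID and CDPMsgPlatform classes) and a capture-capable interface – of passively harvesting CDP from a segment:

    from scapy.all import sniff
    from scapy.contrib.cdp import CDPv2_HDR, CDPMsgDeviceID, CDPMsgPlatform

    def report(pkt):
        if CDPv2_HDR in pkt:
            # The Device ID is often the switch hostname; Platform gives the model.
            dev = pkt[CDPMsgDeviceID].val if CDPMsgDeviceID in pkt else b"?"
            plat = pkt[CDPMsgPlatform].val if CDPMsgPlatform in pkt else b"?"
            print("CDP neighbour:", dev, plat)

    # CDP frames are multicast to 01:00:0c:cc:cc:cc, typically every 60 seconds.
    sniff(filter="ether dst 01:00:0c:cc:cc:cc", prn=report, store=False)

A few minutes of that on the right segment and the attacker has the make, model and often the management address of your core switch – for free.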

The points i’ve just covered are not actually ground-breaking at all. Analysts with a good core background of IT and network security can easily move into any new area that marketeers can dream up.

There is an intuition that Information Security has a connection with Information Technology, if only for the common word in them both (that was ‘Information’ by the way, in case you didn’t get it). However, as Upton Sinclair said “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”.

And please don’t create specialisations for Big Data or Internet of Things…woops, too late.

So, consider a small team of enthusiastic, flexible, fast learners, rather than a large team of people who must be trained at high cost to understand the UI of an application that was designed in the international language – English – and to be intuitive and easy to learn.

Consider using one person to change a light bulb, and don’t be the butt of future jokes.

Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one, but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with core skill groups that complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill” or a product, or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP Tippingpoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 a skill, whereas the previous version is another skill? So if a person has experience with 7.5, are they not qualified to apply at a shop where the previous version is used? OK, this is taking it to the extreme, but i dare say there have been cases where this analogy is applicable.

How long does it take a person to get “skilled up” with HP Arcsight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again, is Symantec CCS a skill? No – it’s a product. It’s not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – then they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell before, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment, i need to supply the tool with IP addresses of targets and i need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And I don’t want to configure the same test every time I run it, maybe this “templates” tab might be able to help me? Do i need a $4000 2-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A product such as a Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI. It has been tailored to be intuitive to use. It’s the thin layer on top of the Vulnerability Management solution, and the solution itself is much bigger than this. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help too much with the bigger picture.

Vulnerability management has been around for decades. Then along came commercial products, which basically just slapped a GUI on processes and practices that had existed for 20+ years, after which the jobs market decided to call the product the solution. The product is the skill now, whereas really it’s vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself, but it is one that should be built in by default – inherent in all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas holding large volumes of intellectual capital. These are core technologies. Say a person has 5+ years’ experience working in Unix environments as a system administrator and has shown an interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic idea. HP uses “FlexConnectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders” and so on. But guess what? I understand English, but if i don’t know what the words mean, i can check in an online dictionary. I know what a SIEM is designed to do and i get the data flows and architecture concept. Network aggregation of syslog and Windows Events is not an alien concept to me, and neither is any layer of the TCP/IP stack (a really basic requirement for all Analysts – or it should be). Therefore i can adapt very easily to new tools in this space.

IPS/IDS and firewalls? Well, they’re not even very functional devices. If you have ever set up Snort or iptables you’ll be fine with whatever product is out there. Recently another Consultant and I were asked to configure a Tippingpoint device. We were up and running in 10 minutes. There were a few small items that we needed to check against the product documentation. I have 15+ years’ experience in the field but the other guy is new. Nonetheless, he had configured another IPS product before, and he was immediately up and running with the product – no problem. Of course, deciding what to configure in the rule base is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security – it’s not specific to a product.

I’ve seen some truly crazy job specifications. One i saw was Websense Specialist!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would probably be filled by a Websense “Olympian”. But what sort of job is that? Carpe Diem my friends, Carpe Diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, i don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resource and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire human resource able to cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.

Scangate Re-visited: Vulnerability Scanners Uncovered

I have covered VA tools before but I feel that one year later, the same misconceptions prevail. The notion that VA tools really can be used to give a decent picture of vulnerability is still heavily embedded, and that notion in itself presents a serious vulnerability for businesses.

A more concise run-down on the functionality of VA warez may be worth a try. At least let’s give it one last shot. On second thoughts, no, don’t shoot anything.

Actually forget “positive” or “negative” views on VAs before reading this. I am just going to present the facts based on what I know myself and of course I’m open to logical, objective discussion. I may have missed something.

Why the focus on VA? Well, the tools are still so commonplace and heavily used and I don’t believe that’s in our best interests.

What I discovered many years ago (it was actually 2002 at first) was that discussions around these tools can evoke some quite emotional responses. “Emotional”, you ask? Yes. I mean, when you think about it, whole empires have been built using these tools. The tools are so widespread in security and used as the basis of corporate VM programs. VM market revenues run at around 1 billion USD annually. Songs and poems have been written about VAs – OK, I can’t back that up, but careers have been built, and whole enterprise-level security software suites built, using a nasty open source VA engine.

I presented on the subject of automation in VA all those years ago, and put forward a notion that running VA tools doesn’t carry much more value as compared to something like this: nmap -v -sS -sV <targets> . Any Security Analyst worth their weight in spam would see open ports and service banners, and quickly deduce vulnerability from this limited perspective. “Limited”, maybe, but is a typical VA tool in a better position to interrogate a target autotragically?

One pre-qualifier I need to throw out is that the type of scanners I will discuss here are Nessus-like scanners, the modus operandi of which is to use unauthenticated means to scan a target. Nessus itself isn’t the main focus but it’s the tool that’s most well known and widely used. The others do not present any major advantages over Nessus. In fact Nessus is really as good as it gets. There’s a highly limited potential with these tools and Nessus reaches that limit.

Over the course of my infosec career I have had the privilege to be in a position where I have been coerced into using VAs extensively, and spent many long hours investigating false positives. In many cases I set up a dummy Linux target and used a packet sniffer to deduce what the tool was doing. As a summary, the findings were approximately:

  • Out of the 1000s of tests, or “patterns”, configured in the tools, only a few have the potential to result in accurate/useful findings. Some examples of these are SNMP community string tests, and tests for plain text services (e.g. telnet, FTP).
  • The vast majority of the other tests merely grab a service “banner”. For example, the tool port scans, finds an open port 80 TCP, then runs a test to grab a service banner (e.g. Apache 2.2.22, mickey mouse plug-in, bla bla). I was sort of expecting the tool to do some more probing having found a specific service and version, but in most cases it does not.
  • The tool, having found what it thinks is a certain application layer service and version, then correlates its finding with its database of public disclosed vulnerabilities for the detected service.

Even for some of the plain text services, some of the tests which have the potential to reveal useful findings have been botched by the developers. For example, tests for anonymous FTP only work with one very specific flavour of FTP daemon. Other FTP daemons return different messages for successful anonymous logins, and the tool does not accommodate this.
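
The fix is not exotic: key the check off the FTP reply code rather than one daemon’s message text. A minimal sketch using Python’s ftplib (the probe e-mail address is a placeholder):

    from ftplib import FTP, all_errors

    def anonymous_ftp_allowed(host, port=21):
        # ftplib judges success by the 3-digit reply code (230 = logged in),
        # not by the daemon's free-text message - the detail the botched
        # tests miss.
        try:
            ftp = FTP()
            ftp.connect(host, port, timeout=5)
            ftp.login("anonymous", "probe@example.com")
            ftp.quit()
            return True
        except all_errors:
            return False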

Also, what happens if a service is moved from its default port? I had some spectacular failures running Nessus against an FTP service on port 1980 TCP (usually it listens on port 21). Different timing options were tested. Nessus uses an nmap engine for port scanning, but nmap by itself is usually able to find non-default port services using default settings.
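
To be concrete – and do verify this against your own targets – a plain nmap -sV -p 1980 <target> with otherwise default settings will usually come back with the service identified as FTP, banner and all.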

So in summary, what the VA tools do is mostly just report that you are running ridiculous unencrypted blast-from-the-past services or old, down-level services – maybe. Really I would hope security teams wouldn’t need to spend 25K USD on an enterprise solution to tell them this.

False positives are one thing, but false negatives are quite another. Popular magazines always report something like a 50% success rate in finding vulnerabilities in staged tests. Why is it always 50%? Remember also that the product under testing is usually one from a vendor who pays for a full-spread ad in that magazine.

Putting numbers to false negatives makes little sense with huge, complex software packages of millions of lines of source code. However, it occurred to me not so long ago whilst doing some white box testing on a client’s critical infrastructure: how many of the vulnerabilities under testing could possibly be discovered by use of a VA tool? In the case of Oracle Database the answer was less than 5%. And when we’re talking Oracle, we’re usually talking critical, as in crown jewels critical.

If nothing else, the main aspect I would hope the reader would take out of this discussion is about expectation. The expectation that is set by marketing people with VA tools is that the tools really can be used to accurately detect a wide range of vulnerability, and you can bet your business on the tools by using them to test critical infrastructure. Ladies and gentlemen: please don’t be deceived by this!

Can you safely replace manual testing with use of these tools? Yes, but only if the target has zero value to the business.