Windows SIEM – Optimizing Events Volume with CIS Benchmarks and AuditpolCIS

In our 2021 blog post, we focused on quick wins for optimizing Windows Events, and provided a free spreadsheet (really free, not even a regwall) that flagged Windows Events that can be safely ignored – some of which cost a lot for SIEM engines to ingest. This post takes a broader Windows Audit Policy view, and offers another free resource: a spreadsheet that compares your Windows Audit Policy setup against the venerable CIS Benchmark for Windows Server 2019.

If there’s sufficient interest, I’ll follow up with a development effort for a Python tool (also freely available, on GitHub) that connects to your Windows server and performs the CIS Benchmark assessment as indicated in the spreadsheet.
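
If that sounds useful, here is a minimal sketch of the idea – assuming SSH access to the target (Windows Server ships an optional OpenSSH server) and the paramiko library; the hostname, username, key path, and baseline entries are illustrative assumptions, with the authoritative values living in the spreadsheet/CIS Benchmark:

    # Sketch only: pull the effective audit policy with auditpol and
    # diff it against CIS-recommended settings. Hostname, username, key
    # path and the baseline entries below are illustrative assumptions.
    import os
    import paramiko

    CIS_BASELINE = {
        "Credential Validation": "Success and Failure",
        "Security Group Management": "Success",
        "Directory Service Access": "No Auditing",  # our take - see above
    }

    def fetch_audit_policy(host, user, key_file):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_file)
        _, stdout, _ = client.exec_command("auditpol /get /category:*")
        output = stdout.read().decode("utf-8", errors="replace")
        client.close()
        policy = {}
        for line in output.splitlines():
            # auditpol rows look like: "  <Subcategory>    <Setting>"
            parts = line.strip().rsplit("  ", 1)
            if len(parts) == 2:
                policy[parts[0].strip()] = parts[1].strip()
        return policy

    def compare(policy):
        for subcat, wanted in CIS_BASELINE.items():
            actual = policy.get(subcat, "<not found>")
            status = "OK" if actual == wanted else "DRIFT"
            print(f"[{status}] {subcat}: have '{actual}', want '{wanted}'")

    if __name__ == "__main__":
        key = os.path.expanduser("~/.ssh/id_rsa")
        compare(fetch_audit_policy("winserver2019", "auditor", key))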

SIEM Nightmares

Based on many first-hand observations and second-hand accounts, it’s not a stretch to say that many organisations are suffering from SIEM configuration issues, the result of which is a low signal-to-noise ratio. Your SIEM ingests lots of events, many of which are not at all helpful, and with most vendors charging by volume, it gets expensive. At the same time, the false negative problem is all too common: forensic investigations reveal, far too often, that the expensive SIEM recorded no events that even remotely relate to the incident. I hope you are never in this scenario. The short-term impact is never good.

Taking SIEM as a capability, if one is to advise on how to improve things, it is rarely about the technology. When one asks Analysts (and, based on job postings, also hiring managers) about SIEM, it’s clear the first thing that comes to mind is Splunk, ELK, Sentinel, and so on. I would estimate the technology-only focus with SIEM to be the norm rather than the exception, and it comes hand-in-hand with a failure to detect privilege elevations and lateral movements, for example.

There are some advisories we can give out that are independent of your architecture, but many questions about SIEM configuration can only be answered by you, using your knowledge of the IT landscape in your organisation. The advisories in the referenced spreadsheet cover the “noise” part of the signal-to-noise ratio: events that, from a security perspective, we are at least 90% assured are noise.

Additional Context on the Spreadsheet

Some context around the spreadsheet: where there is a CIS Benchmark metric for a specific Audit Subcategory, the spreadsheet follows the CIS recommended setting exactly. But there are some subcategories (e.g. DS Access –> Directory Service Access) that CIS does not cover. In these cases, an assessment is made based on our real-world observations of logging volumes, weighed against the security value (not the IT diagnostic or other value) of the Audit Subcategory. In the case of Directory Service Access, it can be turned off from a security perspective.
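
To illustrate, this is standard auditpol usage, run from an elevated prompt – and note that if the setting is enforced via Group Policy, it must be changed there instead, or it will simply be re-applied:

    :: Query the current setting for the subcategory
    auditpol /get /subcategory:"Directory Service Access"

    :: Turn off both Success and Failure auditing
    auditpol /set /subcategory:"Directory Service Access" /success:disable /failure:disable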

There is limited information available regarding actual experiences with specific event ID volumes. In 2018, I had the opportunity to track Windows events in a Splunk architecture for a government department. During this time, I recorded the occurrences of events over a 24-hour period on a network of approximately 150 Windows servers of various versions, some of which were quite exotic. This information has been valuable in supporting decisions related to whether or not to disable auditing.

SIEM Forwarder Filtering

There is another option offered by some SIEM vendors, and that is to filter events by Event ID. Overall, the more resource-friendly approach is to prevent the events being generated at source, but in many cases this may not be feasible. Splunk, for example, allows you to filter at the forwarder, via the inputs.conf file on the Splunk forwarder – usually located in the $SPLUNK_HOME/etc/system/local/ directory. Incidentally, it looks like Splunk agrees with us on the 4662 (Directory Service Access) event mentioned above. Yay!
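
As a sketch of that Splunk route – stanza and attribute names as per Splunk’s documented Windows event log inputs, so verify against your Splunk version – the forwarder-side filter looks something like this:

    # inputs.conf on the Universal Forwarder, typically under
    # $SPLUNK_HOME/etc/system/local/
    [WinEventLog://Security]
    disabled = 0
    # discard these Event IDs at source, before they are forwarded
    blacklist = 4662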

Credits and Disclaimers

Windows Events are sometimes tricky to understand, both with respect to what the developers intended with those events, and the conditions under which they are generated. Sometimes with Windows Events we are completely in unknown territory, even where Microsoft documentation covers them. Here’s one example from the Microsoft documentation to fill us with confidence: “This auditing subcategory should not have any events in it, but for some reason Success auditing will enable the generation of event 4985”.

Ultimately only you can decide what’s best for the health of your SOC/SIEM. Only you know your network and your applications. The document supplied here is intended only as a guide, to aid decision making. It is not intended to make decisions for you.

The cybersecurity landscape often focuses on the more sensational aspects, such as high-profile hacks or fake influencers, which can overshadow the essential work done by countless professionals in the background. These unsung heroes are dedicated to ensuring the stability and security of our digital infrastructure, and their contributions should not be underestimated. Among them is Randy Franklin Smith (founder of Ultimate Windows Security), who has put together an “encyclopedia” of Windows Event IDs. The experiences shared there were used in part to form a view on whether to reject or accept certain Windows Events.

Fintechs and Security – Prologue

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Fintechs and Security – A Match Made In Heaven?

Well, no. Far from it actually. But again, as I’ve been repeating for 20 years now, it’s not on the fintechs. It’s on us in infosec, and infosec has to take responsibility for these problems in order to change. If I were the CTO of a fintech, I would be confused by the array of opinions and advice, which vary radically from one expert to another.

But there shouldn’t be such confusion with fintech challenges. Confusion only reigns where there’s FUD. FUD manifests itself in the form of over-lengthy coverage and excessive focus on “controls” (the archetypal shopping list of controls to be applied regardless of risk – expensive), GRC, and “hacking”/“[red, blue, purple, yellow, magenta, teal, slate grey] team”/“appsec”.

Really what’s needed is something like this (in order):

  • Threat modelling lite – a one-off, reviewed periodically.
  • Architecture lite – a one-off, reviewed periodically.
  • Engineering lite – a one-off, reviewed periodically.
  • Secops lite – the result of the previous three: an on-going protective monitoring capability, the first level of monitoring and response for which can be outsourced to a Managed Service Provider.

I will cover these areas in more detail in later episodes, but what’s needed is a security design that only answers “What is the problem? How are we going to solve it?” – a SIEM capability design, for example – and not more than 20 pages. No theory. Not even any justifications. And one that can be consumed by non-security folk (i.e. it’s written in the language of business and IT).

Fintechs and SMBs – How Is The Infosec Challenge Unique?

With a lower budget, there is less room for error. Poor security advice can co-exist with business almost seamlessly in the case of larger organisations. Not so with fintechs and Small and Medium Businesses (SMBs). There have been cases of SMBs going under as a result of a security incident, whereas larger businesses often don’t even see a hit on their share price.

Look For A Generalist – They Do Exist!

The term “generalist” is seen as a four-letter word in some infosec circles. But it is possible for one or two generalists to cover the needs of a fintech at green-field, and then, going forward into operations, it’s not unrealistic to work with one in-house security engineer of the right background, the key ingredients of which are:

  • Spent at least 5 years in IT, in a complex production environment, and outgrew the role.
  • Has flexibility – the old example still applies today: a Unix fan who has tinkered with Windows. In other words, a technology lover. One who has shown interest in networking even though they’re not a network engineer by trade, or who sought to improve efficiency by automating a task with shell scripting.
  • Has an attack mindset – without this, how can they evaluate risk or confidently justify a safeguard?

I have seen some crazy specialisations in larger organisations, e.g. “Websense Security Engineer”! If fintechs approached security staffing in the same way as larger organisations, they would have more security staff than developers, which is of course ridiculous.

So What’s Next?

In “On Hiring For DevSecOps” I covered some common pitfalls in hiring and explained the role of a security engineer and architect.

There are “fallback” or “retreat” positions in larger organisations and fintechs alike, wherein executive decisions are made to reduce the effort to a less-than-advisable position:

  • Larger organisations: a compliance-driven strategy as opposed to a risk-based strategy. Because of a lack of trustworthy security input, execs end up saying “OK, I give up, what’s the bottom line of what’s absolutely needed?”
  • Fintechs: application security. The connection is made between application development and application security – which is quite valid, but the challenge is wider. Again, the only blame I would attribute here is with infosec. Having said that, I noticed this year that “threat modelling” has started to creep into job descriptions for Security Engineers.

So for later episodes – of course the areas to cover in security are wider than appsec, but again there is no great complication or drama or arm-waving:

  • Part One – Hiring and Interviews – I expand on “On Hiring For DevSecOps”. I noticed some disturbing trends in 2019 and I cover these in some more detail.
  • Part Two – Security Architecture and Engineering I – Threat and Vulnerability Management (TVM)
  • Part Three – Security Architecture and Engineering II – Logging (not necessarily SIEM). No Threat Hunting, Telemetry, or Threat “Intelligence”. No. Just logging. This is as sexy as it needs to be. Any more sexy than this should be illegal.
  • Part Four – Security Architecture and Engineering III – Identity Management (IDAM) and Cryptography and Key Management (CKM).
  • Part Five – Security Architecture and Engineering IV – Trust (network trust boundary controls – e.g. firewalls and forward proxies), and Business Resilience Management (BRM).

I will try and get the first episode on hiring and interviewing out before 2020 hits us, but I can’t make any promises!

“Cybersecurity Is About To Explode” – But in What Way?

I recently had the fortune to stumble across an interesting article: http://thetechnews.com/2019/08/17/cybersecurity-is-about-to-explode-heres-why/

The article was probably aimed at generating revenue for the likes of ISC2 (CISSP exam revenue) and so on, but I am open-minded to the possibility that it was genuinely aimed at helping the sector. It is, however, hopelessly misleading. I would hate to think that this article was the thing that led a budding security wannabe to finally sign up. Certainly a more realistic outlook is needed.

Some comments on some of the points in said article:

  • “exciting headlines about data breaches” – exciting for whom? The victims?
  • “organizations have more resources to fight back” – no they don’t. They spend lots but still cannot fight back.
  • “It’s become big enough that thought leaders, lawyers, and even academics are weighing in” – who are the thought leaders who are weighing in? If they are leading thought, I would like to know who they are.
  • “today’s cybercriminals are much more sophisticated than they were twenty years ago” – do they need to be? WannaCry exploited a basic firewall config problem. Actually, firewall configs were better 20 years ago than they are today.
  • “employing the services of ethical hackers” – I’m glad the hackers are ethical. They wouldn’t have the job if they had a criminal record. So what is the ‘ethical’ qualifier for? Does it mean the hackers are “nice” or… ?
  • “Include the use of new security technology like the blockchain and using psychology to trick, mislead, and confuse hackers before they ever reach sensitive data.” – psychology isn’t a defence method, it’s an attack method. Blockchain – there are no viable blue team use cases.
  • “313,735 job openings in the cybersecurity field” – all of them are filled if this number is real (unlikely).
  • “since the need for security experts isn’t likely to drop anytime soon” – see Brexit. It’s dropping now. Today. Elsewhere it’s flat-lining.
  • “You can take your pick of which industry you want to work in because just about every company needs to be concerned about the safety and security of their networks.” – “needing” to be concerned isn’t the same as being concerned. No. All sectors are still in the basic mode of just getting compliance.
  • “Industries like healthcare, government, and fintech offer extensive opportunities for those who want to work in cybersecurity” – no, they do not.
  • “90% of payment companies plan to switch over to blockchain technology by 2020” – can you tell your audience the source of this information?

A Desperate Call For More Effective Information Security Accreditation

CISSP has to be the most covered topic in the world of infosec. Why is that? The discussions are mostly, of course, aimed at self-promotion (both by folk condemning the accreditation, and the same again in the defensive responses) and justifying getting the accreditation. How many petabytes are there covering this subject? If you think about it, the sheer volume of the commentary on CISSP is proportional to the level of insecurity felt by infosec peeps. It’s a symptom of a sector that is really very ill indeed: a symptom of how ineffective CISSP is as an accreditation, and of the frustration felt by people who know we can do better.

We need _something_. We do need some kind of accreditation. Right now CISSP is the only recognised accreditation. But if you design an accreditation that attempts to cover the whole of infosec in one exam, what did you think the result would be? And there is no room for any argument or discussion on this. It’s time to cut the defensiveness and come clean and honest.

The first stage of solving a problem is acknowledging its existence, and we’re not there yet. There are still thousands in this field who cling to CISSP like a lifebuoy out on the open ocean. There is a direct correlation between the buoy-clingers and the claim that “security is not about IT”. Stop that!! You’re not fooling anybody. Nobody believes it! All it does is make the whole sector look even more like a circus to our customers than it already does. The lack of courage to admit the truth here is having a negative impact on society in general.

It seems to me that the “mandatory” label for CISSP in job qualifications is now rarely seen. But CISSP is still alive and is better than nothing. Just stop pretending that it’s anything other than an inch thick and a mile wide.

Really we need an entry-level accreditation that tests a baseline level of technical skills and the possession of an attack mindset. We can’t attack or defend, or make calls on risk, without an attack mindset. GRC is a thing in security, and it’s a relevant thing – but it doesn’t take up much intellectual space, so it should be a small part of the requirements. Level 2 SOC Analysts need to understand risk, the importance of application availability, and the value of electronic information to the business, but this doesn’t require them to go and get a dedicated accreditation. Information Security Manage-ment is really an area for Manage-ers – the clue is in the name.

What are the two biggest challenges in terms of intellectual capital investment? They’re still operating systems (and ill-advised PaaS and SaaS initiatives haven’t changed this) and applications. So let’s focus on these two as the biggest chunks of stuff that an infosec team has to cover, and test entry-level skills in these areas.

Infosec in APAC – A Very Summarised View

I spent a total of 16 years working in infosec in APAC – across the region as a whole, except for India and mainland China. I was based initially in a pen test/research lab in Thailand with regional customers, then later spent some time with a big-4 firm in Thailand, before moving base to Jakarta for what will probably be my final stint in the region. As well as the aforementioned places, I spent lots of time in Singapore, Taiwan, and HK; less so in Malaysia, and I never worked in Vietnam, Cambodia, Laos, Myanmar, or the Philippines.

I was in APAC for most of the period between 1999 and 2013. My time with the consultancy based in Bangkok (although there was only one client account in Thailand) made up the formative, simulated-attack experience of my career – not a bad place to start. There were some brief spells away in the UK and the Czech Republic (the best blue team experience one can hope to find). Overall I was lucky with the places I worked in, and especially the people I worked with – some of whom quit infosec not long after the Great Early Noughties Infosec Brain Drain.

Appetite for risk is high in APAC – just look at the stats for insurance sales in the region. What results in infosec, even in banking and finance, is exactly the same as in the west: base compliance only. The difference is something like this: western CEOs showed interest and worried about cyber at some point in time, but when they went looking for answers they didn’t find any, other than buzzwords from CISSPs – result: base compliance, aka let’s just get through the audit. In Asia the CEOs didn’t go looking for answers – it’s just base compliance, do not pass go. But before you pass judgment on this statement – read on.

Where APAC countries were better was the lack of any pretence around GRC. You will never hear anything along the lines of “security is not about IT” – i.e. there is no community of self-serving non-technical GRC folk spouting acronyms. Western countries blow billions down the dunny on this nonsense.

So both regions have poor security, and both face a significant threat. But if you measure security performance in terms of how much is spent versus the results, there’s a clear winner, and that is APAC: one region simply spends far more than the other for the same poor security.

Clouds and Vulnerability Management

In the world of Clouds and Vulnerability Management, based on observations, it seems like a critical issue has slipped under the radar: if you’re running with PaaS and SaaS VMs, you cannot deliver anything close to a respectable level of vulnerability management with these platforms. This is because to do effective vulnerability management, the first part of that process – the vulnerability assessment – needs to be performed with administrative access (over SSH/SMB), and with PaaS and SaaS, you do not, as a customer, have such access (this is part of your agreement with the cloud provider). The rest of this article explains this issue in more detail.

The main reason for the clouding (sorry) of this issue is what is still, after 20+ years, a fairly widespread lack of awareness of the ineffectiveness of unauthenticated vulnerability scanning. More and more security managers are becoming aware that credentialed scans are the only way to go, but with a lack of objective survey data available, I can only draw on my own experiences. See – I’m one of those disgraceful contracting/consultant types, doing security for almost 20 years, intimate with a good number of large organisations, and with each year that passes I can say that more organisations are waking up to the limitations of unauthenticated scanning. But there are still plenty who don’t see those limitations clearly.

The original Nessus from the late 90s, now with Tenable, is a great product in terms of doing what it was intended to do. But false negatives were never a concern in the design of Nessus. OpenVAS is still open source and available, and it too is a great tool from the point of view of doing what it was intended to do. But if these tools are your sole source of vulnerability data, you are effectively running blind.

By the way, Tenable do offer a product that covers credentialed scans for enterprises, but I have not had any hands-on experience with it. I do have hands-on experience with the other market leaders’ products. By and large they all fall some way short, but that’s a subject for another day.

Unauthenticated scanners all do the same thing:

  • port scan to find open ports
  • grab service banners – this is the equivalent of nmap -sV, and in fact, as most of these tools use nmap libraries, it is _exactly_ that
  • let’s say our tool finds Apache HTTP Server 2.2.x: it looks in its database of publicly disclosed vulnerabilities for that version of Apache, and spews out everything it finds. The tools generally do little in the way of actually probing with HTTP methods, for example, and they certainly were not designed to try, say, a buffer overflow exploit attempt. They report lots of ‘noise’ in the way of false positives, but false negatives are the real concern. (A toy sketch of this whole modus operandi follows below.)
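
Here is that toy sketch in Python – connect, grab a banner, match a version string against a database. The “vulnerability database” here is deliberately fake; a real scanner just has a much bigger version of the same dict:

    # Toy illustration of the unauthenticated-scanner modus operandi.
    # The DB entry is made up; the target address is a documentation one.
    import re
    import socket

    FAKE_VULN_DB = {
        ("Apache", "2.2"): ["CVE-XXXX-YYYY (illustrative entry)"],
    }

    def grab_banner(host, port, timeout=3.0):
        with socket.create_connection((host, port), timeout=timeout) as s:
            # A bare HTTP request is enough to coax out a Server header
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            return s.recv(1024).decode("ascii", errors="replace")

    def report(host, port):
        banner = grab_banner(host, port)
        m = re.search(r"Server:\s*([\w-]+)/(\d+\.\d+)", banner)
        if not m:
            return
        product, version = m.groups()
        # The crux: no probing, no exploit attempt - just a string match
        # against a database keyed on the advertised version
        for finding in FAKE_VULN_DB.get((product, version), []):
            print(f"{host}:{port} {product} {version} -> {finding}")

    try:
        report("192.0.2.10", 80)  # RFC 5737 documentation address
    except OSError as e:
        print(f"connection failed: {e}")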

So really the tools are doing a port scan, and then telling you you’re running old warez. Conficker is still very widespread and is the ultimate player in the ‘Pee’ arena (the ‘Pee’ in APT). An unauthenticated scanner doesn’t have enough visibility ‘under the hood’ to tell you if you are going to be the next Conficker victim, or the next ransomware victim. Of the Linux vulnerabilities reported in the past few years – e.g. Heartbleed, Ghost, DirtyCOW – very few can be detected with an unauthenticated scanner, and none of these three examples can be.

Credentialed scanning really is the only way to go. Credentialed scanners are configured with root/administrative access to targets and are therefore in a position to ‘see’ everything.
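
A minimal sketch of the difference, assuming SSH access with a scanning account (host, user, and paths here are illustrative): instead of trusting a banner, ask the package manager what is actually installed.

    # Credentialed check: log in and query the package version directly.
    import paramiko

    def installed_version(host, user, key_file, package):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_file)
        # Debian/Ubuntu example; use 'rpm -q' on Red Hat derivatives
        _, stdout, _ = client.exec_command(
            f"dpkg-query -W -f='${{Version}}' {package}")
        version = stdout.read().decode().strip()
        client.close()
        return version

    # e.g. is the installed OpenSSL older than the advisory's fixed build?
    v = installed_version("10.0.0.5", "scanner", "/opt/scans/id_rsa", "openssl")
    print(f"openssl {v} - compare against the fixed version in the advisory")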

The Connection With PaaS and SaaS

So how does this all relate to Cloud? Well, there are two of the three cloud types where a lack of access to the operating system command shell becomes a problem – and from this description it’s fairly clear these are PaaS and SaaS.

Two common delusions abound in this area:

  • [Cloud maker] handles platform configuration and therefore vulnerability for me, so that’s OK, no need to worry:
    • Cloud makers like AWS and Azure will deal with patches, but concerns in security are much wider, and operating systems are big and complex. No patches exist for 0days, and in space, nobody can hear you scream.
    • Many vulnerabilities arise from OS configuration aspects that cannot be removed with a patch. Conficker was mentioned above: some Conficker versions (yes, it’s managed very professionally) use ‘at’ job scheduling to remain present even after MS08-067 is patched. If, for example, you use Azure, Microsoft manage your PaaS and SaaS, but they don’t know whether you want to use ‘at’ or not. It’s safer for them to assume that you do, so they leave it enabled (when you sign up for PaaS or SaaS, you are removed from the decision making here). The same applies to many other local services and file system permissions that are very popular with the dark side.
  • ‘Unauthenticated scanning gets me some of the way, it’s good enough’ – how much of the way does it get you? Less than half way? It’s more like 5% really. Remember it’s little more than a port scan, and you shouldn’t need a scanner to tell you you’re running old software. Certainly for critical cloud VMs, this is a problem.

With PaaS and SaaS, you are handing over the management of large and complex operating systems to cloud providers, who are perfectly justified, and also in many cases perfectly wise, in leaving open large security holes in your platforms, and as part of your agreement with them, there’s not a thing you can do about it (other than switch to IaaS or on-premise).

Addressing The Information Security Skills Gap

We are told there is a skills gap in information security. I agree – there is, but recent suggestions to address the gap take us to dangerous places that are great for recruitment agencies, but not so great for the business world.

I want to steer away from use of the word ‘skills’ in this article because it’s too micro, and the word has been violated by modern hiring practices. We are not looking for ‘Websense’ skills, ‘DLP’ skills, or, as I saw recently, ‘HSM’ skills. These requirements are silly unless the plan is for organisations to spend 10 to 50 times more than they need on human resource and have a security team of 300. It’s healthier for organisations to look at ‘habits’ or ‘backgrounds’, and along those lines, in information security we’re looking for the following:

  • At least 5 years in an IT discipline: sys admin, DBA, devops bod, programmer for example
  • Evidence of having excelled in those positions and sort of grown out of them
  • Flexibility: for example, the crusty Radagast BSD-derivative disciple who has no fundamentalist views of other operating systems (think ‘Windows’) and not only can happily work with something like Active Directory, but they actually love working with Active Directory
  • A good-to-have-but-not-critical is past evidence of breaking or making things, but this should be seen as a nice bonus. In its own right, it is insufficient – recruiting from hacker confz is far from guaranteed to work – too much to cover here

So really it should be seen that a career in infosec is a sort of ‘graduation’ on from other IT vocations. There should be an entrance exam based on core technologies and penetration testing. The career progression path goes something like: Analyst (5 years) -> Consultant -> Architect/Manager. Managers and architects cannot be effective if they do not have a solid IT background. An architect who doesn’t know her way around a Cisco router, cannot implement a new SIEM correlation rule, or cannot run and interpret the output from a packet sniffer, is not an architect.

Analysts and Consultants should be skilled with the core building blocks to the level of being confidently handed administrative access to production systems. As it is, security pros find it hard to even get read-only access to firewall management suites – and fast access to information on firewall rules can be critical.

Some may believe that individuals fitting the above profile are hard to find, and they’d be right. However, with the aforementioned model, the workforce will change from lots of people with micro-skills or product-based pseudo skills, to fewer people who are just fast learners and whose core areas complement each other. If you consider that a team of 300 could be reduced to 6 – the game has changed beyond recognition.

Quoting a recent article: “The most in demand cyber security certifications were Security+, Ethical Hacking, Network+, CISSP, and A+. The most in demand skills were Ethical Hacking, Computer Forensics, CISSP, Malware Analysis, and Advanced Penetration Testing”. There are more problems with this than can be described in a reasonable time frame, but none of these should ever be called ‘skills’. Of these, Penetration Testing (leave out the ‘ethics’ qualifier, because it adds a distasteful layer of judgment on top of the law) is the only one that should be called a specialisation in its own right.

And yes, Governance, Risk and Compliance (GRC) is an area that needs addressing, but this must be the role of the Information Security Manager. There is a connection between Information Security Manage-ment and Information Security Manage-er. Some organisations have separate GRC functions, the UK public sector usually has dedicated “assurance” functions, and, as I’ve seen with some law firms, they are separated from the rest of security and IT. Decision making on risk acceptance or mitigation, and areas such as Information Classification, MUST have an IT input, and this is the role of the Information Security Manager. There must be one holistic security team consisting of a few individuals and one Information Security Manager.

In security we should not be leaving the impression that one can leave higher education, take a course in forensics, get accreditation, and then go and get a job in forensics. This is not bridging the security skills gap – it’s adding security costs with scant return. If you know something about forensics (usually this will be seen as ‘EnCase’ by the uninitiated) but don’t even have the IT background, let alone the security background, you will not know where to look in an investigation, or have a picture of risk. You will not have an inkling of how systems are compromised or the macro-techniques used by malware authors. So you may know how to use EnCase and take a forensically sound disk image, for example, but that will be the limit of your contribution. Doesn’t sound like a particularly rewarding way to spend 200 business days per year? You’d be right.

Sticking with the forensics theme: an Analyst with the right mindset can contribute effectively in an incident investigation from day one. There are some brief aspects of incident response for them to consider, but it is not advisable to view forensics/incident response as a deep area. We can call it a specialisation, just as an involuntary action such as breathing is a specialisation, but if we do, we are saying that it takes more than one person to change a light bulb.

Incident response from the organisational / Incident Response Plan (IRP) formation point of view is a one-day training course or a few hours of reading. The tech aspects are 99% not distinct from the core areas of IT and network security. This is not a specialisation.

Other areas such as DLP, Threat Intelligence, SIEM, Cryptography and Key Management – these can be easily adopted by the right security minds. And with regard to security products – it should be seen that security professionals pick up new tools on-the-fly and don’t need 2-week training courses that cost $4000. Some of the tools in the VM and proxy space are GUIs for older open source efforts such as Nessus, OpenVAS, and Squid, with which they will be well-versed, and if they’re not, it will take an hour to pick up the essentials.

There’s been a lot of talk of Operating Systems (OS) thus far. Operating Systems are not ‘a thing from 1998’. Take an old idea that has been labelled ‘modern’ as an example: OK, let’s go with ‘Cloud’. Clouds have operating systems. VMs deployed to clouds have operating systems. When we deploy a critical service to a cloud, we cannot ignore the OS even if it’s a PaaS deployment. So in security we need people who can view an OS in the same way that a hacker views an OS – we need to think about kill chains and local privilege elevations. The Threat and Vulnerability Management (TVM) challenge does not disappear just because you have PaaS’d everything. Moreover, if you have PaaS’d everything, you have immediately lost the TVM battle. As Beaker famously said in his cloud presentation – “Platforms Bitches”. Popular OS like Windows and the *nix family (including Linux), and popular applications such as Oracle Database, are going to be around for some time yet, and it’s at the OS where the front-lines are drawn.

Also, here is a common misconception, and something that does not work: a secops/network engineer going straight into security with no evidence of interest in other areas. ‘Secops’ is not good preparation for a security career, mainly because secops is a sort of purgatory. Just as “there is no Dana, only Zuul”, so “there is no secops, only ops”. There is only a security element to these roles because the role covers operational processes with security products. That is anti-security.

All Analyst roles should have an element of penetration testing and appsec, and when I say penetration testing, I do mean unrestricted testing, as in an actual simulation. That means no restrictions on exploit usage or source address – because attackers do not have such restrictions. Why spend on this type of testing if it’s not an actual simulation?

Usage of Cisco Discovery Protocol (CDP) offers a good example of how a lack of penetration testing experience can impede a security team. If security is being done even marginally professionally in an organisation, there will exist a security standard for Cisco network devices that mandates the disabling of CDP. But once asked to disable CDP, network ops teams will want justification. Any experienced penetration tester knows the value of intelligence in expediting the attack effort, and CDP is a relative gold mine of intelligence that is blasted multicast around networks. It can, and often does, reveal the identity and IP address of a core switch. But without the testing experience, or knowledge of how attacks actually go down, the point will be lost, and the confidence will be missing from the advisory.
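
The fix itself is trivial – standard IOS commands (verify for your platform and software version):

    ! Disable CDP globally
    no cdp run

    ! Or selectively, on interfaces facing untrusted segments
    interface GigabitEthernet0/1
     no cdp enable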

The points I’ve just covered are not actually ground-breaking at all. Analysts with a good core background of IT and network security can easily move into any new area that marketeers can dream up.

There is an intuition that Information Security has a connection with Information Technology, if only for the common word in them both (that was ‘Information’ by the way, in case you didn’t get it). However, as Upton Sinclair said “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”.

And please don’t create specialisations for Big Data or Internet of Things…woops, too late.

So, consider a small team of enthusiastic, flexible, fast learners, rather than a large team of people who can be trained, at high cost, to understand the UI of an application that was designed – in the international language – to be intuitive and easy to learn.

Consider using one person to change a light bulb, and don’t be the butt of future jokes.

Scangate Re-visited: Vulnerability Scanners Uncovered

I have covered VA tools before but I feel that one year later, the same misconceptions prevail. The notion that VA tools really can be used to give a decent picture of vulnerability is still heavily embedded, and that notion in itself presents a serious vulnerability for businesses.

A more concise run-down of the functionality of VA warez may be worth a try. At least let’s give it one last shot. On second thoughts, no, don’t shoot anything.

Actually forget “positive” or “negative” views on VAs before reading this. I am just going to present the facts based on what I know myself and of course I’m open to logical, objective discussion. I may have missed something.

Why the focus on VA? Well, the tools are still so commonplace and heavily used and I don’t believe that’s in our best interests.

What I discovered many years ago (it was actually 2002 at first) was that discussions around these tools can evoke some quite emotional responses. “Emotional”, you ask? Yes. When you think about it, whole empires have been built using these tools. The tools are widespread in security and used as the basis of corporate VM programs; VM market revenues run at around 1 billion USD annually. Songs and poems have been written about VAs – OK, I can’t back that up, but careers have been built, and whole enterprise-level security software suites built, using a nasty open source VA engine.

I presented on the subject of automation in VA all those years ago, and put forward a notion that running VA tools doesn’t carry much more value as compared to something like this: nmap -v -sS -sV <targets> . Any Security Analyst worth their weight in spam would see open ports and service banners, and quickly deduce vulnerability from this limited perspective. “Limited”, maybe, but is a typical VA tool in a better position to interrogate a target autotragically?

One pre-qualifier I need to throw out is that the type of scanners I will discuss here are Nessus-like scanners, the modus operandi of which is to use unauthenticated means to scan a target. Nessus itself isn’t the main focus but it’s the tool that’s most well known and widely used. The others do not present any major advantages over Nessus. In fact Nessus is really as good as it gets. There’s a highly limited potential with these tools and Nessus reaches that limit.

Over the course of my infosec career I have had the privilege to be in a position where I have been coerced into using VAs extensively, and spent many long hours investigating false positives. In many cases I set up a dummy Linux target and used a packet sniffer to deduce what the tool was doing. As a summary, the findings were approximately:

  • Out of the 1000s of tests, or “patterns”, configured in the tools, only a few have the potential to result in accurate/useful findings. Some examples of these are SNMP community string tests, and tests for plain text services (e.g. telnet, FTP).
  • The vast majority of the other tests merely grab a service “banner”. For example, the tool port scans, finds an open port 80 TCP, then runs a test to grab a service banner (e.g. Apache 2.2.22, mickey mouse plug-in, bla bla). I was sort of expecting the tool to do some more probing having found a specific service and version, but in most cases it does not.
  • The tool, having found what it thinks is a certain application layer service and version, then correlates its finding with its database of publicly disclosed vulnerabilities for the detected service.

Even for some of the plain text services, some of the tests which have the potential to reveal useful findings have been botched by the developers. For example, tests for anonymous FTP only work with a very specific flavour of FTP. Other FTP daemons return different messages for successful anonymous logins, and the tool does not accommodate this.

Also, what happens if a service is moved from its default port? I had some spectacular failures running Nessus against an FTP service on port 1980 TCP (usually it listens on port 21). Different timing options were tested. Nessus uses an nmap engine for port scanning, but nmap by itself is usually able to find non-default port services using default settings.
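
For comparison, stock nmap with version detection across every TCP port will usually identify a service wherever it hides:

    # -sV interrogates whatever is listening; -p- covers all 65535 ports
    nmap -v -sS -sV -p- <target>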

So in summary, what the VA tools do is mostly just report that you are running ridiculous unencrypted blast-from-the-past services or old, down-level services – maybe. Really I would hope security teams wouldn’t need to spend 25K USD on an enterprise solution to tell them this.

False positives are one thing, but false negatives are quite another. Popular magazines always report something like a 50% success rate in finding vulnerabilities in staged tests. Why is it always 50%? Remember also that the product under testing is usually one from a vendor who pays for a full-spread ad in that magazine.

Putting numbers to false negatives makes little sense with huge, complex software packages of millions of lines of source code. However, it occurred to me not so long ago whilst doing some white box testing on a client’s critical infrastructure: how many of the vulnerabilities under testing could possibly be discovered by use of a VA tool? In the case of Oracle Database the answer was less than 5%. And when we’re talking Oracle, we’re usually talking critical, as in crown jewels critical.

If nothing else, the main aspect I would hope the reader would take out of this discussion is about expectation. The expectation that is set by marketing people with VA tools is that the tools really can be used to accurately detect a wide range of vulnerability, and you can bet your business on the tools by using them to test critical infrastructure. Ladies and gentlemen: please don’t be deceived by this!

Can you safely replace manual testing with use of these tools? Yes, but only if the target has zero value to the business.


Security in Virtual Machine Environments. And the planet.

This post is based on a recent article on the CIO.com site.

I have to say, when I read the title of the article, the cynic in me once again prevailed. And indeed there will be some cynicism and sarcasm in this article, so if that offends the reader, I would like to suggest other sources of information: those which do not accurately reflect the state of the information security industry. Unfortunately the truth is often accompanied by at least cynicism. Indeed, if I meet an IT professional who isn’t cynical and sarcastic, I find it hard to trust them.

Near the end of the article there will be a quiz with a scammed prize offering, just to take the edge off the punishment of the endless “negativity” and abject non-MBA’edness.

“While organizations have been hot to virtualize their machine operations, that zeal hasn’t been transferred to their adoption of good security practices”. Well, you see, they’re two different things. Using VMs reduces power and physical space requirements. Note the word “physical” here – the benefits being physical, they are easier to understand.

Physical implies something which takes physical form – a matter energy field. Decision makers are familiar with such energy fields; there are other examples in their lives, such as tables, chairs, other people, walls, cars. Then there is information in electronic form – a similar thing (also an energy field), but the hunter/gatherer in some of us doesn’t see it that way, and still, as of 2013, the concept eludes many IT decision makers who have fought their way up through the ranks as a result of excellent performance in their IT careers (no – it’s not just because they have an MBA, or know the right people).

There is a concept at board level of insuring a building (another matter energy field) against damages from natural causes. But even when 80% of information assets are in electronic form, there is still a disconnect from the information. Come on chaps, we’ve been doing this for 20 years now!

Josh Corman recently tweeted “We depend on software just as much as steel and concrete, its just that software is infinitely more attack-able!”. Mr Corman felt the need to make this statement. OK, like most other statements from wise men in security, it was intended to boost his Klout score – but one does not achieve that by tweeting stuff that everybody already knows. I would trust someone like Mr Corman to know where the gaps are in the mental portfolios of IT decision makers.

Ok, so moving on…”Nearly half (42 percent) of the 346 administrators participating in the security vendor BeyondTrust‘s survey said they don’t use any security tools regularly as part of operating their virtual systems…”

What tools? You mean anti-virus and firewalls, or the latest heuristic HIDS box of shite? Call me business-friendly, but I don’t want to see endless tools on end points, regardless of their function. So if they’re not using tools, is it not at this point good journalism to comment on what tools, exactly? Personally I want to see a local firewall and the obligatory, and increasingly less beneficial, anti-virus (and I do not care as to where, who, whenceforth, or which one… preferably the one where the word “heuristic” is not used in the marketing drivel on the box). Now, if you’re talking system hardening and utilizing built-in logging capability – great, that’s a different story, and worthy of a cuddly toy as a prize.

“Insecure practices when creating new virtual images is a systemic problem” – it is, but how many security problems can you really eradicate at build-time and be sure that the change won’t break an application or introduce some other problem? When practical IT-oriented security folk actually try to do this with skilled and experienced ops and devs, they realise that less than 50% of their policies can be implemented safely in a corporate build image. Other security changes need to be assessed on a per-application basis.

Forget VMs and clouds for a moment – 90%+ of firms are not rolling out effectively hardened build images for any platform. The information security world is still some way off with practices in the other VM field (Vulnerability Management).

“If an administrator clones a machine or rolls back a snapshot,”… “the security risks that those machines represent are bubbled up to the administrator, and they can make decisions as to whether they should be powered on, off or left in state.”

Ok, so “the security risks that those machines represent are bubbled up to the administrator”!!?? [Double-take] Really? Ok, this whole security thing really can be automated then? In that case, every platform should be installed as a VM managed under VMware vCenter with the BeyondTrust plugin. A tab that can show us our risks? There has to be a distinction between vulnerability and risk here, because they are two quite different things. No but seriously, I would want to know how those vulnerabilities are detected because to date the information security industry still doesn’t have an accurate way to do this for some platforms.

Another quote: “It’s pretty clear that virtualization has ripped up operational practices and that security lags woefully behind the operational practice of managing the virtual infrastructure”. I would edit that down to just the two words “security” and “lags”. What with virtualized stuff being a subset of the full spectrum of play things, and all.

“Making matters worse is that traditional security tools don’t work very well in virtual environments”. In this case I would keep just five of the words. A Kenwood Food Mixer goes to the person who can guess which ones those are. See? Who said security isn’t fun?

“System operators believe that somehow virtualization provides their environments with security not found in the world of physical machines”. Now we’re going Twilight Zone. We’ve been discussing the inter-cluster sized gap between the physical world and electronic information in this article, and now we have this? Segmentation fault, core dumped.

Anyway – virtualization does increase security in some cases. It depends how the VM has been configured and what type of networking config is used, but if we’re talking virtualised servers that advertise services to port scanners, and/or share SMB with their hosts, then clearly the virtualised aspect is suddenly very real. VM guests used in a NAT’ing setup are a decent way to hide information on a laptop/mobile device or anything that hooks into an untrusted network (read: “corporate private network”).

The vendor who was being interviewed finished up with “Every product sounds the same,” … “They all make you secure. And none of them deliver.” If I were a vendor, I probably wouldn’t say that.

Sorry, I just find discussions of security with “radical new infrastructure” to be something of a waste of bandwidth. We have some very fundamental, ground level problems in information security that are actually not so hard to understand or even solve, at least until it comes to self-reflection and the thought of looking for a new line of work.

All of these “VM” and “cloud” and “BYOD” discussions would suddenly disappear with the introduction of integrity in our little world because with that, the bigger picture of skills, accreditation, and therefore trust would be solved (note the lack of a CISSP/CEH dig there).

I covered the problems and solutions in detail in Security De-engineering, but you know what? The solution (chapter 11) is no big secret. It comes from the gift of intuition with which many humans are endowed. Anyway – someone had to say it; now it’s in black and white.

Hardening is Hard If You’re Doing it Right

Yes, ladies and gentlemen, hardening is hard. If it’s not hard, then there are two possibilities. One is that the maturity of information security in the organization is at such a level that security happens both effectively and transparently – it’s fully integrated into the fabric of BAU processes, and many of said processes are fully automated with accurate results. The second (far more likely, given the reality of security in 2013) is that the hardening is not well implemented.

For the purpose of this diatribe, let us first define “hardening” so that we can all be reading from the same hymn sheet. When I’m talking about hardening here, the theme is one of first assessing vulnerability, then addressing the business risk presented by the vulnerability. This can apply to applications, or operating systems, or any aspect of risk assessment on corporate infrastructure.

In assessing vulnerability, if you’re following a check list, hardening is not hard – in fact a parrot can repeat pearls of wisdom from a check list. But the result of merely following a check list will be either wide open critical hosts or over-spending on security – usually the former. For sure, critical production systems will be impacted, and I don’t mean in a positive way.

You see, like most things in security, some thinking is involved. It does suit the agenda of many in this field to pretend that security analysis can be reduced down to parrot-fashion recital of a check list. Unfortunately though, some neural activity is required, at least if gaining the trust of our customers (C-levels, other business units, home users, etc) is important to us.

The actual contents of the check list should be the easy part, although unfortunately, as of 2013, we all seem to be using different versions of the check list, and some versions are appallingly lacking. The worst offenders here deliver with a quality that is inversely proportional to the prices they charge – and these are usually external auditors from big-4 consultancies, all of whom have very decent check lists, but who also fail to ensure that Consultants use said check list. There are plenty of cases where the auditor knocks up their own garagey-style shell script for testing. In one case I witnessed not so long ago, the script for testing Red Hat Enterprise Linux consisted of 6 tests (!), and one of the tests showed a misunderstanding of the purpose of the /etc/ftpusers file.

But the focus here is not on the methods deployed by auditors, its more general than that. Vulnerability testing in general is not a small subject. I have posted previously on the subject of “manual” network penetration testing. In summary: there will be a need for some businesses to show auditors that their perimeter has been assessed by a “trusted third party”, but in terms of pure value, very few businesses should be paying for the standard two week delivery with a four person team. For businesses to see any real value in a network penetration test, their security has to be at a certain level of maturity. Most businesses are nowhere near that level.

Then there is the subject of automated, unauthenticated “scanning” techniques which I have also written about extensively, both in an earlier post and in Chapter Five of Security De-engineering. In summary, the methodology used in unauthenticated vulnerability scanning results in inaccuracy, large numbers of false positives, wasted resources, and annoyed operations and development teams. This is not a problem with any particular tool, although some of them are especially bad. It is a limitation of the concept of unauthenticated testing, which amounts to little more than pure guesswork in vulnerability assessment.

How about the growing numbers of “vulnerability management” products out there (which do not “manage” vulnerability – they make an attempt at assessing it)? Well, most of them are either purely an expensive graphical interface to [insert free/open source scanner name], or, if the tool was designed to make a serious attempt at accurate vulnerability assessment (most were not), then the tests will be lacking or over-done, inaccurate, and/or the scanning will be done in an insecure way (e.g. the application is run over a public URL, with the result that all of your configuration data, including admin passwords, are held by an untrusted third party).

In one case, a very expensive VM product literally does nothing other than port scan. It is configured with hundreds of “test” patterns for different types of target (MS Windows, *nix, etc.), but if you’re familiar with your OS configurations, you will look at the tool output and be immediately suspicious. I ran the tool against a Linux and a Windows test target and “packet sniffed” the scanning engine’s probe attempts. In summary, the tool does nothing. It just produces a long list of configuration items (so, effectively, a kind of Security Standard for the target) without actually testing for the existence of vulnerability.

So the overall principle: the company [hopefully] has a security standard for each major operating system and database on their network, and each item in the standard needs to be tested for all, or some, of the information asset hosts in the organization, depending on the overall strategy and network architecture. As of the time of writing, there will need to be some manual/scripted augmentation of automatic vulnerability assessment processes.
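
As an illustration of what that scripted augmentation can look like – a fragment that tests two common Linux standard items directly on the host (the items and paths are examples, not a complete baseline):

    # Test standard items on the host itself, rather than guessing from
    # the network. Extend with the items from your own standard.
    import os
    import re
    import stat

    def sshd_denies_root_login(path="/etc/ssh/sshd_config"):
        # PASS only if an uncommented 'PermitRootLogin no' is present
        with open(path) as f:
            return any(re.match(r"\s*PermitRootLogin\s+no\b", line)
                       for line in f)

    def crontab_not_world_writable(path="/etc/crontab"):
        return not (os.stat(path).st_mode & stat.S_IWOTH)

    checks = [
        ("PermitRootLogin no", sshd_denies_root_login),
        ("/etc/crontab not world-writable", crontab_not_world_writable),
    ]
    for name, fn in checks:
        print(f"[{'PASS' if fn() else 'FAIL'}] {name}")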

So, once armed with a list of vulnerabilities, what does one do with it? The vulnerability assessment is the first step. What has to happen after that? Can Security just toss the report over to ops and hope for the best? Yes, they can, but this wouldn’t make them very popular, and there also needs to be some input from security regarding the actual risk to the business. Taking the typical function of operations teams (I commented on the functions of, and relationships between, security and operations in an earlier post), if there is no input from security, then every risk mitigation that carries any kind of impact will be blocked.

There are some security service providers/consultancies who offer a testing AND a subsequent hardening service. They want to offer both detection AND a solution, and this is very innovative and noble of them. However, how many security vulnerabilities can be addressed blindly without impacting critical production processes? Rhetorical question: can applications be broken by applying security fixes? If I remove the setuid bit from a root-owned X Window-related binary, it probably has no effect on business processes. Right? What if operations teams can no longer authenticate via their usual graphical interface? This is at least a little bit disruptive.
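
To make that example concrete (the Xorg path varies by distribution – treat it as illustrative):

    # Inventory the setuid-root binaries: each one is a hardening decision
    find / -xdev -perm -4000 -user root -type f 2>/dev/null

    # Removing the bit is one command; predicting the breakage is the work.
    # On some distributions this stops non-root users starting an X session:
    chmod u-s /usr/lib/xorg/Xorg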

In practice, as it turns out, if you look at a Security Standard for a core technology – let’s take Oracle 11g as an example – how many of the numerous elements of the Standard can be implemented without fear of breaking applications, limiting access for users or administrators, or generally just making trouble-shooting of critical applications a lot less efficient? The answer is: not many. Dependencies and other problems can come from surprising sources.

Who in the organization knows about dependencies and the complexities of production systems? Usually that would be IT/Network Operations. And how about application-related dependencies? That would be application architects, or just generally we’ll say “dev teams”, as they’re so affectionately referred to these days. So the point: even if security does have admin access to IT resources (rare), is risk mitigation/hardening a job purely for security? Of course the answer is a resounding no – and the same goes for IT Operations.

So, operations and applications architects bring knowledge of the complexities of apps and infrastructure to the table. Security brings knowledge of the network architecture (data flows, firewall configurations, network device configurations), the risk of each vulnerability (how hard is it to exploit, and what is the impact?), and the importance to the business of information assets/applications. Armed with the aforementioned knowledge, informed and sensible decisions on what to do with the risk (accept, mitigate, work around, or transfer) can be made by the organization – not by security, or operations, alone.

The early days of deciding what to do with the risk will be slow and difficult and there might even be some feisty exchanges, but eventually, addressing the risk becomes a mature, documented process that almost melts into the background hum of the machinery of a business.