Fintechs and Security – Part One

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Recruiting and Interviews

In the prologue to this series, I set the scene for my attempt to relate my experiences with fintechs, based on what I am hearing on the street and what I’ve seen myself. In this instalment, I look at how fintechs are approaching the hiring of security specialists and how, based on typical requirements, things could be improved.

The most common fintech setup is public cloud (AWS, Azure, GCP, etc). They are developing, or have developed, software for deployment in cloud, with a mobile/web front end. They use devops tools to deploy code, manage and scale (e.g. Kubernetes), collaborate (Git variants), and manage infrastructure (Ansible, Terraform, etc), and perhaps they do some SAST. Sometimes they even have different Virtual Private Clouds (VPCs) for different levels of code maturity, such as one for testing and one for management. And third-party connections over APIs are not uncommon.

Common Pitfalls

  • Fintechs adopt the stance: “we don’t need outside help because we have hipsters. They use acronyms and seem quite confident, and they’re telling me they can handle it”. While it’s not impossible that this can work, it’s unlikely that a few devops peeps can give a fintech the help they need – this will become apparent later.
  • Using devops staff to interview security engineers. More on this problem later.
  • Testing security engineers with a list of pre-prepared questions. This is likely to end in tears for the fintech. Security is too wide and deep an area for this approach, and fintechs will reject a lot of good candidates by doing this. Just have a chat! For example, ask the candidate their opinion on the usefulness of VA scanners. The length of the response is as important as its technical accuracy: a long response gives an indication of passion for the field.
  • Getting on the security bandwagon too late (such as when you’re already in production!). At that point you are looking at two choices: engage an experienced security hand and ignore their advice, or take their advice and face downtime and massive disruption. Most will choose the first option and run the project at massive business risk.

The Security Challenge

Infosec is important, just as checking to see if cars are approaching before crossing the road is important. And the complexity of infosec mandates architecture. Civil engineering projects use architecture. There’s a good reason for that – which doesn’t need elaborating on.

Whenever you are trying to build something complex with lots of moving parts, architecture is used to reduce the problem down to a manageable size, and to help build good practices in risk management. The end goal is protective monitoring of an infrastructure that is built with requirements for meeting both risk and compliance challenges.

Because of the complexity of the challenge, it’s good to split it into manageable parts. This doesn’t require talking endlessly about frameworks such as SABSA. The following six-capability (people, process, technology) approach is sleek and low-footprint enough for fintechs:

  • Threat and Vulnerability Management (TVM)
  • Logging – not “telemetry”, threat intelligence, or threat hunting. Just logging. Not even necessarily SIEM.
  • Cryptography and Key Management
  • Identity Management
  • Business Continuity Management
  • Trust (network segmentation, firewalls, proxies).

I will cover these six areas in more detail in the articles that follow.

The above-mentioned capabilities each have an engineering and an architecture component, which briefly sketches the roles of security engineers and architects. A SABSA-based approach without the SABSA theory can work. An architect takes risk (perhaps with a threat modelling approach) and compliance goals into account in a High Level Design (HLD), and generates requirements for the Low Level Design (LLD), which will be compiled by a security engineer. The LLD gives a breakdown of the security controls that meet the requirements of the HLD, and how to configure those controls.
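To make the HLD/LLD distinction concrete, here is a minimal sketch in Python. The requirement, control identifiers, and configuration hints are invented for illustration; a real design would of course be far richer.

```python
# Illustrative only: invented requirement and control identifiers, not a real design artefact.
# HLD: risk/compliance-driven requirements. LLD: the concrete controls and their configuration.

hld_requirement = {
    "id": "HLD-LOG-01",
    "statement": "All administrative access to production hosts must be logged and reviewed.",
    "driver": "risk: credential abuse; compliance: logging requirements (e.g. PCI DSS)",
}

lld_controls = [
    {
        "id": "LLD-LOG-01a",
        "control": "auditd rule capturing command execution by privileged users",
        "config_hint": "-a always,exit -F arch=b64 -S execve -F euid=0 -k admin-exec",
    },
    {
        "id": "LLD-LOG-01b",
        "control": "forward auth and audit logs to the central collector",
        "config_hint": "rsyslog: *.* @@log-collector.internal:514",
    },
]

def trace(requirement, controls):
    """Show which LLD controls satisfy a given HLD requirement."""
    print(f"{requirement['id']}: {requirement['statement']}")
    for c in controls:
        print(f"  -> {c['id']}: {c['control']}")

trace(hld_requirement, lld_controls)
```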

Security Engineers and Devops Tools

What happens when a devops peep interviews a security peep? Well – they only have their own frame of reference to go by. They will of course ask questions about devops tools. How useful is this approach? Not very. Is this a good test of a security engineer? Based on the security requirements for fintechs, the answer is clear.

Security engineers can use devops tools, and they do, and it doesn’t take a two-week training course to learn Ansible. There is no great mystery in Kubernetes. If you hire a security engineer with the right background (see the previous post in this series) they will adapt easily. The word on the street is that Terraform config isn’t the greatest mystery in the world, and as long as you know Linux and can understand what the purpose of the tool is (how it fits in, what the expected result is), the time taken to get productive is one day or less.

The point is: if I’m a security engineer and I need to, for example, set up a cloud SIEM collector, some fintechs will use one Infrastructure as Code (IaC) tool and others another – one will use Chef, another Ansible, and there are other permutations. Is a lack of familiarity with the tool a barrier to progress? No. So why would you test a security engineer’s suitability for a fintech role by asking questions about, e.g., stanzas in Ansible config? You need to ask them questions about the six capabilities I mentioned above – i.e. security questions for a security professional.
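To illustrate why the tool barely matters, here is a minimal Python sketch of the security content behind a “SIEM collector” task: forward the host’s logs to a central collector. The collector hostname and port are placeholders; whether an Ansible task, a Chef recipe, or a Terraform provisioner actually delivers the file is an implementation detail.

```python
# Minimal sketch: the security substance of a SIEM collector task, independent of the IaC tool.
# "siem-collector.internal" and port 514 are hypothetical placeholders.

SIEM_COLLECTOR = "siem-collector.internal"
SIEM_PORT = 514

def rsyslog_forwarding_conf(collector: str, port: int) -> str:
    """Return an rsyslog drop-in that forwards all facilities to the SIEM collector over TCP."""
    return (
        "# Managed file - forward everything to the central SIEM collector\n"
        f"*.* @@{collector}:{port}\n"
    )

if __name__ == "__main__":
    # An Ansible task would template this out to /etc/rsyslog.d/90-siem.conf and restart rsyslog;
    # a Chef recipe would do exactly the same with a template resource.
    print(rsyslog_forwarding_conf(SIEM_COLLECTOR, SIEM_PORT))
```

The security knowledge is in deciding which hosts and log sources matter and where the collector sits in the network – not in the IaC syntax that pushes the file out.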

Security Engineers and Clouds

Again – what was the transition period from on-premise to cloud? Let’s take an example – I know how networking works on-premise. How does it work in cloud? There is this thing called a firewall on-premise. In Azure it’s called a Network Security Group. In AWS it’s called a Security Group – functionally, a firewall. In Google Cloud it’s called a …drum roll…firewall rule. From the web-based portal UI for admin, these appear to filter by source and destination addresses and services, just like an actual non-virtual firewall. They can also filter by service account (GCP) or VM tag.
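As a rough sketch of how little the mental model changes, here is what an AWS security group rule looks like via boto3 (the AWS SDK for Python). The VPC ID and CIDR below are placeholders; the point is that it is still “allow this protocol/port from this source”, exactly as on a non-virtual firewall.

```python
# Sketch only: the VPC ID and CIDR are hypothetical placeholders; needs valid AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a group and allow HTTPS from one internal subnet - source, protocol, and port:
# the same firewall mental model as on-premise.
sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Allow HTTPS from the internal app subnet only",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.20.0.0/24", "Description": "app subnet"}],
    }],
)
```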

There is another thing called a VPN. And another thing called a Virtual Router. In the world of on-premise, a VPN is a …VPN. A virtual router is a…router. There might be a connection here!

Cloud Service Providers (CSPs) in general don’t rewrite IT from the ground up. They still use TCP/IP. They host virtual machines (VMs) instead of real machines, but as VMs have operating systems which security engineers (with the right background) are familiar with, where is the complication here?

The areas that are quite new compared to anything on-premise are those where the CSP has provided some technology for a security capability such as SIEM, secrets management, or Identity Management. But these are usually sub-standard for the purpose they were designed for – this is deliberate – the CSPs want to work with Commercial Off The Shelf (COTS) vendors such as Splunk and Qualys, who will provide an IaaS or SaaS solution.

There is also the subject of different clouds. I see some organisations being fussy about this, e.g. a security engineer who worked a lot with Azure but not AWS is not suitable for a fintech that uses AWS. Apparently. Well, given that the transition from on-premise to cloud was relatively painless, how painful is it to transition from Azure to AWS or …? I was on a project last summer where the fintech used Google Cloud Platform. It was my first date with GCP, but I had worked with AWS and Azure before. Was it a problem? No. Do I have an IQ of 160? Hell no!

The Wrap-up

The problems we see in fintech infosec hiring most likely represent a lack of understanding of how fintechs can best manage risk with a budget that is considerably smaller than that of a large MNC, for example. But in security we haven’t been particularly helpful for fintechs – the problem is on us.

The security challenge for fintechs is not just about SAST/DAST of their code. The challenge is wider and can be represented as six security capabilities that need to be designed with an architecture and engineering view. This sounds expensive, but it’s a one-off design process that can be covered in a few weeks. The ongoing security challenge, whereby capabilities are pushed through into the final security operations stage, can be realised with one or two security engineers.

The lack of understanding of requirements in security leads to some poor hiring practices, the most common of which is to have a devops guru interview a security engineer. The fintech will be rejecting lots of good security engineers with this approach.

In so many ways, the growth of small to medium development houses has exposed the weaknesses in the infosec sector more than they were ever exposed with large organisations. The sector’s inability to help fintechs exposes a fundamental lack of skilled personnel, particularly at the strategic/advisory level.

Fintechs and Security – Prologue

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Fintechs and Security – A Match Made In Heaven?

Well, no. Far from it actually. But again, as I’ve been repeating for 20 years now, it’s not on the fintechs. It’s on us in infosec, and infosec has to take responsibility for these problems in order to change. If I were the CTO of a fintech, I would be confused at the array of opinions and advice, which vary radically from one expert to another.

But there shouldn’t be such confusion with fintech challenges. Confusion only reigns where there’s FUD. FUD manifests itself in the form of over-lengthy coverage and excessive focus on “controls” (the archetypal shopping list of controls to be applied regardless of risk – expensive), GRC, and “hacking”/“[red, blue, purple, yellow, magenta, teal, slate grey] team”/“appsec”.

Really what’s needed is something like this (in order):

  • Threat modelling lite – a one-off, reviewed periodically.
  • Architecture lite – a one-off, reviewed periodically.
  • Engineering lite – a one-off, reviewed periodically.
  • Secops lite – the result of the previous three – an ongoing protective monitoring capability, the first level of monitoring and response for which can be outsourced to a Managed Service Provider.

I will cover these areas in more detail in later episodes, but what’s needed is, for example, a security design that only provides the answer to “What is the problem? How are we going to solve it?” – a SIEM capability design, say – not more than 20 pages. No theory. Not even any justifications. And one that can be consumed by non-security folk (i.e. it’s written in the language of business and IT).

Fintechs and SMBs – How Is The Infosec Challenge Unique?

With a lower budget, there is less room for error. Poor security advice can co-exist with business almost seamlessly in the case of larger organisations. Not so with fintechs and Small and Medium Businesses (SMBs). There have been cases of SMBs going under as a result of a security incident, whereas larger businesses don’t even see a hit on their share price.

Look For A Generalist – They Do Exist!

The term “generalist” is seen as a four-letter word in some infosec circles. But it is possible for one or two generalists to cover the needs of a fintech at green-field, and then, going forward into operations, it’s not unrealistic to work with one in-house security engineer of the right background, the key ingredients of which are:

  • Spent at least 5 years in IT, in a complex production environment, and outgrew the role.
  • Has flexibility – the old example still applies today – a Unix fan who has tinkered with Windows. In other words, a technology lover. One who has shown interest in networking even though they’re not a network engineer by trade. Or one who sought to improve efficiency by automating a task with shell scripting.
  • Has an attack mindset – without this, how can they evaluate risk or confidently justify a safeguard?

I have seen some crazy specialisations in larger organisations, e.g. “Websense Security Engineer”! If fintechs approached security staffing in the same way as larger organisations, they would have more security staff than developers, which is of course ridiculous.

So What’s Next?

In “On Hiring For DevSecOps” I covered some common pitfalls in hiring and explained the role of a security engineer and architect.

There are “fallback” or “retreat” positions in larger organisations and fintechs alike, wherein executive decisions are made to reduce the effort to a less-than-advisable position:

  • Larger organisations: a compliance-driven strategy as opposed to a risk-based strategy. Because of a lack of trustworthy security input, execs end up saying “OK I give up, what’s the bottom line of what’s absolutely needed?”
  • Fintechs: Application security. The connection is made between application development and application security – which is quite valid, but the challenge is wider. Again, the only blame I would attribute here lies with infosec. Having said that, I noticed this year that “threat modelling” has started to creep into job descriptions for Security Engineers.

So for later episodes – of course the areas to cover in security are wider than appsec, but again there is no great complication or drama or arm-waving:

  • Part One – Hiring and Interviews – I expand on “On Hiring For DevSecOps“. I noticed some disturbing trends in 2019 and I cover these in some more detail.
  • Part Two – Security Architecture and Engineering I – Threat and Vulnerability Management (TVM)
  • Part Three – Security Architecture and Engineering II – Logging (not necessarily SIEM). No Threat Hunting, Telemetry, or Threat “Intelligence”. No. Just logging. This is as sexy as it needs to be. Any more sexy than this should be illegal.
  • Part Four – Security Architecture and Engineering III – Identity Management (IDAM) and Cryptography and Key Management (CKM).
  • Part Five – Security Architecture and Engineering IV – Trust (network trust boundary controls – e.g. firewalls and forward proxies), and Business Resilience Management (BRM).

I will try to get the first episode on hiring and interviewing out before 2020 hits us, but I can’t make any promises!

On Hiring For DevSecOps

Based on personal experience, and second hand reports, there’s still some confusion out there that results in lots of wasted time for job seekers, hiring organisations, and recruitment agents.

There is a want or a need to blame recruiters for any hiring difficulties, but we need to stop that. There are some who try to do the right thing but are limited by a lack of any sector experience. Others have been inspired by Wolf Of Wall Street while trying to sound like Simon Cowell.

Is it on the hiring organisation? Well, it is, but let’s take responsibility for the problem as a sector for a change. Infosec likes to shift responsibility and not take ownership of the problem. We blame CEOs, users, vendors, recruiters, dogs, cats, “Russia“, “China” – anyone but ourselves. Could it be we failed as a sector to raise awareness, both internally and externally?

So What Are Common Understandings Of Security Roles?

After 25 years+ we still don’t have universally accepted role descriptions, but at least we can say that some patterns are emerging. Security roles involve looking at risk holistically, and sometimes advising on how to deal with risk:

  • Security Engineers assess risk and design, and sometimes also implement, controls. BTW some sectors, legal in particular, still struggle with this. Someone who installs security products is in an IT ops role. Someone who upgrades and maintains a firewall is in an IT ops role. The fact that a firewall is a security control doesn’t make this a security engineering function.
  • Security Architects take risk and compliance goals into account when they formulate requirements for engineers.
  • Security Analysts are usually level 2 SOC analysts, who make risk assessments in response to an alert or vulnerability, and act accordingly.

This subject evokes as much emotion as CISSP. There are lots of opinions out there. We owe it to ourselves to be objective. There are plenty of sources of information on these role definitions.

No Aspect Of Risk Assessment != Security. This is Devops.

If there is no aspect of risk involved with a role, you shouldn’t be looking for a security professional. You are looking for DEVOPS peeps. Not security peeps.

If you want a resource to install and configure tools in cloud – that is DEVOPS. It is not Devsecops. It is not Security Engineering or Architecture. It is not Landscape Architecture or Accounting. It is not Professional Dog Walker. It is DEVOPS. And you should hire a DEVOPS person. If you want a resource to install and configure appsec tools for CI/CD – that is DEVOPS. If you want a resource to advise on or address findings from appsec tools, that is a Security Analyst in the first case, and DEVSECOPS in the second case. In the second case you can hire a security bod with coding experience – they do exist.

Ok Then So What Does A DevSecOps Beast Look Like?

DevSecOps peeps have an attack mindset from their time served in appsec/pen testing, and are able to take on board the holistic view of risk across multiple technologies. They are also coders, and can easily adapt to and learn multiple different devops tools. This is not a role for newly graduated peeps.

Doing Security With Non-Security Professionals Is At Best Highly Expensive

Another important point: what usually happens because of the skills gap in infosec:

  • Cloud: devops fills the gap.
  • On-premise: Network Engineers fill the gap.

Why doesn’t this work? I’ve met lots of folk who wear the aforementioned badges. Lots of them understand what security controls are for. Lots of them understand what XSS is. But what none of them understand is risk. That only comes from having an attack mindset. The result will usually be overspend – every security control ever conceived by humans will be deployed, while the infrastructure remains full of holes (e.g. a default-install IDS or WAF is generally fairly useless and comes with a high price tag).

Vulnerability assessment is heavily impacted by not engaging security peeps. Devops peeps can deploy code testing tools and interpret the output. But a lack of a holistic view or an attack mindset will result in either no response to the vulnerability, or an excessive response. Basically, the Threat and Vulnerability Management capability is broken under these circumstances – a sadly very common scenario.

SIEM/Logging is heavily impacted – what will happen is either nothing (default logging – “we have Stackdriver, we’re ok”), or a SIEM tool will be provisioned which becomes a black hole for events and also for budgets. All possible events are configured from every log source. Not so great. No custom use cases will be developed. The capability will cost zillions while also not alerting when something bad is going down.
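For contrast, a custom use case doesn’t have to be exotic. Here is a minimal sketch, outside any particular SIEM product, of one of the simplest: flag a source IP that generates many failed SSH logins. It assumes standard sshd “Failed password” lines in an auth.log-style file and ignores time-windowing for brevity; a real correlation rule would bucket by time.

```python
# Sketch of a simple SIEM use case: repeated failed SSH logins from one source.
# Assumes standard sshd log lines; the threshold and log path are illustrative.
import re
from collections import defaultdict

FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")
THRESHOLD = 10

def alerts(lines):
    """Count failed-login lines per source IP and return those over the threshold.
    A real correlation rule would also bucket by time window."""
    counts = defaultdict(int)
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return [(ip, n) for ip, n in counts.items() if n >= THRESHOLD]

if __name__ == "__main__":
    with open("/var/log/auth.log") as f:
        for ip, n in alerts(f):
            print(f"ALERT: {n} failed SSH logins from {ip}")
```

A handful of use cases like this, driven by the threats the business actually cares about, beats onboarding every event from every source.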

Identity Management is not a case of deploying ForgeRock (please know what you’re getting into with this – it’s a fork of Sun Microsystems/Oracle’s identity management show) or Azure AD and that’s it, job done. If you just deploy this with no thought about the problem you’re trying to solve in identity management, you will be fired.

One of the classic risk problems that emerges when no security input is taken: “there is no personally identifiable information in development Virtual Private Clouds, so there is no need for security controls”. Well – intelligence such as the database schema leaks from there, and attackers love this. And don’t you want your code to be safe and available?

You see a pattern here. It’s all or nothing, either of which ends up being very expensive or worse. But actually, come to think of it, expensive is the goal in some cases. Hold that thought maybe.

A Final Word

So – if the word risk doesn’t appear anywhere in the job description, it has nothing to do with security. You are looking for devops peeps in this case. And – security is an important consideration for cloud migrations.

Make Cybersecurity Great Again, Again.

Another ‘we can fix infosec‘ piece is out there.

“OK I admit we can’t make cybersecurity great again, because it never was great in the first place”.

It was certainly better than it is now. At one point in time, we had the technical folk, but not the managers. Now we have neither. There was a brain drain from security around the early 2000s whereby tech folk left in droves, either voluntarily or ‘as a business need’. They were seen as aesthetically unpleasing at a time when the perception was that a threat did not exist! In the following years risks increased on top of the aforementioned shedding of intellectual capital from organisations. Then around 2010 things reached boiling point when security incidents found their way back onto the front pages of the Financial Times.

So around 2010 some organisations wanted to get ‘tech’ again, but since all the skills had been lost 10 years earlier, who knows what good looks like? The same folk who inherited the kingdom of security with their fine aesthetics were now charged with finding the skills, while not knowing what the skills look like.

“President Trump recently appointed Rudy Giuliani as cybersecurity adviser. Some reacted to this as a joke”. I would agree that this reaction is short-sighted.

“Well me and my colleagues are in industry and we see the issues every day, we are the consultants, the IT auditors, systems administrators, security managers and network engineers. No we are not CEOs or business owners but it’s our job to educate and inform these business leaders of the risk of doing business on the internet. Sometimes they listen and too often they don’t seem to hear us”. All you can do is confidently state your case and get it in writing somewhere. But be aware that confidence should never be faked. Either learn the skills necessary FAST, or find another vocation. C-levels can detect BS, ladies and gentlemen, and the more of you that try to BS a C-level, the harder you’re making it for the rest of us. Ask yourselves why it is that security was once a board-level thing and now most security chiefs report to a CIO or COO.

“I see this every day as I travel across Florida doing IT audits and assessments. The organizations with a security role funded do 90 percent better than those with no such funded position.” Audits are a poor way of assessing the performance of security. Really poor in fact. It can be said that a failed audit is uber bad, but a passed audit does not mean all is good.

“One of the problems of the Internet is that we didn’t install what I like to call strong user authentication or strong file authentication.” Yes, we did. It’s called an Operating System. For the most part the security sector has shied away from the OS because it’s hard for folk who don’t have an IT background to understand. Infosec would like to convince decision makers that it doesn’t exist, because if it does exist, then vendors can’t sell many of their snake oil offerings, and non-tech infosec folk are in a vulnerable position.

Operating Systems come with a slew of controls that can be used to thwart and/or detect attacks – perhaps it would be good if we started using them and reporting on how effectively the organisation uses each control? Why spend extra on snake oil products? For example, why spend gazillions on identity management in cloud deployments when we already have it?
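As a taste of “use what the OS already gives you”, here is a minimal Python sketch that checks a few sshd hardening directives against expected values and reports drift. The expected values are illustrative, not a complete standard; the same idea extends to auditd rules, file permissions, local firewall policy, and so on.

```python
# Sketch only: a tiny configuration-drift check against built-in OS controls.
# The expected values below are illustrative, not a full hardening standard.

EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def check_sshd(path="/etc/ssh/sshd_config"):
    """Parse sshd_config and report directives that differ from the expected values."""
    seen = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2 and parts[0] in EXPECTED:
                seen[parts[0]] = parts[1].strip()
    findings = []
    for key, want in EXPECTED.items():
        got = seen.get(key, "<unset, compiled-in default>")
        if got.lower() != want:
            findings.append(f"{key}: expected '{want}', found '{got}'")
    return findings

if __name__ == "__main__":
    for finding in check_sshd():
        print("DRIFT:", finding)
```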

“All too often we see organizations relegating cyber security to the IT department. I have said this a hundred times, cybersecurity is a business problem not an IT issue”. This statement suits a certain agenda that plays to the non-tech/GRC oriented folk. Security is a business problem AND an IT problem, but in terms of the intellectual capital required, it’s 10% a business problem and 90% an IT problem.

“All users need awareness training” – yes, I think we are now at the stage where security has to be something that is everyone’s responsibility, in the same way that checking for cars before crossing the road is everyone’s responsibility.

Infosec is in dire straits because of the loss of critical skills from the sector, and now we have a situation where people with the wrong skills are reporting to the likes of Rudy Giuliani with a lack of confidence and a myriad of confused messages, mostly built around self-serving interests at the expense of the whole. It’s likely the former mayor of NY won’t be any wiser as to the scale of the problem, and therefore how to solve it.

Security professionals with no IT background are like animal handlers who are afraid of animals, and it’s these folk who are representing the sector.

The message that will be delivered to Giuliani will include the claim that the sector needs more money. You know it really doesn’t – it needs less. Stop spending money on “next gen” products where “old gen” gets it done. “Legacy” stuff isn’t legacy unless you allow yourself to be duped by vendors into believing that it’s legacy. Really, firewalls and the OS offer most of what’s needed.

The same goes for people. We have too many people. Don’t create jobs around products – this is creating micro-specialisations that you are then calling ‘skills’, and hiring dedicated staff who won’t be very busy and won’t be very enthused or ‘synergistic’. This is what you’re looking for: http://www.seven-stones.biz/blog/addressing-the-information-security-skills-gap/

As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” This quote lends itself to the problems in information security more than to any other sector. Moreover, it has defined information security as a broken entity since the field was first adopted seriously by banks and then others.

How To Break Into Information Security

I’ve been asked a few times recently, usually by operations folk, to give some advice about how to break into the security sector, so under much pain I decided to commit my thoughts on the subject to this web log post. I’ve commented on this subject before and more extensively in chapter 6 of Security De-engineering, but this version is more in line with the times (up to 2012 I was advising a wide pass-by trajectory of planet infosec) and it will be shorter – you have my word(s).


First, I’d be wary about trying to get into security purely for financial reasons (David Froud has an excellent blog and one of his posts covered this point well). At the time of writing it is possible to get into the field just by having an IT background and a CISSP. But don’t do that unless you have what’s REALLY required (do not judge what is REALLY required for the field based on job descriptions – at the time of writing, there are still plenty of mistakes being made by organisations). Summarising this in a very brief way:

  • You feel like you have grown out of pure IT-based roles and sort of excelled in whatever IT field you were involved in. You’re the IT professional who doesn’t just clear their problem tickets and switch off. You are, for example, looking for ways to automate things, and self-teach around the subject.
  • Don’t think about getting into security straight from higher education. While it is possible, don’t do it. Just…don’t. Operational Security (or opsec/devsecops) is an option, but have some awareness of what this is (scroll down to the end for an explanation).
  • Flexibility: can jump freely from a Cisco switch to an Oracle Database on any Operating System. Taking an example: some IT folk are religious about Unix and experience a mental block when it comes to Windows – this doesn’t work for security. Others have some kind of aversion to Cloud, whereas a better mindset for the field is one that embraces the challenge. Security pros in the “engineer” box should be enthusiastic about the new opportunities for learning offered by extended use of YAML, choosing the ideal federated identity management solution, Puppet, Azure Powershell, and so on. [In theory] projects where on-premise applications are being migrated to Cloud are not [in theory] such a bad place to be in security [in theory].
  • You like coding. Maybe you did some Python or some other scripting. What I’ve noticed is that coding skills are more frequently being seen as requirements. In fact I heard that one organisation went as far as putting candidates through a programming test for a security role. Python, Ruby, Shell ([Li,U]nix) and Powershell are common requirements these days. But even if role descriptions don’t mention coding as a requirement – having these skills demonstrates the kind of flexibility and enthusiasm that go well with infosec. “Regex” comes up a lot, but if you’ve done lots of Python/Ruby and/or Unix sed/awk you will be more than familiar with regular expressions (there’s a tiny example right after this list).
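For anyone wondering what that regex point looks like in practice, here is the kind of two-minute job that comes up constantly in security work – pulling IPv4 addresses out of a log line. The sample line is made up.

```python
# Tiny sketch: extract IPv4 addresses from a log line (the line itself is invented).
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

line = "Oct 12 09:14:01 gw sshd[4121]: Failed password for admin from 203.0.113.45 port 53711 ssh2"
print(IPV4.findall(line))  # ['203.0.113.45']
```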

There is a non-tech element to security (sometimes referred to as “GRC”) but this is something you can get into later. Being aware of international standards and checking to see what’s in a typical corporate security policy is a good idea, but don’t be under the impression that you need to be able to recite verses from these. Generally speaking “writing stuff” and communication is more of a requirement in security than other fields, but you don’t need to be polished at day zero. There are some who see the progression path as Security Analyst –> Security Consultant (Analyst who can communicate effectively).

Another common motivator is hacker conferences or Mr Robot. Infosec isn’t like that. Even the dark side – you see Elliott with a hoody writing code with electronic techno-beats in the background, but hackers don’t write code to compromise networks to any huge degree, if at all. Mostly, the code is written for them by others. And as with the femtocell and Raspberry Pi incidents, they usually have to assume a physical presence on the inside, or they are an internal employee themselves, or they dupe someone on the inside of the organisation under attack. Even if you’re in a testing role on the light side, the tests are vastly restricted and there’s a very canned approach to the whole thing, with performance KPIs based on reports or something else that doesn’t link to actual intellectual value. It’s far from glamorous. There’s an awful lot of misunderstanding out there. What is spoken about at hacker confz is interesting, but it’s not usually stuff that is required to prove the existence of vulnerability in a commercial penetration test – most networks are not particularly well defended, and very little attention is given to results, more so because in most cases the only concern is getting ticks in boxes for an audit – and the auditors are often 12 years old and have never seen a command shell. Quality is rarely a concern.

It’s good practice to build up a list of the more influential bloggers and a decent Twitter feed and check what’s happening daily. But also, here are the books that I found most useful in terms of starting out in the field:

  • TCP/IP Illustrated – there are 3 volumes. 1 and 2 are the most useful. Then…
  • Building Internet Firewalls – really a very good way to understand some of the bigger picture ideas behind network architecture design and data flows. I hear rolling of eyes from some sectors, but the same principles apply to Cloud and other “modern” ideas that are from the 90s. With Cloud you have less control over network aspects but network access control and trust relationships are still very much a concern.
  • Network Security Assessment – the earlier versions are also still pertinent unless you will never see a Secure Shell or SMB port (hint: you will).
  • Security Engineering – there’s a very good chapter on Cryptography and Key Management.
  • The Art of Software Security Assessment – whether or not you will be doing appsec for a living you should look at OWASP‘s site and check out Webgoat. They are reportedly looking to bolster their API security coverage, which is nice (a lot of APIs are full of the same holes that were plugged in public apps by the same orgs some years ago). But if you are planning on network penetration testing or application security as a day job, then read this book, its priceless and still very applicable today.
  • The Phoenix Project – a good illustrative background for gaining a better understanding of the devops landscape.

Also – take a look at perhaps a Windows security standard from the range of CIS benchmarks.

Finally – as I alluded to earlier – opsec is not security. Why do I say this? Because I have come across many who believed they had made it as a security pro once they joined a SOC/NOC team, and then switched off. Security is a holistic function that covers the entire organisation – not just its IT estate, but its people, management, availability and resilience concerns, and processes. As an example – you could be part of a SOC team analysing the alerts generated by a SIEM (BTW some of the best SIEM material online is that written by Dr Anton Chuvakin). This is a very product-centric role. So what knowledge is required to architect a SIEM and design its correlation rules? This is security. The same applies to IDS. Responding to alerts and working with the product is opsec. Security is designing the rulebase on an internal node that feeds off a strategically placed network tap. You need to know how hackers work, among other areas (see above). Security is a holistic function. A further example: opsec takes the alerts generated by vulnerability management enterprise suites and maybe does some basic false-positive testing. But how does the organisation respond effectively to a discovered vulnerability? This is security.

Addressing The Information Security Skills Gap

We are told there is a skills gap in information security. I agree – there is, but recent suggestions to address the gap take us to dangerous places that are great for recruitment agencies, but not so great for the business world.

I want to steer away from use of the phrase ‘skills’ in this article because it’s too micro and the phrase has been violated by modern hiring practices. We are not looking for ‘Websense’ or ‘DLP’ skills, or, as I saw recently, ‘HSM’ skills. These requirements are silly unless it is the plan for organisations to spend 10 to 50 times more than they need on human resource and have a security team of 300. It’s healthier for organisations to look at ‘habits’ or ‘backgrounds’, and along those lines, in information security we’re looking for the following:

  • At least 5 years in an IT discipline: sys admin, DBA, devops bod, programmer for example
  • Evidence of having excelled in those positions and sort of grown out of them
  • Flexibility: for example, the crusty Radagast BSD-derivative disciple who has no fundamentalist views of other operating systems (think ‘Windows’) and not only can happily work with something like Active Directory, but they actually love working with Active Directory
  • A good-to-have-but-not-critical is past evidence of breaking or making things, but this should be seen as a nice bonus. In its own right, it is insufficient – recruiting from hacker confz is far from guaranteed to work – too much to cover here.

So really it should be seen that a career in infosec is a sort of ‘graduation’ on from other IT vocations. There should be an entrance exam based on core technologies and penetration testing. The career progression path goes something like: Analyst (5 years) –> Consultant –> Architect/Manager. Managers and architects cannot be effective if they do not have a solid IT background. An architect who doesn’t know her way around a Cisco router, cannot implement a new SIEM correlation rule, or cannot run or interpret the output from a packet sniffer is not an architect.

Analysts and Consultants should be skilled with the core building blocks to the level of being confidently handed administrative access to production systems. As it is, security pros find it hard to even get read-only access to firewall management suites. And fast access to information on firewall rules can be critical.

Some may believe that individuals fitting the above profile are hard to find, and they’d be right. However, with the aforementioned model, the workforce will change from lots of people with micro-skills or product-based pseudo skills, to fewer people who are just fast learners and whose core areas complement each other. If you consider that a team of 300 could be reduced to 6 – the game has changed beyond recognition.

Quoting a recent article: “The most in demand cyber security certifications were Security+, Ethical Hacking, Network+, CISSP, and A+. The most in demand skills were Ethical Hacking, Computer Forensics, CISSP, Malware Analysis, and Advanced Penetration Testing”. There are more problems with this than can be described in a reasonable time frame, but none of these should ever be called ‘skills’. Of these, Penetration Testing (leave out the ‘ethics’ qualifier because it adds a distasteful layer of judgment on top of the law) is the only one that should be called a specialisation in its own right.

And yes, Governance, Risk and Control (GRC) is an area that needs addressing, but this must be the role of the Information Security Manager. There is a connection between Information Security Manage-ment and Information Security Manage-er. Some organisations have separate GRC functions, the UK public sector usually has dedicated “assurance” functions, and as I’ve seen with some law firms, they are separated from the rest of security and IT. Decision making on risk acceptance or mitigation, and areas such as Information Classification, MUST have an IT input, and this is the role of the Information Security Manager. There must be one holistic security team consisting of a few individuals and one Information Security Manager.

In security we should not be leaving the impression that one can leave higher education, take a course in forensics, get accreditation, and then go and get a job in forensics. This is not bridging the security skills gap – it’s adding security costs with scant return. If you know something about forensics (usually this will be seen as ‘Encase‘ by the uninitiated) but don’t even have the IT background, let alone the security background, you will not know where to look in an investigation, or have a picture of risk. You will not have an inkling of how systems are compromised or the macro-techniques used by malware authors. So you may know how to use Encase and take an integral disk image for example, but that will be the limit of your contribution. Doesn’t sound like a particularly rewarding way to spend 200 business days per year? You’d be right.

Sticking with the forensics theme: an Analyst with the right mindset can contribute effectively in an incident investigation from day one. There are some brief aspects of incident response for them to consider, but it is not advisable to view forensics/incident response as a deep area. We can call it a specialisation, just as an involuntary action such as breathing is a specialisation, but if we do, we are saying that it takes more than one person to change a light bulb.

Incident response from the organisational / Incident Response Plan (IRP) formation point of view is a one-day training course or a few hours of reading. The tech aspects are 99% not distinct from the core areas of IT and network security. This is not a specialisation.

Other areas such as DLP, Threat Intelligence, SIEM, Cryptography and Key Management – these can be easily adopted by the right security minds. And with regard to security products – it should be seen that security professionals are picking up new tools on the fly and don’t need 2-week training courses that cost $4000. Some of the tools in the VM and proxy space are GUIs for older open source efforts such as Nessus, OpenVAS, and Squid, with which they will be well-versed, and if they’re not, it will take an hour to pick up the essentials.

There’s been a lot of talk of Operating Systems (OS) thus far. Operating Systems are not ‘a thing from 1998’. Take an old idea that has been labelled ‘modern’ as an example: ok, let’s go with ‘Cloud’. Clouds have operating systems. VMs deployed to clouds have operating systems. When we deploy a critical service to a cloud, we cannot ignore the OS even if it’s a PaaS deployment. So in security we need people who can view an OS in the same way that a hacker views an OS – we need to think about kill chains and local privilege elevations. The Threat and Vulnerability Management (TVM) challenge does not disappear just because you have PaaS’d everything. Moreover, if you have PaaS’d everything, you have immediately lost the TVM battle. As Beaker famously said in his cloud presentation – “Platforms Bitches”. Popular OS like Windows, *nix/Linux, and popular applications such as Oracle Database are going to be around for some time yet, and it’s the OS where the front lines are drawn.

Also, a common misconception that does not work: a secops/network engineer going straight into security with no evidence of interest in other areas. ‘Secops’ is not good preparation for a security career, mainly because secops is sort of purgatory. Just as “there is no Dana, only Zuul“, so “there is no secops, only ops”. There is only a security element to these roles because the role covers operational processes with security products. That is anti-security.

All Analyst roles should have an element of penetration testing and appsec, and when I say penetration testing, I do mean unrestricted testing as in an actual simulation. That means no restrictions on exploit usage or source address – because attackers do not have such restrictions. Why spend on this type of testing if it’s not an actual simulation?

Usage of Cisco Discovery Protocol (CDP) offers a good example of how a lack of penetration testing experience can impede a security team. If security is being done even marginally professionally in an organisation, there will exist a security standard for Cisco network devices that mandates the disabling of CDP.  But once asked to disable CDP, network ops teams will want justification. Any experienced penetration tester knows the value of intelligence in expediting the attack effort and CDP is a relative gold mine of intelligence that is blasted multicast around networks. It can, and often does, reveal the identity and IP address of a core switch. But without the testing experience or knowledge of how attacks actually go down, the point will be lost, and the confidence missing from the advisory.
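If you want to see the CDP point for yourself, a rough sketch follows using scapy (assuming scapy is installed and you have packet-capture privileges on a segment where CDP is enabled). CDP frames are sent to the multicast MAC 01:00:0c:cc:cc:cc, typically every 60 seconds, so passive listening is enough.

```python
# Rough sketch: passively capture CDP announcements on the local segment.
# Requires root/capture privileges and scapy; run on a port where CDP is enabled.
from scapy.all import sniff

def show(pkt):
    # A fuller version would decode the CDP TLVs (device ID, platform, management address);
    # even the raw summary confirms the switch is announcing itself to everyone on the segment.
    print(pkt.summary())

# CDP (and a few other Cisco protocols) use the multicast destination MAC 01:00:0c:cc:cc:cc.
sniff(filter="ether dst 01:00:0c:cc:cc:cc", prn=show, count=3, timeout=120)
```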

The points i’ve just covered are not actually ground-breaking at all. Analysts with a good core background of IT and network security can easily move into any new area that marketeers can dream up.

There is an intuition that Information Security has a connection with Information Technology, if only for the common word in them both (that was ‘Information’ by the way, in case you didn’t get it). However, as Upton Sinclair said “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”.

And please don’t create specialisations for Big Data or Internet of Things…woops, too late.

So, consider a small team of enthusiastic, flexible, fast learners, rather than a large team of people who can be trained at high cost to understand the UI of an application that was designed (in the international language) to be intuitive and easy to learn.

Consider using one person to change a light bulb, and don’t be the butt of future jokes.

Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one, but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with core skill groups whose abilities complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill”, a product, or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP Tippingpoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 a skill, whereas the previous version is another skill? So if a person has experience with 7.5, they are not qualified to apply for a shop where the previous version is used? Ok, this is taking it to the extreme, but I dare say there have been cases where this analogy is applicable.

How long does it take a person to get “skilled up” with HP Arcsight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again, is Symantec CCS a skill? No – it’s a product. It’s not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell before, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment, I need to supply the tool with the IP addresses of targets and I need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And I don’t want to configure the same test every time I run it, so maybe this “templates” tab might be able to help me? Do I need a $4000 2-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A product such as a Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI. It has been tailored to be intuitive to use. It’s the thin layer on top of the Vulnerability Management solution. The solution itself is much bigger than this. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help too much with the bigger picture.

Vulnerability management has been around for years. Then along came commercial products, which basically just slapped a GUI on processes and practices that had existed for 20 years or more, after which the jobs market decided to call the product the solution. The product is the skill now, whereas it’s really vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself but it also is a skill that should be built-in by default: a skill that should be inherent with all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas for large volumes of intellectual capital. These are core technologies. Say a person has 5 years+ experience of working in Unix environments as a system administrator and has shown interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic idea. HP uses “flex connectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders” and so on. But guess what? I understand English, and if I don’t know what the words mean, I can check in an online dictionary. I know what a SIEM is designed to do and I get the data flows and architecture concept. Network aggregation of syslog and Windows Events is not an alien concept to me, and neither are the layers of the TCP/IP stack (a really basic requirement for all Analysts – or it should be). Therefore I can adapt very easily to new tools in this space.

IPS/IDS and firewalls? Well, they’re not even very functional devices. If you have ever set up Snort or iptables you’ll be fine with whatever product is out there. Recently another Consultant and I were asked to configure a Tippingpoint device. We were up and running in 10 minutes. There were a few small items that we needed to check against the product documentation. I have 15 years+ experience in the field but the other guy is new. Nonetheless, he had configured another IPS product before. He was immediately up and running with the product – no problem. Of course, what to configure in the rule base – that is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security – it’s not specific to a product.

I’ve seen some truly crazy job specifications. One I saw was Websense Specialist!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would be filled by a Websense “Olympian” probably. But what sort of job is that? Carpe Diem my friends, Carpe Diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, I don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resource and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire human resource able to cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.

Information Security Careers: The Merits Of Going In-house

Job hunting in information security can be a confusing game. The lack of any standard nomenclature across the sector doesn’t help in this regard. Some of the terms used to describe open positions can be interpreted in wildly different ways. “Architect” is a good example. This term can have a non-technical connotation with some, and a technical connotation with others.

There are plenty of pros who came into security, perhaps via the network penetration testing route, who only ever worked for consultancies that provide services, mainly for businesses such as banks and telcos. The majority of such “external” services are centered around network penetration testing and application testing.

I myself started out in infosec on the consultancy path. My colleagues were whiz kids and some were well known in the field. Some would call them “hackers”, others “ethical” or “white hat” network penetration testers. This article does not cover ethics or pander to some of the verdicts that tend to be passed outside of the law.

Many Analysts and Consultants will face the decision to go in-house at some point in their careers, or remain in a service provider capacity. Others may be in-house and considering the switch to a consultancy. This post hopefully can help the decision making process.

The idea of going in-house and, for example, taking up an Analyst position with a big bank – it usually doesn’t hold much appeal for external consultants. The idea prevails that this type of position is boring or unchallenging. I also had this viewpoint, and it was largely derived from the few glimpses and sound bites I had witnessed behind the veil. However, what I discovered when I took up an analyst position with a large logistics firm was that nothing could be further from the truth. Going in-house can benefit one’s career immensely and open the eyes to the real challenges in security.

Of course my experiences do not apply across the whole spectrum of in-house security positions. Some actually are boring for technically oriented folk. Different organisations do things in different ways. Some just use their security department for compliance purposes with no attention to detail. However there are also plenty that engage effectively with other teams such as IT operations and development project teams.

As an Analyst in a large, complex environment, the opportunity exists to learn a great deal more about security than one could as an external consultant. An external consultant’s exposure to an organisation’s security challenges will usually only come in the form of a network or application assessment, and even if the testing is conducted thoroughly and over a period of weeks, the view will be extremely limited. The test report is sent to the client, and it’s a common assumption that all of the problems described in the report can be easily addressed. In the vast majority of cases, nothing could be further from the truth. What becomes apparent at a very early stage in one’s life as an in-house Analyst is that very few vulnerabilities can be mitigated easily.

One of the main pillars of a security strategy is Vulnerability Management. The basis of any vulnerability management program is the security standard – the document that depicts how, from a security perspective, computer operating systems, DBMS, network devices, and so on, should be configured. So an Analyst will put together a list of configuration items and compose a security standard. Next they will meet with another team, usually IT operations, in an attempt to actually implement the standard in existing and future platforms. For many, this will be the point where they realize the real nature of the challenges.

Taking an example, the security department at a bank is attempting to introduce a Redhat Enterprise Linux security standard as a live document. How many of the configuration directives can be implemented across the board with an acceptable level of risk in terms of breaking applications or impacting the business in any way? The answer is “not many”. This will come as a surprise for many external consultants. Limiting factors can come from surprising sources. Enlightened IT ops and dev teams can open security’s eyes in this regard and help them to understand how the business really functions.

The whole process of vulnerability management, minus VM product marketeers’ diatribe, is basically detection, then deducing the risk, then taking decisions on how to address the risk (i.e. don’t address the vulnerability and accept the risk, or address/work around the vulnerability and mitigate the risk). But as an external consultant, one will usually only get to hand a client a list of vulnerabilities, and that will be the end of the story. As an in-house Security Analyst, one gets to take the process from start to finish and learn a great deal more along the way.

For a security consultant passing beyond the iron curtain, the best thing that can possibly happen to their career is to find themselves in a situation where they get to interface with the enlightened ones in IT operations, network operations (usually there are a few in net ops who really know their security quite well), and application architects (and that’s where it gets to be really fun).

For the Security Consultant who has just metamorphosed into an in-house Analyst, it may well be the first time in their career that they encounter real business concerns. IT operations teams live in fear of disrupting applications that generate huge revenues per minute. The fear is palpable, and it encourages a kind of professionalism that one may never need to develop as an external consultant. Generally, the in-house Analyst gets to experience in detail how the business translates into applications and then into servers, databases, and data flows. The risks to information assets then seem much more real.

The internal challenge versus the external challenge in security is of course one of protection versus breaking in. Security is full of rock stars who break into badly defended customer networks and then advertise the feat from the rooftops. In between commercial tests and twittering schoolyard insults, the rock stars are preparing their next Black Hat speech with research into the latest exotic sploit technique that will never be used in a live test, because the target can easily be compromised with simple methods.

However, the rock stars get all the attention, and security is all about reversing and fuzzing – or so we hear. But the bigger challenge is not breaking in, it’s protection, and protection is a lot less exotic and sexy than breaking in. Therein lies the main disadvantage of going in-house: it could mean less attention for the gifted Analyst. But for many this won’t be much of an issue, because the internal job is much more challenging and interesting, and it also lights up a CV, especially if the names on it are from banking and telecoms.

How about going full circle? How about 3 years with a service provider, then 5 years in-house, then going back to consulting? Such a consultant is indeed a powerful weapon for consultancies and adds a whole new dimension for service providers (and their portfolio of services can be expanded). In fact such a security professional would be well positioned to start their own consultancy at this stage.

So in conclusion: going in-house can be the best thing a Security Consultant can do with their career. Is going in-house less interesting? Not at all. Does it mean you will get less attention? Perhaps, but you can probably still speak at conferences.

One Infosec Accreditation Program To Bind Them All

May 2013 saw a furious debate ensue after a post by Brian Honan (Is it time to professionalize information security?) suggesting that things need to be improved, followed by some comments to the effect that accreditation should be scrapped completely.

Well, a suggestion doesn’t really do it for me. Even a strong demand doesn’t really do it – in fact we’re still some way short. No – to advocate the strength of current accreditation schemes is ludicrous. But to then say that we don’t need accreditation at all is completely barking mad.

Brian correctly pointed out that “At the moment, there is not much that can be done to prevent anyone from claiming to be an information security expert.” Never was a truer phrase spoken.

Other industry sectors have professional accreditation and it works. Are the stakes higher in areas such as Civil Engineering and Medicine? Well – if practitioners in those fields screw up, it costs lives. True, but how is this different from infosec? Are the stakes really lower when we’re talking about our economic security? Does having adversaries make infosec so different, or more complex?

Infosec is complex – you can bet ISC2’s annual revenue on that. But doesn’t that make security even more deserving of some sort of accreditation scheme that works and generates trust?

I used the word “trust”, and I used it because that’s what we’re ultimately trying to achieve. Our customers are C-levels, other internal departments, end users, home users, and so on. At the moment they don’t trust infosec professionals, and who can blame them? If we liken infosec to medicine, then much of the time – forget about the treatment – we’re misdiagnosing. Infosec is still in the dark ages of drilling holes in heads to cure migraine.

That lack of trust is why, in so many organizations, security has been marginalized as far as possible without actually being vaporized completely. It’s also why security has been reduced to the level of ticks in boxes and “just pass the audit”.

Even if an organization has the best security pros in the world working for them, they can still have their crown jewels sucked out through their up-link in an unauthorized kinda way. Some could take this stance and advocate against accreditation because, ultimately, the best real-world defenses can fail. However, nobody is pretending that the perfect, “say goodbye to warez – train your staff with us” security accreditation scheme can exist. But at the same time we do want to be able to configure detection and cover some critical areas of protection. To say that we don’t need training and/or accreditation in security is to say the world doesn’t need accreditation ever again – no more degrees and PhDs, no more CISSPs, and so on.

We certainly do need some proof of at least base-level competence. There are practices and positions taken by security professionals that are really quite deceptive and result in resources being committed to areas where there is 100% wastage. These poor results will emerge eventually – maybe not tomorrow, but eventually the mess that was swept under the carpet will be discovered. We do need to eliminate these practices.

So what are we trying to achieve with accreditation? The link with IT needs to be re-emphasized. The full details of a proposal are covered in chapter 11 of Security De-engineering, but basically what we need first is to ensure the connection at the Analyst level with IT, mainly because of the information element of information technology and information security (did you notice the common word in IT and IT security? It’s almost as though there might be a connection between them). 80% of information is now held in electronic form, so businesses need expertise to assist them with the protection of that information.

Security is about both business and IT of course. Everybody knows this, even if they can’t admit it. There is an ISMS element that is document- and process-based, which is critical in terms of future-proofing the business and making security practices more resource-efficient. A baseline security standard is a critical document and cannot be left to gather dust on a shelf – it needs to be a “living” document. But the “M” in ISMS stands for Management, and as such it’s an area for…manage-ers. What is quite common is to find a security department of 6 or more Analysts who specialize in ISMS and audits. That does not work.

There has to be a connection with IT, and probably the best way to ensure that is to advocate that a person cannot metamorphose into a Security Analyst until they have served 5 years in IT operations/administration, or as a network engineer, DBA, or developer. Vendor certs such as those from IBM, Microsoft, and Cisco – although heavily criticized – can serve to indicate some IT experience, but the time-served element, with a signed-off testimonial from a referee, is critical.

There can be an entrance exam for life as an Analyst. This exam should cover a number of different bases. Dave Shackleford’s assertion that creative thinkers are needed is hard to argue with. Indeed, what I think is needed is a demonstration of such creativity, and some evidence of coding experience goes a long way towards this.

Flexibility is also critical. Typically, IT ops folk cover one major core technology such as Unix or Windows or Cisco. Infosec needs people who can demonstrate flexibility and answer security questions in relation to two or more core technologies. As an Analyst, they can specialize in two major platforms plus an area such as application security, but a broad cross-technology base is critical. Each member of a team can have a specialization, but the members’ knowledge should complement each other so that, collectively, the full spectrum of business security concerns is covered.

There can be specializations, but also proportional rewards for Analysts who can demonstrate competence in an increasing number of areas of specialization. There is such a thing as a broad-based, experienced Security Analyst, and such a person is the best candidate for niche areas such as forensics – as opposed to a candidate who got a forensics cert, learned how to use EnCase, plastered forensics on their CV, and got the job with no other Analyst experience (yes – it does happen).

So what emerges is an approximate model of a “graduation”-based career path. Then, after 5 years served as an Analyst, there can be another exam for graduation into the position of Security Manager or Architect. This exam could be something similar to the BCS’s CISMP or ISACA’s CISA (no – I do not have any affiliations with those organizations and I wasn’t paid to write this).

Nobody ever pretended that an accreditation program can solve all our problems, but we do need base assurances in order for our customers to trust us.

How Much Of A CASE Are You?

This piece is adapted from Chapter 3 of Security De-engineering, titled “Checklists and Standards Evangelists”.

My travels in information security have taken me to 3 different continents and 15 different countries. I have had the pleasure and pain of dealing with information security problems in every industry sector that has existed since the start of the Industrial Revolution (but mostly finance’y/bank’y of course), and I’ve had the misfortune and pleasure to meet a whole variety of species and sub-species of the genus Information Security Professional.

In the good old days of the 90s, it was clear there were some distinctive features that were hard-wired into the modus operandi of the Information Security Professional. This earlier form of life, for want of a better name, I call the “Hacker”, and I will talk about them in my next post.

In the pre-holocene mid to late 90s, the information security professional was still plausibly human, in that they weren’t afraid to display distinguishing characteristics. There was no great drive to “fit in”, to look the same, talk the same, and act the same as all the other information security professionals. There was a class that was information security professional, and at the time, there was only one instance of that class.

Then during the next few years, going into the 2000s, things started to change in response to the needs of ego and other head problems, mostly variants of behaviour born out of insecurity. The need to defend territory, without possession of the necessary intellectual capital to do so, gave birth to a new instance of the class Information Security Professional – the CASE (Checklists and Standards Evangelist). The origin of the name will become clear.

My first engagement in the security world was with a small testing team, staffed mostly from the former Yugoslavia and Soviet Union, in the late 90s. Responding incorrectly to the perceived needs of the market, around 2001/2 there were a couple of rounds of Hacker lay-offs – a common global story at the time. A few weeks after the second batch of lay-offs, there was a regional team event, wherein our Operations Manager (with a strong background in hotel management) opened proceedings with “security is no longer about people with green hair and piercings”. Well, OK, but what was it about then? The post-2000 version of “it” is the focus of this post, and I will cover a very scientific methodology for readers to self-diagnose their level of CASE’dom.

Ok, so here are some of the elements of CASE’dom that are more commonly witnessed. Feel free to run a self-diagnosis, scoring from 0 to 5 for each point, based either on what you actually believe (how closely you agree with the points), or how closely you see yourself, or how closely you can relate to these points based on your experiences in infosec:

  • “Technical” is a four letter word.
  • Anything “technical”, to do with security (firewall configuration, SIEM, VM, IDS/IPS, IDM etc) comes under the remit of IT/Network Operations.
  • Security is not a technical field – it’s nothing to do with IT, it’s purely a business function. Engineers have no place at the table. If a candidate is interviewed for a security position and they use a tech term such as “computer” or “network”, then they clearly have no security experience and at best they should apply for a lowly ops position.
  • You were once a hacker, but you “grew out of it”.
  • Any type of risk assessment methodology can be reduced to a CHECKLIST and recited parrot-fashion, thereby replacing the need for actual expertise and thinking. Cost-of-safeguard versus risk issues are never very complex and can be nailed just by consulting a checklist.
  • Automated vulnerability scanners are a good replacement for manual testing, and therefore manual testers, and by entering target addresses and hitting an enter button, there is no need for any other type of vulnerability assessment, and no need for tech staff in security.
  • There is a standard, universally recognised vocabulary to be used in security which is based on whichever CISSP study guide you read.
  • Are you familiar with this situation: you find yourself in a room with people who talk about the same subject as you, but they use different terms and phrases, and you get angry at them in the belief that your terminology is the correct version?
  • CISSP is everything that was, is, and ever will be. CISSP is the darkness and the light, and the only thing that matters, the alpha and omega. There is a principle: “I am a CISSP, therefore I am”, and if a person does not have CISSP (or it “expired”), then they are not an effective security professional.
  • You are a CEH and therefore a skilled penetration tester.
  • “Best Practice” is a phrase which is ok to use on a regular basis, despite the fact that there is no universally accepted body of knowledge to corroborate the theory that the prescribed practice is the best practice, and business/risk challenges are all very simple to the extent that a fixed solution can be re-used and applied repeatedly to good effect.
  • Ethics is a magnificent weapon to use when one feels the need to defend one’s territory from a person who speaks at, or attends “hacker conferences”. If an analyst has ever used a “hacking tool” in any capacity, then they are not ethical, and subject to negative judgment outside of the law. They are in fact a criminal, regardless of evidence.
  • You look in the mirror and notice that you have a square head and a fixed, stern grimace. At least during work hours, you have no sense of humour and are unable to smile.
  • For a security professional in an in-house situation: it is their job to inform other business units of security standard and policy directives, without assessing risk on a case-by-case basis, and without offering any guidance as to how the directives might be realised. As an example: a dev team must be informed that they MUST use two-factor authentication, regardless of the risk or the additional cost of implementation. Furthermore, it is imperative to remind the dev team that the standards were signed off by the CEO, and generally to spread terror whilst offering no further guidance.
  • You are a security analyst, but your job function is one of “management” – not analysis or assessment or [insert nasty security term]. Your main function is facilitating external audits and/or processing risk exemption forms.
  • Again for in-house situations: silence is golden. The standard response to any inter-department query is defiance. The key element of any security professional’s arsenal is that of silence, neutrality, and generally not contributing anything. This is a standard defence against ignorance. If a security professional can maintain a false air of confidence while ignoring any form of communique, and generally just not contributing anything, then a bright future awaits. The mask that is worn is one of not actually needing to answer, because you’re too important, and time is too valuable.
  • You fill the gap left by the modern security world by adding words like “Evangelist” or “thought leader” to your job title. Subject Matter Expert (SME) is also quite an attractive title. “Senior” can also be used if you have 1 second of experience in the field, or if an MBA warrants such a prefix.
  • Your favourite term is “non-repudiation”, because it has that lovely counter-intuitive twist in its meaning. The term has a decent shelf-life, and can be used in any meeting where management staff are present, regardless of applicability to the subject under discussion.
  • “Security incident” and “security department” both have the word “security” in them. Management notices this common word, so when there’s an incident and ops refuse to get involved, the baton falls to the security department, which has no tools, mental or otherwise, for dealing with incidents. So security analysts live in fear of incidents. This is all easily fixed by hiring folk who both “fit in” with the rest of the team and use words like “forensics” and “incident” on their CV (and they are CISSPs).
  • “Cloud Security” is a new field of security that only came into existence recently, and is an area of huge intellectual capital. If one has a cloud-related professional accreditation, it means they are very, very special and possess powers other mortals can only dream of. No, really. Cloud adoption is not merely a change in architecture with an added emphasis on crypto and legal coverage! It’s way more than that!
  • Unlike Hackers, you have unique “access” to C-level management, because you are mature, and can “communicate effectively”.
  • You applied for a job which was advertised as highly technical as per the agent’s (bless ’em eh) job description that was passed on by HR. On day one you realise a problem. You may never see a command shell prompt ever again.

A maximum score of 110 points (22 points, each scored out of 5) will be seen as very good or very bad by your management team – hopefully the former for your sake, hopefully the latter for the business’s sake.

Somewhere in the upper range of 73 to 110 points is maxed-out CASE. This is as CASE as it gets. I wouldn’t want to advocate a new line of work to anyone really, but it might just be the case that an alternative career would lead to a greater sense of fulfilment and happiness.

There is hope for anyone falling below 73. For example, it’s not too late to go through that [insert core technology] Security Standard, try to understand the technical risks, talk to operations about it, and see it all anew. If “tech” really is something that is against your nature, then you will probably be in the 73 to 110 class. Less than 73 is manageable. Of course, by getting more tech, you could be alienating yourself or upsetting the apple cart. It’s your choice ultimately…
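For anyone who would rather let a machine do the diagnosis, the arithmetic above (22 points, each scored 0 to 5, for a maximum of 110, with 73 as the watershed) can be sketched in a few lines – a throwaway illustration, nothing more:

```python
# Throwaway sketch of the CASE self-diagnosis arithmetic: 22 points,
# each scored 0-5, maximum 110; 73 and above is maxed-out CASE territory.
def diagnose(scores):
    if len(scores) != 22 or any(not 0 <= s <= 5 for s in scores):
        raise ValueError("expected 22 scores, each between 0 and 5")
    total = sum(scores)
    verdict = "maxed-out CASE" if total >= 73 else "manageable - there is hope"
    return total, verdict

total, verdict = diagnose([3] * 22)  # example: a middling 3 on every point
print(f"{total}/110 - {verdict}")
```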

The statement that information security is not actually anything to do with information technology is of course nothing more than pretence, and more and more of our customers are starting to realise this.