Fintechs and Security – Part Three

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience
Threat and Vulnerability Management (TVM) – Other Layers

This article covers the key principles of vulnerability management for cloud, devops, and devsecops, and addresses the challenges faced by fintechs in this space.

The previous post covered TVM from the application security point of view, but what about everything else? Being cloud-based and “dynamic”, even with Kubernetes and the mythical Immutable Architecture, doesn’t mean you don’t have to worry about the security of the operating systems and the many devices in your cloud. The devil loves to hear claims to the effect that devops never SSHs to VM instances. And does SaaS help? Well, that depends on whether SaaS is a good move – more on that later.

Fintechs are focussing on application security, which is good, but not so much on the security of other areas such as containers and IaaS/SaaS VMs, and little thought is ever given to the supply of patches and container images: they need to come from a trusted source – preferably not pulled from the public Internet – and the patches and images themselves need to be checked for integrity.

And in general with vulnerability assessment (VA), we in infosec are still battling a popular misconception – one that after a quarter of a century is still popular – and that is the value, or lack thereof, of unauthenticated scanners such as OpenVAS and Nessus. More on this later.

The Overall Approach

The design process for a TVM capability was covered in Part One. Capabilities are people, process, and technology – not just technology. So the design of TVM is not: stick an OpenVAS VM in a VPC, fill it with target addresses, and send the auto-generated report to ops. Yet that is how many fintechs see the TVM challenge, or they see it as a purely application security show.

So a vulnerability is reported. Is it a false positive? If not, what is the risk, and how should the risk be treated? In order to get a view of risk, security professionals with an attack mindset need to know:

  • the network layout and data flows – think from the point of view of an attacker – for example, if a front end web micro-service is compromised, what can the attacker do from there? Can they install recon tools such as a port scanner or sniffer locally and figure out where the back end database is? This is really about “trust relationships”. That widget that routes connections may in itself seem like a device unworthy of attention, but it routes connections to a database hosting crown jewels… so you can see it’s an important device and its configuration needs some intense scrutiny.
  • the location and sensitivity of critical information assets.
  • the ease and result of an exploit – how easy is it to gain a local shell presence, and then what is the impact?

The points above should ideally be covered as part of threat modelling, carried out before any TVM capability design is drafted.

If the engineer, analyst, or architect has experience in CTF or simulated attack, they are in a good position to speak confidently about risk.

Types of Tool

I covered appsec tools in part two.

There are two types: unauthenticated scanners, and credentialed (authenticated) scanners.

Many years ago I was an analyst running VA scans as part of an APAC regional accreditation service. I was using Nessus mostly, but some other tools also. To help me filter false positives, I set up a local test box with services like Apache, Sendmail, etc, pointed Nessus at the box, then used Ethereal (now Wireshark) to figure out what the scanner was actually doing.

What became abundantly obvious with most services is that the scanner wasn’t actually doing anything. It grabs a service banner and then… nothing. Tumbleweed.

I thought initially there was a problem with my setup but soon eliminated that doubt. There are a few cases where the scanner probes for more information, but those automated efforts are somewhat ineffectual, and in many cases the test that is run, and then the processing of the result, show a lack of understanding of the vulnerability. A false negative is likely to result, or at best a false positive. The scanner sees a text banner response such as “apache 2.2.14”, looks in its database for publicly disclosed vulnerabilities for that version, then barfs it all out as CRITICAL, red colour, etc.
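To illustrate the point, here is a minimal sketch in Python (the target address is a placeholder, and this is not any vendor’s actual engine – but it captures the essence of the unauthenticated “test”):

import re
import socket

def grab_banner(host, port, timeout=5):
    # Connect and read whatever the service volunteers on connect.
    # For many services, this is the full extent of the probing.
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace")

banner = grab_banner("192.0.2.10", 25)  # placeholder target; SMTP greets on connect
match = re.search(r"Sendmail ([\d.]+)", banner)
if match:
    # Look the version string up in a database of public disclosures,
    # then report the lot as CRITICAL, red colour, etc.
    print(f"Sendmail {match.group(1)} - flagging all known CVEs for this version")

No probing of the actual vulnerable code path, no check for a backported patch – just a banner and a database lookup.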

Trying to assess vulnerability of an IaaS VM with unauthenticated VA scanners is like trying to diagnose a problem with your car without ever lifting the hood/bonnet.

So this leads us to credentialed scanners. Unfortunately the main players in the VA space pander to unauthenticated scans. I am not going to name vendors here, but it’s clear the market is poorly served in the area of credentialed scanning.

It’s really very likely that sooner rather than later, accreditation schemes will mandate credentialed scanning. It is slowly but surely becoming a widespread realisation that unauthenticated scanners are limited to the above-mentioned testing methodology.

So overall, you will have a set of Technical Security Standards for different technologies such as Linux, Cisco IOS, Docker, and some others. There are a variety of tools out there that will get part of the job done with the more popular operating systems and databases. But in order to check compliance with your Technical Security Standards, expect to have to bridge the gap with your own scripting. With SSH this is entirely feasible, as sketched below. With Windows it is harder, but look at how Ansible connects to Windows with Python (via WinRM).
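For example, one control from a hypothetical Linux standard – root SSH logins must be disabled – could be checked with a few lines of Python using the paramiko library (a sketch; host, account, and key path are placeholders, and the scan account needs read access to the config):

import paramiko

def check_permit_root_login(host, user, key_file):
    client = paramiko.SSHClient()
    # Pin and verify host keys properly in production rather than auto-adding.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=key_file)
    try:
        # Read the sshd configuration and pull out the PermitRootLogin setting.
        _, stdout, _ = client.exec_command(
            "grep -i '^PermitRootLogin' /etc/ssh/sshd_config")
        setting = stdout.read().decode().strip().lower()
        return setting.endswith("no")  # compliant only if explicitly 'no'
    finally:
        client.close()

if not check_permit_root_login("192.0.2.20", "scanner", "/path/to/key"):
    print("FAIL: PermitRootLogin is not 'no' - raise for risk assessment")

Multiply this by each control in the standard and you have the gap-bridging script; scheduling and reporting are the easy parts.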

Asset Management

Before you can assess for vulnerability, you need to know what your targets are. Thankfully Cloud comes with fewer technical barriers here. Of course the same political barriers exist as in the on-premise case, but the on-premise case presents many technical barriers in larger organisations.

Google Cloud has a built-in asset inventory feature, whereas with AWS, each service (e.g. Amazon EC2, Amazon S3) has its own set of API calls, and each Region is independent. AWS Config is highly useful here.
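A minimal discovery sketch with boto3 (assuming read-only EC2 credentials are already configured) shows the per-Region legwork:

import boto3

# Each Region's API is independent, so enumerate Regions explicitly.
regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

inventory = []
for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append((region, instance["InstanceId"],
                                  instance.get("PrivateIpAddress", "n/a")))

# The result can seed VA scanner target groups; repeat per service,
# or lean on AWS Config to do the aggregation for you.
for region, instance_id, ip in inventory:
    print(region, instance_id, ip)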

SaaS

I covered this issue in more detail in a previous post.

Remember the old times of on-premise? Admins were quite busy managing patches and other aspects of operating systems. There were not many cases where a server went without admin access for more than a few weeks. There were incompatibilities, and patch installs often came with some banana skins around dependencies.

The idea with SaaS is you hand over your operating systems to the CSP and hope for the best. So no access to SMB, RDP, or SSH. You have no visibility of patches that were installed, or not (!), and you have no idea which OS services are enabled or not. If you ask your friendly CSP for more information here, you will not get a reply, and if you do, they will remind you that you handed over your 50-million-lines-of-source-code OSes to them.

Here’s an example – one variant of the Conficker worm used the Windows ‘at’ scheduling service to keep itself prevalent. Now cloud providers don’t know if their customers need this service or not, so they err on the side of danger and assume that they do. They will leave it enabled to start at VM boot up.

Note also that SaaS instances will be invisible to credentialed VA scanners. The tool won’t be able to connect to SSH/RDP.

I am not suggesting for a moment that SaaS is bad. The cost benefits are clear. But when you moved to cloud, you saved on managing physical data centers. Perhaps consider that also saving on management of operating systems may be taking it too far.

Patching

Don’t forget patching, and look at how you are collecting and distributing patches. I’ve seen some architectures where the patching path is the attack vector that presents the highest danger, and there have been cases where malicious code was introduced as a result of poor patching practices.

The patches need to come from a trusted source – this is where DNSSEC can play a part, but be aware of its limitations – e.g. update.microsoft.com does not present a ‘DNSKEY’ Resource Record. Vendors sometimes provide a checksum or a PGP signature instead.

Some vendors do not provide any patch integrity checksums at all and just offer a tarball to download. This is far from ideal, and a workaround will be critical in most cases.
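Where a checksum is published, verifying it before anything lands in your patch repository is a few lines of scripting (a sketch; the file name and expected digest are placeholders, and the expected value should be fetched over a separate, authenticated channel – not from the same place as the tarball):

import hashlib

def sha256_of(path, chunk_size=65536):
    # Hash the file in chunks so large patch bundles don't exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "aa0dc96f..."  # placeholder - the vendor-published SHA-256
if sha256_of("patch-bundle.tar.gz") != EXPECTED:
    raise SystemExit("Integrity check failed - do not distribute this patch")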

Red Hat has its Satellite offering, which will meet most organisations’ requirements.

For cloud, the best approach will usually be to ingress patches to a management VPC/VNet, from which all instances (usually even across VPCs of differing code maturity levels) can pull.

Delta Testing

Scanning critical networks for changes in advertised listening services is definitely a good idea – if not for detecting hacker shells, then for picking up on unauthorised changes. There is no feasible means to do this manually with nmap, or any other port scanner – the problem is that time-outs will be flagged as deltas. Commercial offerings are cheap, allow tracking over long histories, produce far fewer false positives, and let you create your own groups of addresses.
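To see the problem, here is what the naive DIY approach looks like (a sketch; the address range and filenames are placeholders):

import subprocess
import xml.etree.ElementTree as ET

def open_ports(targets, outfile):
    # -oX gives XML output. A host that merely times out drops silently
    # out of the results - which a naive diff then reports as a change.
    subprocess.run(["nmap", "-oX", outfile, targets],
                   check=True, stdout=subprocess.DEVNULL)
    ports = set()
    for host in ET.parse(outfile).getroot().iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            if port.find("state").get("state") == "open":
                ports.add((addr, port.get("portid")))
    return ports

baseline = open_ports("10.0.0.0/24", "baseline.xml")  # placeholder range
current = open_ports("10.0.0.0/24", "current.xml")
print("new listeners:", current - baseline)
print("disappeared (or just timed out?):", baseline - current)

The last line is the killer – without retry logic and state tracking, every transient time-out shows up as a delta, which is exactly what the commercial tools solve.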

Penetration Testing

There’s ideal state, which for most orgs is going to be something like mature vulnerability management processes (that is: vulnerability assessment –> deduce risk –> treat risk –> repeat), and the red team pen test looks for anything you may have missed. Ideally, internal sec teams need to know pretty much everything about their network – every nook and cranny, every switch and firewall config – and then the pen test perhaps tells them things they didn’t already know.

Without these VM processes, you can still pen test, but the test will go something like this: you find 40 of the 1000 holes in the sieve. And it’s worse than that, because those 40 holes will be back in 2 years.

There can be other circumstances where a pen test by an independent 3rd party makes sense:

  • Compliance requirement.
  • It’s better than nothing at all – i.e. where you’re not even doing VA scans, let alone credentialed scans.

Wrap-up

  • It’s far from all about application security. This area was covered in part two.
  • Design a TVM capability (people, process, technology); don’t just acquire a technology (Qualys, Rapid7, Tenable SC, etc), fill it with targets, and call it done.
  • Use your VA data to formulate risk, then decide how to treat the risk. Repeat. Note that CVSS ratings are not particularly useful here. You need to ascertain risk for your environment, not some theoretical environment.
  • Credentialed scanning is the only solution worth considering, and indeed it’s highly likely that compliance schemes will soon start to mandate credentialed scanning.
  • Use a network delta tester to pick up on hacker shells and unauthorised changes in network services and firewalls.
  • Being dynamic with Kubernetes and microservices has not yet killed your platform risk or the OS in general.
  • SaaS may be a step too far for many, in terms of how much you can outsource.
  • When you SaaS’ify a service, you hand over the OS to a CSP, and also remove it from the scope of your credentialed VA scanning.
  • Penetration testing has a well-defined place in security, which isn’t supposed to be one where it is used to inform security teams about their network! Think compliance, and what ideal state looks like here.


Fintechs and Security – Part Two

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Threat and Vulnerability Management (TVM) – Application Security

This part covers some high-level guiding points related to the design of the application security side of TVM (Threat and Vulnerability Management), and the more common pitfalls that plague lots of organisations, not just fintechs. I won’t be covering different tools in the SAST or DAST space apart from one known-good. There are some decent SAST tools out there but none really stand out. The market is ever-changing. When I ask vendors to explain what they mean by [new acronym], what usually results is nothing, or a blast of obfuscation. So I’m not here to talk about specific vendor offerings, especially as the SAST challenge is so hard to get even close to right.

With vulnerability management in general, ${VENDOR} has succeeded in fouling the waters by claiming to be able to automate vulnerability management. This is nonsense. Vulnerability assessment can to some limited degree be automated with decent results, but vulnerability management cannot be automated.

The vulnerability management cycle has also been made more complicated by GRC folk who will present a diagram representing a cycle with 100 steps, when really it’s just: assess –> deduce risk –> treat risk –> GOTO 1. The process is endless, and in the beginning it will be painful, but if handled without redundant theory – acronyms-for-the-sake-of-acronyms for concepts that already have lots of acronyms, rebadging older concepts with new names to make them seem revolutionary, or other common obfuscation techniques – it can be integrated as an operational process fairly quickly.

The Dawn Of Application Security

If you go back to the heady days of the late 90s, application security was a thing, it just wasn’t called “application security”. It was called penetration testing. Around the early 2000s, firewall configurations improved to the extent that in a pen test, you would only “see” port 80 and/or 443 exposing a web service on Apache, Internet Information Server, or iPlanet (those were the days – buffer overflow nirvana). So with other attack channels being closed from the perimeter perspective, more scrutiny was given to web-based services.

Attackers realised you could subvert user input by intercepting it with a proxy, modifying some fields, perhaps injecting some SQL or HTML, and seeing output that you wouldn’t expect to see as part of the business goals of the online service.

At this point the “application security” world was formed and vulnerabilities were classified and given new names. The OWASP Top Ten was born, and the world has never been the same since.

SAST/DAST

More acronyms have been invented by ${VENDOR} since the early pre-holocene days of appsec, supposedly representing “brand new” concepts such as SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing), which are the new equivalents of white box and black box testing respectively. The basic difference is about access to the source code. SAST is source code testing, while DAST is an approach that involves testing for OWASP-type vulnerabilities while the software is running and accepting client connection requests.

The SAST scene is one that has been adopted by fintechs in more recent times. If you go back 15 years, you would struggle to find any real commercial interest in doing SAST – so if anyone ever tells you they have “20” or even “10” years of SAST experience, suggest they improve their creativity skills. The general feeling, not unjustified, was that for a large, complex application, assessing thousands of lines of source code at the cost of a vital organ per day couldn’t be justified.

SAST is more of a common requirement these days. Why is that? The rise of fintechs, whose business is solely about generation of applications, is one side of it, and fintechs can (and do) go bust if they suffer a breach. Also – ${VENDOR}s have responded to the changing Appsec landscape by offering “solutions”. To be fair, the offerings ARE better than 10 years ago, but it wouldn’t take much to better those Hello World scripts. No but seriously, SAST assessment tools are raved about by Gartner and other independent sources, and they ARE better than offerings from the Victorian era, but only in certain refined scenarios and with certain programming languages.

If it were possible to comprehensively assess lots of source code for vulnerability and get accurate results, then theoretically DAST would be harder to justify as a business undertaking. But as it is, SAST + DAST, despite the extensive resources required to do this effectively, can be justified in some cases. In other cases it can be perfectly fine to just go with DAST. It’s unlikely to ever be OK to just go with SAST, because of the scale of the task with complex apps.

Another point here – I see some fintechs using more than one SAST tool, from different vendors. There’s usually not much to gain from this. Some tools are better with some programming languages than others, but there is nothing cast in stone or any kind of majority view here. The costs of going with multiple vendors are likely going to be harder and harder to justify as time goes on.

Does Automated Vulnerability Assessment Help?

The problem of appsec is still too complex for decent returns from automation. Anyone who has ever done any manual testing for issues such as XSS knows the myriad ways in which such issues can manifest. The blackbox/blind/DAST scene is still not much more than Burp and Dirbuster, and even then it’s mostly still manual testing with proxies. Don’t expect to cover all OWASP Top Ten issues for a complex application that presents an admin plus a user interface, even in a two-week engagement with four analysts.
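To see why, consider what a naive automated check for reflected XSS amounts to (a sketch using Python’s requests library; the URL and parameter are placeholders):

import requests

PAYLOAD = '<script>alert(1)</script>'

# Inject into one parameter and look for a verbatim reflection.
resp = requests.get("https://app.example.test/search", params={"q": PAYLOAD})
if PAYLOAD in resp.text:
    print("possible reflected XSS in parameter 'q'")

This misses encoded or partial reflections, DOM-based issues that need a browser, sinks only reachable after authentication or a multi-step workflow, and filter bypasses via event handlers – which is exactly where the manual proxy work comes in.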

My preferred approach is still Fred Flintstone’y, but since the automation just isn’t there yet, maybe it’s the best approach? This needs to happen when an application is still in the conceptual white board architecture design phase, not a fully grown [insert Hipster-given-name], and it goes something like this: white board, application architect – zero in on the areas where data flows involve transactions with untrusted networks or users. Crypto/key management is another area to zoom in on.

Web Application Firewall

The best thing about WAFs is they only allow propagation of the most dangerous attacks. But seriously, WAF can help, and in some respects, given the above-mentioned challenges of automating code testing, you need all the help you can get. But you need to spend time teaching the WAF about your expected URL patterns and tuning it – this can be costly. A “dumb” default-configured WAF can probably catch drive-by type attacks against publicly disclosed vulnerabilities as long as you keep it updated. A lot depends on your risk profile, but note that you don’t need a security engineer to install a WAF and leave it in default config – pretty much anyone can do this. You _do_ need an experienced security engineer or two to properly understand an application and configure a WAF accordingly.

Python and Ruby – Web Application Frameworks

Web application frameworks such as Ruby on Rails (RoR) and Django are in common usage in fintechs, and are, at least in some cases, developed with security in mind, in that they offer developers security features that are on by default. For example, with Django, if you design an HTML form for user input, validation will have been automagically configured on the server side, depending on the model field type. So an email address will be validated both client-side and server-side as an email address. Most OWASP issues are the result of failures to validate user input on the server side.

Note also, though, that with Django you can still disable the HTML escaping of user input with a “| safe” in the template. So it’s dangerous to assume that all user input is sanitised.

In Django Templates you will also see a CSRF token as a hidden form field if you include a Form object in your template.
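A minimal sketch of the behaviour described above (the form class plus a template fragment; names are made up for illustration):

from django import forms

class SignupForm(forms.Form):
    # EmailField gives client-side and server-side validation for free.
    email = forms.EmailField()

def handle_post(post_data):
    form = SignupForm(post_data)
    if not form.is_valid():
        # Server-side validation failed - nothing gets processed.
        return form.errors
    return form.cleaned_data["email"]

And the template fragment – Django auto-escapes variables by default, and {% csrf_token %} emits the hidden anti-CSRF field:

<form method="post">{% csrf_token %}{{ form }}</form>
{{ comment }}         <!-- HTML-escaped by default: safe -->
{{ comment|safe }}    <!-- escaping disabled: dangerous with user input -->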

The point here is – the root of all evil in appsec is missing server-side validation, and much of your server-side validation effort in development will be covered by default if you go with RoR or Django. That is not the end of the story, though, with appsec and Django/RoR apps. Vulnerability of the host OS and applications can be problematic, and it’s far from the case that use of either Django or RoR as a dev framework eliminates the need for DAST/SAST. However, the effort will be significantly reduced compared to the C/Java/PHP cases.

Wrap-up

Overall I don’t want to take too much time bleating about this topic, because the takeaway is clear – you CAN take steps to address application security assessment automation and include the testing as part of your CI/CD pipeline, but don’t expect to catch all vulnerabilities, or even half of what is likely an endless list.

Expect that you will be compromised and plan for it – this is cheaper than spending zillions (e.g. by going with multiple SAST tools, as I’ve seen plenty of times) on solving an unsolvable problem – just don’t let an incident result in a costly breach. This is the purpose of security architecture and engineering: it’s more to deal with the consequences of an initial exploit of an application security fail than to eliminate vulnerability.


Fintechs and Security – Part One

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Recruiting and Interviews

In the prologue of this series, I set the scene for what may come to pass in my attempt to relate my experiences with fintechs, based on what I am hearing on the street and what I’ve seen myself. In this next instalment, I look at how fintechs are approaching the hiring conundrum when it comes to hiring security specialists, and how, based on typical requirements, things could maybe be improved.

The most common fintech setup is one of public cloud (AWS, Azure, GCP, etc). They’re developing, or have developed, software for deployment in cloud, with a mobile/web front end. They use devops tools to deploy code, manage and scale (e.g. Kubernetes), collaborate (Git variants), and manage infrastructure (Ansible, Terraform, etc), and perhaps they do some SAST. Sometimes they even have different Virtual Private Clouds (VPCs) for different levels of code maturity, one for testing, and one for management. And third party connections with APIs are not uncommon.

Common Pitfalls

  • Fintechs adopt the stance: “we don’t need outside help because we have hipsters. They use acronyms and seem quite confident, and they’re telling me they can handle it”. While it’s not impossible that this can work, it’s unlikely that a few devops peeps can give a fintech the help they need – this will become apparent later.
  • Using devops staff to interview security engineers. More on this problem later.
  • Testing security engineers with a list of pre-prepared questions. This is likely to end in tears for the fintech. Security is too wide and deep an area for this approach. Fintechs will be rejecting a lot of good candidates by doing this. Just have a chat! For example, ask the candidate their opinions on the usefulness of VA scanners. The length of the response is as important as its technical accuracy. A long response gives an indication of passion for the field.
  • Getting on the security bandwagon too late (such as when you’re already in production!) leaves you looking at two choices – engage an experienced security hand and ignore their advice, or take their advice and face downtime and massive disruption. Most will choose the first option and run the project at massive business risk.

The Security Challenge

Infosec is important, just as checking to see if cars are approaching before crossing the road is important. And the complexity of infosec mandates architecture. Civil engineering projects use architecture. There’s a good reason for that – which doesn’t need elaborating on.

Whenever you are trying to build something complex with lots of moving parts, architecture is used to reduce the problem down to a manageable size, and to help build good practices in risk management. The end goal is protective monitoring of an infrastructure that is built with requirements for meeting both risk and compliance challenges.

Because of the complexity of the challenge, it’s good to split it into manageable parts. This doesn’t require talking endlessly about frameworks such as SABSA. The following six-capability (people, process, technology) approach is sleek and low-footprint enough for fintechs:

  • Threat and Vulnerability Management (TVM)
  • Logging – not “telemetry” or Threat intelligence, or threat hunting. Just logging. Not even necessarily SIEM.
  • Cryptography and Key Management
  • Identity Management
  • Business Continuity Management
  • Trust (network segmentation, firewalls, proxies).

I will cover these six areas in more detail in the following articles.

The above-mentioned capabilities each have an engineering and architecture component, which very briefly describes the roles of security engineers and architects. A SABSA-based approach without the SABSA theory can work. So an architect takes into account risk (maybe with a threat modelling approach) and compliance goals in a High Level Design (HLD), and generates requirements for the Low Level Design (LLD), which will be compiled by a security engineer. The LLD gives a breakdown of security controls to meet the requirements of the HLD, and how to configure the controls.

Security Engineers and Devops Tools

What happens when a devops peep interviews a security peep? Well – they only have their own frame of reference to go by. They will of course ask questions about devops tools. How useful is this approach? Not very. Is this a good test of a security engineer? Based on the security requirements for fintechs, the answer is clear.

Security engineers can use devops tools, and they do, and it doesn’t take a 2 week training course to learn Ansible. There is no great mystery in Kubernetes. If you hire a security engineer with the right background (see the previous post in this series) they will adapt easily. The word on the street is that Terraform config isn’t the greatest mystery in the world and as long as you know Linux, and can understand what the purpose of the tool is (how it fits in, what is the expected result), the time taken to get productive is one day or less.

The point is: if I’m a security engineer and I need to, for example, set up a cloud SIEM collector, some fintechs will use one Infrastructure as Code (IaC) tool and others another – one will use Chef, another Ansible, and there are other permutations. Is a lack of familiarity with the tool a barrier to progress? No. So why would you test a security engineer’s suitability for a fintech role by asking questions about, e.g., stanzas in Ansible config? You need to ask them questions about the six capabilities I mentioned above – i.e. security questions for a security professional.

Security Engineers and Clouds

Again – what was the transition period from on-premise to cloud? Let’s take an example – I know how networking works on-premise. How does it work in cloud? There is this thing called a firewall on-premise. In Azure it’s called a Network Security Group. In AWS it’s called a …drum roll… Security Group. In Google Cloud it’s called a …firewall. From the web-based admin portal UI, these appear to filter by source and destination addresses and services, just like an actual non-virtual firewall. They can also filter by service account (GCP), or VM tag.
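As an illustration, here’s what one such rule looks like when scripted against AWS with boto3 (a sketch; the group ID and CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Allow HTTPS from one trusted range only - source, service, and the
# group's member instances as destination, like a non-virtual firewall rule.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "198.51.100.0/24", "Description": "office range"}],
    }],
)

Familiar concepts, different API – which is the point about the transition being painless.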

There is another thing called a VPN. And another thing called a Virtual Router. In the world of on-premise, a VPN is a …VPN. A virtual router is a …router. There might be a connection here!

Cloud Service Providers (CSPs) in general don’t re-write IT from the ground up. They still use TCP/IP. They host virtual machines (VMs) instead of real machines, but as VMs have operating systems which security engineers (with the right background) are familiar with, where is the complication here?

The areas that are quite new compared to anything on-premise are those where the CSP has provided some technology for a security capability such as SIEM, secrets management, or Identity Management. But these are usually sub-standard for the purpose they were designed for – this is deliberate – the CSPs want to work with Commercial Off The Shelf (COTS) vendors such as Splunk and Qualys, who will provide an IaaS or SaaS solution.

There is also the subject of different clouds. I see some organisations being fussy about this, e.g. a security engineer who worked a lot with Azure but not AWS is not suitable for a fintech that uses AWS. Apparently. Well, given that the transition from on-premise to cloud was relatively painless, how painful is it to transition from Azure to AWS or …? I was on a project last summer where the fintech used Google Cloud Platform. It was my first date with GCP, but I had worked with AWS and Azure before. Was it a problem? No. Do I have an IQ of 160? Hell no!

The Wrap-up

The problems we see in fintech infosec hiring most likely represent a lack of understanding of how fintechs can best manage risk with a budget that is considerably less than that of a large MNC, for example. But we in security haven’t been particularly helpful for fintechs – the problem is on us.

The security challenge for fintechs is not just about SAST/DAST of their code. The challenge is wider, and can be represented as six security capabilities that need to be designed with an architecture and engineering view. This sounds expensive, but it’s a one-off design process that can be covered in a few weeks. The ongoing security challenge, whereby capabilities are pushed through into the final security operations stage, can be realised with one or two security engineers.

The lack of understanding of requirements in security leads to some poor hiring practices, the most common of which is to interview a security engineer with a devops guru. The fintech will be rejecting lots of good security engineers with this approach.

In so many ways, the growth of small to medium development houses has exposed the weaknesses in the infosec sector more than they were ever exposed with large organisations. The sector’s inability to help fintechs exposes a fundamental lack of skilled personnel, particularly at the strategic/advisory level.


Fintechs and Security – Prologue

  • Prologue – covers the overall challenge at a high level
  • Part One – Recruiting and Interviews
  • Part Two – Threat and Vulnerability Management – Application Security
  • Part Three – Threat and Vulnerability Management – Other Layers
  • Part Four – Logging
  • Part Five – Cryptography and Key Management, and Identity Management
  • Part Six – Trust (network controls, such as firewalls and proxies), and Resilience

Fintechs and Security – A Match Made In Heaven?

Well, no. Far from it actually. But again, as I’ve been repeating for 20 years now, it’s not on the fintechs. It’s on us in infosec, and infosec has to take responsibility for these problems in order to change. If I were the CTO of a fintech, I would be confused at the array of opinions and advice which vary radically from one expert to another.

But there shouldn’t be such confusion with fintech challenges. Confusion only reigns where there’s FUD. FUD manifests itself in the form of over-lengthy coverage and excessive focus on “controls” (the archetypal shopping list of controls to be applied regardless of risk – expensive), GRC, “hacking”, “[red/blue/purple/yellow/magenta/teal/slate grey] team”, and “appsec”.

Really what’s needed is something like this (in order):

  • Threat modelling lite – a one-off, reviewed periodically.
  • Architecture lite – a one-off, reviewed periodically.
  • Engineering lite – a one-off, reviewed periodically.
  • Secops lite – the result of the previous 3 – an on-going protective monitoring capability, the first level of monitoring and response for which can be outsourced to a Managed Service Provider.

I will cover these areas in more details in later episodes but what’s needed is, for example, a security design that only provides the answer to “What is the problem? How are we going to solve it?” – so a SIEM capability design for example – not more than 20 pages. No theory. Not even any justifications. And one that can be consumed by non-security folk (i.e. it’s written in the language of business and IT).

Fintechs and SMBs – How Is The Infosec Challenge Unique?

With a lower budget, there is less room for error. Poor security advice can co-exist with business almost seamlessly in the case of larger organisations. Not so with fintechs and Small and Medium Businesses (SMBs). There have been cases of SMBs going under as a result of a security incident, whereas larger businesses don’t even see a hit on their share price.

Look For A Generalist – They Do Exist!

The term “generalist” is seen as a four-letter word in some infosec circles. But it is possible for one or two generalists to cover the needs of a fintech at green-field, and then going forward into operations, it’s not unrealistic to work with one in-house security engineer of the right background, the key ingredients of which are:

  • Spent at least 5 years in IT, in a complex production environment, and outgrew the role.
  • Has flexibility – the old example still applies today – a Unix fan who has tinkered with Windows, i.e. a technology lover. One who has shown interest in networking even though they’re not a network engineer by trade. Or one who sought to improve efficiency by automating a task with shell scripting.
  • Has an attack mindset – without this, how can they evaluate risk or confidently justify a safeguard?

I have seen some crazy specialisations in larger organisations e.g. “Websense Security Engineer”! If fintechs approached security staffing in the same way as larger organisations, they would have more security staff than developers which is of course ridiculous.

So What’s Next?

In “On Hiring For DevSecOps” I covered some common pitfalls in hiring and explained the role of a security engineer and architect.

There are “fallback” or “retreat” positions in larger organisations and fintechs alike, wherein executive decisions are made to reduce the effort down to a less-than-advisable position:

  • Larger organisations: compliance driven strategy as opposed to risk based strategy. Because of a lack of trustworthy security input, execs end up saying “OK i give up, what’s the bottom line of what’s absolutely needed?”
  • Fintechs: Application security. The connection is made between application development and application security – which is quite valid, but the challenge is wider. Again, the only blame I would attribute here is with infosec. Having said that, I noticed this year that “threat modelling” has started to creep into job descriptions for Security Engineers.

So for later episodes – of course the areas to cover in security are wider than appsec, but again there is no great complication or drama or arm-waving:

  • Part One – Hiring and Interviews – I expand on “On Hiring For DevSecOps”. I noticed some disturbing trends in 2019 and I cover these in some more detail.
  • Part Two – Security Architecture and Engineering I – Threat and Vulnerability Management (TVM)
  • Part Three – Security Architecture and Engineering II – Logging (not necessarily SIEM). No Threat Hunting, Telemetry, or Threat “Intelligence”. No. Just logging. This is as sexy as it needs to be. Any more sexy than this should be illegal.
  • Part Four – Security Architecture and Engineering III – Identity Management (IDAM) and Cryptography and Key Management (CKM).
  • Part Five – Security Architecture and Engineering IV – Trust (network trust boundary controls – e.g. firewalls and forward proxies), and Business Resilience Management (BRM).

I will try and get the first episode on hiring and interviewing out before 2020 hits us, but I can’t make any promises!


On Hiring For DevSecOps

Based on personal experience, and second hand reports, there’s still some confusion out there that results in lots of wasted time for job seekers, hiring organisations, and recruitment agents.

There is a want or a need to blame recruiters for any hiring difficulties, but we need to stop that. There are some who try to do the right thing but are limited by a lack of any sector experience. Others have been inspired by Wolf Of Wall Street while trying to sound like Simon Cowell.

Is it on the hiring organisation? Well, it is, but let’s take responsibility for the problem as a sector for a change. Infosec likes to shift responsibility and not take ownership of the problem. We blame CEOs, users, vendors, recruiters, dogs, cats, “Russia“, “China” – anyone but ourselves. Could it be we failed as a sector to raise awareness, both internally and externally?

So What Are Common Understandings Of Security Roles?

After 25+ years we still don’t have universally accepted role descriptions, but at least we can say that some patterns are emerging. Security roles involve looking at risk holistically, and sometimes advising on how to deal with risk:

  • Security Engineers assess risk, and design (and sometimes also implement) controls. BTW some sectors, legal in particular, still struggle with this. Someone who installs security products is in an IT ops role. Someone who upgrades and maintains a firewall is in an IT ops role. The fact that a firewall is a security control doesn’t make these security engineering functions.
  • Security Architects take risk and compliance goals into account when they formulate requirements for engineers.
  • Security Analysts are usually level 2 SOC analysts, who make risk assessments in response to an alert or vulnerability, and act accordingly.

This subject evokes as much emotion as CISSP. There are lots of opinions out there. We owe it to ourselves to be objective. There are plenty of sources of information on these role definitions.

No Aspect Of Risk Assessment != Security. This is Devops.

If there is no aspect of risk involved with a role, you shouldn’t be looking for a security professional. You are looking for DEVOPS peeps. Not security peeps.

If you want a resource to install and configure tools in cloud – that is DEVOPS. It is not Devsecops. It is not Security Engineering or Architecture. It is not Landscape Architecture or Accounting. It is not Professional Dog Walker. It is DEVOPS. And you should hire a DEVOPS person. If you want a resource to install and configure appsec tools for CI/CD – that is DEVOPS. If you want a resource to advise on or address findings from appsec tools, that is a Security Analyst in the first case, DEVSECOPS in the second case. In the second case you can hire a security bod with coding experience – they do exist.

Ok Then So What Does A DevSecOps Beast Look Like?

DevSecOps peeps have an attack mindset from their time served in appsec/pen testing, and are able to take on board the holistic view of risk across multiple technologies. They are also coders, and can easily adapt to and learn multiple different devops tools. This is not a role for newly graduated peeps.

Doing Security With Non-Security Professionals Is At Best Highly Expensive

Another important point: what usually happens because of the skills gap in infosec:

  • Cloud: devops fills the gap.
  • On-premise: Network Engineers fill the gap.

Why doesn’t this work? I’ve met lots of folk who wear the aforementioned badges. Lots of them understand what security controls are for. Lots of them understand what XSS is. But what none of them understand is risk. That only comes from having an attack mindset. The result will usually be overspend – every security control ever conceived by humans will be deployed, while the infrastructure is still full of holes (e.g. a default-install IDS and WAF is generally fairly useless and comes with a high price tag).

Vulnerability assessment is heavily impacted by not engaging security peeps. Devops peeps can deploy code testing tools and interpret the output. But a lack of a holistic view or an attack mindset will result in either no response to a vulnerability, or an excessive response. Basically, the Threat and Vulnerability Management capability is broken under these circumstances – a sadly very common scenario.

SIEM/Logging is heavily impacted – what will happen is either nothing (default logging – “we have Stackdriver, we’re ok”), or a SIEM tool will be provisioned which becomes a black hole for events and also budgets. All possible events are configured from every log source. Not so great. No custom use cases will be developed. The capability will cost zillions while also not alerting when something bad is going down.

Identity Management is not just deploying a ForgeRock (please know what you’re getting into with this – it’s a fork of Sun Microsystems/Oracle’s identity management show) or an Azure AD, and that’s it, job done. If you just deploy this with no thought for the problem you’re trying to solve in identity management, you will be fired.

One of the classic risk problems that emerges when no security input is taken: “there is no personally identifiable information in development Virtual Private Clouds, so there is no need for security controls”. Well – there is intelligence an attacker would love, such as the database schema. And don’t you want your code to be safe and available?

You see a pattern here. It’s all or nothing. Either of which ends up being very expensive or worse. But actually come to think of it, expensive is the goal in some cases. Hold that thought maybe.

A Final Word

So – if the word risk doesn’t appear anywhere in the job description, it is nothing to do with security. You are looking for devops peeps in this case. And – security is an important consideration for cloud migrations.


Prevalent DNS Attacks – is DNSSEC The Answer?

Recently the venerable Brian Krebs covered a mass-DNS hijacking attack wherein suspected Iranian attackers intercepted highly sensitive traffic from public and private organisations. Over the course of the last decade, DNS issues such as cache poisoning and response/request hijacking have caused financial headaches for many organisations.

Wired does occasionally dip into the world of infosec when there’s something major to cover, as they did here, and Ars Technica published an article in January this year that quotes warnings about DNS issues from Federal authorities and private researchers. Interestingly, DNSSEC isn’t covered in either of these.

The eggheads behind the Domain Name System Security Extensions (obvious really – you could have worked that out from the use of ‘DNSSEC’) are keeping out of the limelight, and it’s unknown exactly how DNSSEC was conceived, although if you like RFCs (and who doesn’t?) there is a strong clue in RFC 3833 – 2004 was a fine year for RFCs.

The idea that responses from DNS servers may be untrustworthy goes way back; indeed the Council of Elrond behind RFC 3833 called out 1993 as the year when discussion on this matter was introduced, but the idea was quashed – the threats were not clearly seen in the early 90s. An even more exploitable issue was the lack of access control within networks, but the concept of private networks with firewalls at choke points was far from widespread.

DNSSEC Summarised

For a well-balanced look at DNSSEC, check Cloudflare’s version. Here’s the headline paragraph, which serves as a decent summary: “DNSSEC creates a secure domain name system by adding cryptographic signatures to existing DNS records. These digital signatures are stored in DNS name servers alongside common record types like A, AAAA, MX, CNAME, etc. By checking its associated signature, you can verify that a requested DNS record comes from its authoritative name server and wasn’t altered en-route, opposed to a fake record injected in a man-in-the-middle attack.”

DNSSEC Gripes

There is no such thing as a “quick look” at a technical coverage of DNSSEC. There is no “bird’s eye view” aside from “it’s used for DNS authentication”. It is complex – so much so that it’s amazing it works at all. It is PKI-like in its complexity, but PKIs do not generally live almost entirely on the public Internet – the place where nothing bad ever happened and everything is always available.

The resources required to make DNSSEC work, with key rotation, are not negligible. A common scenario: architecture designs call out a requirement for authentication of DNS responses in the HLD, then the LLD speaks of DNSSEC. But you have to ask yourself – how do client-side resolvers know what good looks like? If you’re comparing digital signatures, doesn’t that mean the client needs to know what a good signature is? There’s considerable work needed to get, for example, a Windows 10/Server 2012 environment DNSSEC-ready on the client side.

DNSSEC is far from ubiquitous. Indeed – here’s a glaring example of that:


iantibble$ dig update.microsoft.com dnskey


So, maybe I’m missing something, but I’m not seeing any DNSSEC Resource Records here. And that’s bad, especially when threat modelling tells us that in some architectures, controls can be used to mitigate risk with most attack vectors, but if WSUS isn’t able to make a call on whether or not it’s pulling patches from an authentic source, this opens the door for attackers to introduce bad stuff into the network. DNSSEC isn’t going to help in this case.
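The same check is easy to script across a whole estate (a sketch using the dnspython library; the domain list is a placeholder):

import dns.resolver

def has_dnskey(domain):
    # DNSKEY records live at the signed zone's apex; no answer (or no such
    # domain) means no DNSSEC material to validate against.
    try:
        dns.resolver.resolve(domain, "DNSKEY")
        return True
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

for domain in ["update.microsoft.com", "cloudflare.com"]:  # placeholder list
    print(domain, "presents DNSKEY RRs" if has_dnskey(domain) else "no DNSKEY RRs")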

Overall the provision of DNSSEC RRs for .com domains is less than 10%, and there are some interesting stats here that show that the most commonly used Domain Name registrars do not allow users to add DNSSEC records even if they wanted to.

Don’t forget key rotation – DNSSEC is subject to key management. The main problem with cryptography in the business world has been less about brute-forcing keys and exploiting algorithm weaknesses than it has been about key management weaknesses – keys need to be stored, rotated, and transported securely. Here’s an example of an epic fail in this area, in this case with the NSA’s IAD site. The page linked to by that tweet has gone missing.

For an organisation wishing to authenticate DNS responses, DNSSEC really does have to be ubiquitous – and that can be a challenge with mobile/remote workers. In the article linked above from Brian Krebs, the point was made that the two organisations involved are both vocal proponents and adopters of DNSSEC, but quoting from Brian’s article: “On Jan. 2, 2019 — the same day the DNSpionage hackers went after Netnod’s internal email system — they also targeted PCH directly, obtaining SSL certificates from Comodo for two PCH domains that handle internal email for the company. Woodcock said PCH’s reliance on DNSSEC almost completely blocked that attack, but that it managed to snare email credentials for two employees who were traveling at the time. Those employees’ mobile devices were downloading company email via hotel wireless networks that — as a prerequisite for using the wireless service — forced their devices to use the hotel’s DNS servers, not PCH’s DNSSEC-enabled systems.”

Conclusion

Organisations do need to take DNS security more seriously – based on what I’ve seen, most are not even logging DNS queries and answers, and occasionally even OS and app layer logs are AWOL on the servers that handle these requests (and these are typically serving AD to the organisation in an MS Windows world!).

But we do need DNS. The alternative is manually configuring IP addresses in a load-balanced and forward-proxied world where the origin IP address of web services isn’t at all clear. We are really back in pen and paper territory if there’s no DNS. And there’s also no real, planet-earth alternative to DNSSEC.

DNSSEC does actually work as it was intended and it’s a technically sound concept, and as in Brian’s article, it has thwarted or delayed attacks. It comes with the management costs of any key management system, and relies on private and public organisations to DNSSEC-ize themselves (as well as manage their keys).

While I regard myself as an advocate of DNSSEC deployment, it’s clear there are legitimate criticisms of DNSSEC. But we need some way of authenticating the answers we receive from public DNS servers. DNSSEC is a key management system that works in principle.

If the private sector applies enough pressure, we won’t be seeing so many articles about either DNS attacks or DNSSEC, because it will be one of those aspects of engineering that has been addressed and seen as a mandatory aspect of security architecture.
