About itibble@gmail.com

Author of Security De-engineering, CTO at Seven Stones (Indonesia)

Fintechs and Security – Prologue

A Match Made In Heaven?

Well, no. Far from it, actually. But again, as I’ve been repeating for 20 years now, it’s not on the fintechs. It’s on us in infosec, and infosec has to take responsibility for these problems in order to change. If I were the CTO of a fintech, I would be confused by the array of opinions and advice, which vary radically from one expert to another.

But there shouldn’t be such confusion with fintech challenges. Confusion only reigns where there’s FUD. FUD manifests itself in the form of over-lengthy coverage and excessive focus on “controls” (the archetypal shopping list of controls to be applied regardless of risk – expensive), GRC, and “hacking”/[red, blue, purple, yellow, magenta, teal, slate grey] team/“appsec”.

Really what’s needed is something like this (in order):

  • Threat modelling lite – a one-off, reviewed periodically.
  • Architecture lite – a one-off, reviewed periodically.
  • Engineering lite – a one-off, reviewed periodically.
  • Secops lite – the result of the previous 3 – an on-going protective monitoring capability, the first level of monitoring and response for which can be outsourced to a Managed Service Provider.

I will cover these areas in more detail in later episodes, but what’s needed is, for example, a security design that only answers “What is the problem? How are we going to solve it?” – a SIEM capability design, for example – not more than 20 pages. No theory. Not even any justifications. And one that can be consumed by non-security folk (i.e. it’s written in the language of business and IT).

Fintechs and SMBs – How Is The Infosec Challenge Unique?

With a lower budget, there is less room for error. Poor security advice can co-exist with business almost seamlessly in the case of larger organisations. Not so with fintechs and Small and Medium Businesses (SMBs). There have been cases of SMBs going under as a result of a security incident, whereas larger businesses often don’t even see a hit on their share price.

Look For A Generalist – They Do Exist!

The term “generalist” is treated as a four-letter word in some infosec circles. But it is possible for one or two generalists to cover the needs of a fintech at the green-field stage, and then, going forward into operations, it’s not unrealistic to work with one in-house security engineer of the right background, the key ingredients of which are:

  • Spent at least 5 years in IT, in a complex production environment, and outgrew the role.
  • Has flexibility – the old example still applies today: a Unix fan who has tinkered with Windows. In other words, a technology lover. One who has shown interest in networking even though they’re not a network engineer by trade, or who sought to improve efficiency by automating a task with shell scripting.
  • Has an attack mindset – without this, how can they evaluate risk or confidently justify a safeguard?

I have seen some crazy specialisations in larger organisations, e.g. “Websense Security Engineer”! If fintechs approached security staffing in the same way as larger organisations, they would have more security staff than developers, which is of course ridiculous.

So What’s Next?

In “On Hiring For DevSecOps” I covered some common pitfalls in hiring and explained the role of a security engineer and architect.

There are “fallback” or “retreat” positions in larger organisations and fintechs alike, wherein executive decisions are made to reduce the effort down to a less-than-advisable position:

  • Larger organisations: a compliance-driven strategy as opposed to a risk-based strategy. Because of a lack of trustworthy security input, execs end up saying “OK, I give up – what’s the bottom line of what’s absolutely needed?”
  • Fintechs: application security. The connection is made between application development and application security – which is quite valid, but the challenge is wider. Again, the only blame I would attribute here is with infosec. Having said that, I noticed this year that “threat modelling” has started to creep into job descriptions for Security Engineers.

So for later episodes – of course the areas to cover in security are wider than appsec, but again there is no great complication or drama or arm-waving:

  • Part One – Hiring and Interviews – I expand on “On Hiring For DevSecOps”. I noticed some disturbing trends in 2019 and I cover these in more detail.
  • Part Two – Security Architecture and Engineering I – Threat and Vulnerability Management (TVM), logging (I didn’t say Security Information and Event Management (SIEM)), and Cryptography and Key Management.
  • Part Three – Security Architecture and Engineering II – Business Continuity, Identity Management, and Trust Management (firewalls, proxies, etc).

I will try to get the first episode on hiring and interviewing out before 2020 hits us, but I can’t make any promises!


On Hiring For DevSecOps

Based on personal experience, and second hand reports, there’s still some confusion out there that results in lots of wasted time for job seekers, hiring organisations, and recruitment agents.

There is an urge to blame recruiters for any hiring difficulties, but we need to stop that. Some try to do the right thing but are limited by a lack of sector experience. Others have been inspired by The Wolf of Wall Street while trying to sound like Simon Cowell.

It’s on the hiring organisation? Well, it is, but let’s take responsibility for the problem as a sector for a change. Infosec likes to shift responsibility and not take ownership of the problem. We blame CEOs, users, vendors, recruiters, dogs, cats, “Russia”, “China” – anyone but ourselves. Could it be we failed as a sector to raise awareness, both internally and externally?

So What Are Common Understandings Of Security Roles?

After 25+ years we still don’t have universally accepted role descriptions, but at least we can say that some patterns are emerging. Security roles involve looking at risk holistically, and sometimes advising on how to deal with risk:

  • Security Engineers assess risk, and design and sometimes also implement controls. BTW some sectors, legal in particular, still struggle with this: someone who installs security products is in an IT ops role, and someone who upgrades and maintains a firewall is in an IT ops role. The fact that a firewall is a security control doesn’t make either a security engineering function.
  • Security Architects take risk and compliance goals into account when they formulate requirements for engineers.
  • Security Analysts are usually level 2 SOC analysts, who make risk assessments in response to an alert or vulnerability, and act accordingly.

This subject evokes as much emotion as CISSP, and there are lots of opinions out there. We owe it to ourselves to be objective. There are plenty of sources of information on these role definitions.

No Aspect Of Risk Assessment != Security. This Is DevOps.

If there is no aspect of risk involved with a role, you shouldn’t be looking for a security professional. You are looking for DevOps peeps, not security peeps.

If you want a resource to install and configure tools in the cloud – that is DevOps. It is not DevSecOps. It is not Security Engineering or Architecture. It is not Landscape Architecture or Accounting. It is not Professional Dog Walker. It is DevOps, and you should hire a DevOps person. If you want a resource to install and configure appsec tools for CI/CD – that is also DevOps. If you want a resource to advise on findings from appsec tools, that is a Security Analyst; if you want one to address those findings, that is DevSecOps. In the latter case you can hire a security bod with coding experience – they do exist.

Ok Then So What Does A DevSecOps Beast Look Like?

DevSecOps peeps have an attack mindset from time served in appsec/pen testing, and are able to take on board a holistic view of risk across multiple technologies. They are also coders, and can easily adapt to and learn multiple different DevOps tools. This is not a role for newly graduated peeps.

Doing Security With Non-Security Professionals Is At Best Highly Expensive

Another important point – here is what usually happens because of the skills gap in infosec:

  • Cloud: devops fills the gap.
  • On-premise: Network Engineers fill the gap.

Why doesn’t this work? I’ve met lots of folk who wear the aforementioned badges. Lots of them understand what security controls are for. Lots of them understand what XSS is. But what none of them understand is risk – that only comes from having an attack mindset. The result will usually be overspend: every security control ever conceived by humans will be deployed, while the infrastructure remains full of holes (e.g. a default-install IDS and WAF is generally fairly useless and comes with a high price tag).

Vulnerability assessment is heavily impacted by not engaging security peeps. DevOps peeps can deploy code testing tools and interpret the output, but the lack of a holistic view or an attack mindset will result in either no response to a vulnerability, or an excessive response. Basically, the Threat and Vulnerability Management capability is broken under these circumstances – a sadly very common scenario.

SIEM/logging is heavily impacted. What will happen is either nothing (default logging – “we have Stackdriver, we’re ok”), or a SIEM tool will be provisioned which becomes a black hole for events and also for budgets: all possible events are configured from every log source, and no custom use cases are developed. Not so great. The capability will cost zillions while also not alerting when something bad is going down.
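To make “custom use cases” concrete: a use case can be as small as correlating failed logins per source over a time window, rather than shipping every event from every source. The sketch below is a toy, with invented field names and thresholds, of the kind of logic a custom SIEM rule encodes:

```python
from collections import defaultdict

# Toy detection rule: flag any source IP with >= `threshold` failed
# logins inside a sliding `window` (seconds). Event shape, threshold
# and window are invented for illustration only.
def brute_force_sources(events, threshold=5, window=300):
    """events: iterable of (epoch_seconds, source_ip, outcome) tuples."""
    failures = defaultdict(list)
    flagged = set()
    for ts, ip, outcome in sorted(events):
        if outcome != "failure":
            continue
        failures[ip].append(ts)
        # keep only the failures that fall inside the sliding window
        failures[ip] = [t for t in failures[ip] if ts - t <= window]
        if len(failures[ip]) >= threshold:
            flagged.add(ip)
    return flagged

events = [(i, "10.0.0.9", "failure") for i in range(6)]
events.append((3, "10.0.0.7", "failure"))
print(brute_force_sources(events))  # {'10.0.0.9'}
```

The point is that the value lives in the rule, not in the volume of events collected: with no rule, the same six log lines are just noise in the black hole.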

Identity management is not a case of deploying ForgeRock (please know what you’re getting into with this – it’s a fork of Sun Microsystems/Oracle’s identity management show) or Azure AD and that’s it, job done. If you just deploy one of these with no thought for the problem you’re trying to solve in identity management, you will be fired.

One of the classic risk problems that emerges when no security input is taken: “there is no personally identifiable information in development Virtual Private Clouds, so there is no need for security controls”. Well – an intelligence vulnerability such as an exposed database schema – attackers love this. And don’t you want your code to be safe and available?

You see a pattern here. It’s all or nothing, either of which ends up being very expensive or worse. But come to think of it, expensive is the goal in some cases. Hold that thought.

A Final Word

So – if the word risk doesn’t appear anywhere in the job description, it has nothing to do with security; you are looking for DevOps peeps in that case. And – security is an important consideration for cloud migrations.


Exorcising Dark Reading’s Cloud Demons

Dark Reading recently covered what it says are cloud “blind spots”. Really, there is considerable FUD here. Is there anything unique to cloud in the issues called out?

I’m not pro or anti-cloud. It is “just another organisation’s computer” as they say, and whereas this phrase was coined as a warning about cloud, it also serves as a reminder that the differences from on-premise aren’t as great as many would have you believe. In some areas cloud makes things easier, and it’s a chance to go green-field and fix the architecture. Some fairly new concepts such as “immutable architecture” will change security architecture radically, but there is nothing carved in stone that says this can only be implemented in cloud. In case it needs calling out – you can implement microservices in your data centre if microservices are what does it for you!

Migrating to cloud – there’s a lot to talk about regarding infosec, and I can’t attempt to cover it comprehensively here. But here are some points based on what seem to be popular misconceptions:

  • Remember you will never get access to the Cloud Service Provider’s (CSP) hypervisors. Your hardware is in their hands. Check your SLAs and contract terms.
  • SaaS – in many cases it can be bad to hand over your operating systems to CSPs, and not just from a security perspective. In the on-premise world, it was deemed a good business choice to have skilled staff administer complex resources that present many configuration options which can be used and abused for fun and profit. So why does moving to cloud change that?
  • SaaS and SIEM/VA: remember this now means you lose most of your Vulnerability Assessment coverage – and SaaS SIEM is getting more popular. Due to the critical nature of a SIEM manager, with trust relationships across the board, personally I want access to the OS of such a critical device, but that’s just me.

So, picking out the areas covered, briefly…

  • Multi-cloud purchasing – “The problem for security professionals is that security models and controls vary widely across providers”. If a team struggles to move between security models – on-premise to cloud, or from, say, Azure to Google or AWS – chances are they were struggling with on-premise too. Sorry, but there’s no “ignorance” here. A machine is now a VM but it still speaks TCP/IP, and a firewall is now not something in the basement… OK, I’ll leave it there.
  • Hybrid architecture – tracking assets is easier in cloud than on-premise. If they find it hard to track assets in cloud, they certainly find it hard on-premise. The same goes for “monitoring activity”.
  • Likewise with cloud misconfiguration – if they struggle with cloud, they are also finding it hard on-premise.
  • Business Managed IT – not a cloud specific problem.
  • Containers – “new platforms like Kubernetes are introducing new classes of misconfigurations and vulnerabilities to cloud environments faster than security teams can even wrap their arms around how container technology works.” – WHAT!? So does the world hold back on these DevOps initiatives while infosec plays catch-up? There is “DevSecOps”, which is supposed to be the area of security specialised in DevOps. If they also struggle, then what does that say about security? I have to say that on a recent banking project, most of the security team would certainly not know what Docker is. This is not a problem with cloud; it’s a problem with security professionals coming into the field with the wrong background.
  • Dark Data – now you’re just taking the proverbial.
  • Forensics and Threat-Hunting Telemetry – show yourself, Satan! Don’t lurk in the shadows! Ignore the fancy title – this is about logging. “Not only do organizations struggle to get the right information fed from all of their different cloud resources” – this is a logging problem, not a cloud problem. Even SaaS SIEM solutions do not present customers with issues getting hold of log data.

What happened, Dark Reading? You used to be good. Why dost thou vex us so? Cloud is just someone else’s computer, and just someone else’s network – there are a few unique challenges with cloud migrations, but none of them are mentioned here.


“Cybersecurity Is About To Explode” – But in What Way?

I recently had the fortune to stumble across an interesting article: http://thetechnews.com/2019/08/17/cybersecurity-is-about-to-explode-heres-why/

The article was probably aimed at generating revenue for the likes of ISC2 (CISSP exam revenue) and so on, but I am open-minded to the possibility that it was genuinely aimed at helping the sector. It is, however, hopelessly misleading. I would hate to think that this article was the thing that led a budding security wannabe to finally sign up. Certainly a more realistic outlook is needed.

Some comments on some of the points in said article:

“exciting headlines about data breaches” – exciting for whom? The victims?
“organizations have more resources to fight back” – no they don’t. They spend lots but still cannot fight back.
“It’s become big enough that thought leaders, lawyers, and even academics are weighing in” – who are the thought leaders weighing in? If they are leading thought, I would like to know who they are.
“today’s cybercriminals are much more sophisticated than they were twenty years ago”. Do they need to be? I mean, WannaCry exploited a basic firewall config problem. Actually, firewall configs were better 20 years ago than they are today.
“employing the services of ethical hackers” – I’m glad the hackers are ethical. They wouldn’t have the job if they had a criminal record. So what is the ‘ethical’ qualifier for? Does it mean the hackers are “nice”, or…?
“Include the use of new security technology like the blockchain and using psychology to trick, mislead, and confuse hackers before they ever reach sensitive data.” Psychology isn’t a defence method; it’s an attack method. And blockchain – there are no viable blue team use cases.
“313,735 job openings in the cybersecurity field” – all of them are filled if this number is real (unlikely).
“since the need for security experts isn’t likely to drop anytime soon.” See Brexit. It’s dropping now. Today. Elsewhere it’s flat-lining.
“You can take your pick of which industry you want to work in because just about every company needs to be concerned about the safety and security of their networks.” – “needing” to be concerned isn’t the same as being concerned. No – all sectors are still in the basic mode of just getting compliance.
“Industries like healthcare, government, and fintech offer extensive opportunities for those who want to work in cybersecurity” – no, they do not.
“90% of payment companies plan to switch over to blockchain technology by 2020” – can you tell your audience the source of this information?


A Desperate Call For More Effective Information Security Accreditation

CISSP has to be the most covered topic in the world of infosec. Why is that? The discussions are mostly aimed at self-promotion (both by folk condemning the accreditation and by the same in the defensive responses) and at justifying getting the accreditation. How many petabytes are there covering this subject? If you think about it, the sheer volume of commentary on CISSP is proportional to the level of insecurity felt by infosec peeps. It’s a symptom of a sector that is really very ill indeed, of how ineffective CISSP is as an accreditation, and of the frustration felt by people who know we can do better.

We need _something_. We do need some kind of accreditation, and right now CISSP is the only recognised one. But if you design an accreditation that attempts to cover the whole of infosec in one exam, what did you think the result would be? There is no room for argument or discussion on this. It’s time to cut the defensiveness and come clean.

The first stage of solving a problem is acknowledging its existence, and we’re not there yet. There are still thousands in this field who cling to CISSP like a lifebuoy out on the open ocean. There is a direct correlation between the buoy-clingers and the claim that “security is not about IT”… stop that!! You’re not fooling anybody. Nobody believes it! All it does is make the whole sector look even more like a circus to our customers than it already does. The lack of courage to admit the truth here is having a negative impact on society in general.

It seems to me that the “mandatory” label for CISSP in job qualifications is now rare. But CISSP is still alive, and it is better than nothing. Just stop pretending that it’s anything other than a mile wide and an inch deep.

Really we need an entry-level accreditation that tests a baseline of technical skills and the possession of an attack mindset. We can’t attack or defend, or make calls on risk, without an attack mindset. GRC is a thing in security, and it’s a relevant thing – but it doesn’t take up much intellectual space, so it needs to be a small part of the requirements. Level 2 SOC Analysts need to understand risk, the importance of application availability, and the value of electronic information to the business, but this doesn’t require them to go and get a dedicated accreditation. Information Security Manage-ment is really an area for Manage-ers – the clue is in the name.

What are the two biggest challenges in terms of intellectual capital investment? They’re still operating systems (ill-advised PaaS and SaaS initiatives haven’t changed this) and applications. So let’s focus on these two as the biggest chunks of what an infosec team has to cover, and test entry-level skills in these areas.


Musang – Oracle DB Security Testing – v1.8

Musang is a Docker-supplied Python/Django project for authenticated security testing scans of Oracle Database, whether on-premise or in cloud SaaS, PaaS, or IaaS stacks.

The last update of Musang (1.4) was in 2013! So this latest update (1.8) brings Musang into the modern age of old software by introducing support for 12c databases – after all, I’m sure there are lots who upgraded from 9i. Right?

No, but seriously: Oracle Database 12c introduces the concept of Containers and Pluggable Databases (PDBs). It also stores password hashes in a dramatically different way (the mighty Pete Finnigan explains these hashes so I don’t have to). The user doesn’t have to lose sleep over such things – Musang auto-detects database versions and selects the corresponding library of tests. However, the connection string passed to the database has to use the right SID or service name, of course – do you want to connect to a container or a PDB?
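As a hypothetical illustration of that choice (all hostnames and service names below are invented): with Oracle’s Easy Connect syntax, connecting to the container versus a pluggable database comes down to nothing more than the service name at the end of the connect string.

```python
# Toy helper: build an Oracle Easy Connect string. Whether you land in
# the container (CDB) or a pluggable database (PDB) depends purely on
# the service name appended. All names here are invented examples.
def easy_connect(host: str, port: int, service: str) -> str:
    return f"{host}:{port}/{service}"

cdb_dsn = easy_connect("db.example.com", 1521, "ORCLCDB")   # container
pdb_dsn = easy_connect("db.example.com", 1521, "ORCLPDB1")  # pluggable database

print(cdb_dsn)  # db.example.com:1521/ORCLCDB
print(pdb_dsn)  # db.example.com:1521/ORCLPDB1
```

A driver such as cx_Oracle would take a DSN like one of these (plus, for SYS, a SYSDBA mode flag); which instance answers is decided by the service name alone.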

Other major changes centred around Python 2.x going EOL – and with it the shift to Django 2.x.

An account privileges dump (for non-SYS/SYSTEM users) has been added also, and the output links each account to its status (locked, expired, open, etc.) – see below …

A rundown of the major changes…

  • Now covers 12c.
  • For the connection test, when Musang detects 12c it dumps the configuration of containers and pluggable databases – if you connect to the container instance, you see a very different output in some tests – notably the password hashes.
  • To get hashes for SYS and SYSTEM you need to connect to the container as SYS, “as SYSDBA”. Musang now has the classic “as SYSDBA” option.
  • Added a privileges dump for non SYS and SYSTEM DBAs – see above screen dump.
  • Phased out django-dajaxice and replaced those calls with native AJAX – basically Dajaxice is terrible and has not been maintained for years.
  • A few other changes related to 12c, and lots related to the above upgrades.
  • Celery is now 4.2.1.
  • Multiple other changes related to stability and 12c intricacies.


Prevalent DNS Attacks – is DNSSEC The Answer?

Recently the venerable Brian Krebs covered a mass-DNS hijacking attack wherein suspected Iranian attackers intercepted highly sensitive traffic from public and private organisations. Over the course of the last decade, DNS issues such as cache poisoning and response/request hijacking have caused financial headaches for many organisations.

Wired does occasionally dip into the world of infosec when there’s something major to cover, as it did here, and Ars Technica published an article in January this year that quotes warnings about DNS issues from federal authorities and private researchers. Interestingly, DNSSEC isn’t covered in either of these.

The eggheads behind the Domain Name System Security Extensions (obvious really – you could have worked that out from the use of ‘DNSSEC’) are keeping out of the limelight, and it’s unknown exactly how DNSSEC was conceived, although if you like RFCs (and who doesn’t?) there is a strong clue in RFC 3833 – 2004 was a fine year for RFCs.

The idea that responses from DNS servers may be untrustworthy goes way back – indeed, the Council of Elrond behind RFC 3833 called out 1993 as the year the discussion was introduced, but the idea was quashed: the threats were not clearly seen in the early 90s. An even more exploitable issue was the lack of access control within networks, but the concept of private networks with firewalls at choke points was far from widespread.

DNSSEC Summarised

For a well-balanced look at DNSSEC, check Cloudflare’s version. Here’s the headline paragraph, which serves as a decent summary: “DNSSEC creates a secure domain name system by adding cryptographic signatures to existing DNS records. These digital signatures are stored in DNS name servers alongside common record types like A, AAAA, MX, CNAME, etc. By checking its associated signature, you can verify that a requested DNS record comes from its authoritative name server and wasn’t altered en-route, opposed to a fake record injected in a man-in-the-middle attack.”

DNSSEC Gripes

There is no such thing as a “quick look” at a technical coverage of DNSSEC. There is no bird’s-eye view aside from “it’s used for DNS authentication”. It is complex – so much so that it’s amazing it works at all. It is PKI-like in its complexity, but PKIs do not generally live almost entirely on the public Internet – the place where nothing bad ever happened and everything is always available.

The resources required to make DNSSEC work, with key rotation, are not negligible. A common scenario: architecture designs call out a requirement for authentication of DNS responses in the HLD, then the LLD speaks of DNSSEC. But you have to ask yourself – how do client-side resolvers know what good looks like? If you’re comparing digital signatures, doesn’t that mean the client needs to know what a good signature is? There’s considerable work needed to get, for example, a Windows 10/Server 2012 environment DNSSEC-ready: client-side configuration.
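To make the “what does good look like” question concrete, here is a toy, stdlib-only sketch of the DNSSEC chain-of-trust idea: a parent zone publishes a DS record containing a digest of the child zone’s key-signing DNSKEY, so a resolver that has already validated the parent can decide whether a fetched child key is genuine. All names and key bytes below are invented; real DS digests are computed over the owner name plus the DNSKEY RDATA in DNS wire format.

```python
import hashlib

# Toy model of the DNSSEC chain of trust. The parent publishes a DS
# record that is (roughly) a hash of the child's key-signing DNSKEY;
# SHA-256 is digest type 2 in real DS records.
def ds_digest(dnskey_rdata: bytes) -> str:
    return hashlib.sha256(dnskey_rdata).hexdigest()

# The parent ("com.") publishes this DS for the child ("example.com.")
child_ksk = b"example.com. invented-key-signing-key-material"
published_ds = ds_digest(child_ksk)

def child_key_is_trusted(fetched_key: bytes, parent_ds: str) -> bool:
    # Re-derive the digest from the DNSKEY we fetched and compare it
    # with the DS obtained from the already-validated parent zone.
    return ds_digest(fetched_key) == parent_ds

assert child_key_is_trusted(child_ksk, published_ds)
assert not child_key_is_trusted(b"attacker-substituted key", published_ds)
```

The practical upshot is that only one trust anchor (the root key) has to be distributed to clients out-of-band, and distributing and rotating that anchor is exactly the kind of client-side work the paragraph above refers to.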

DNSSEC is far from ubiquitous. Indeed – here’s a glaring example of that:


iantibble$ dig update.microsoft.com dnskey


So, maybe I’m missing something, but I’m not seeing any Resource Records for DNSSEC here. And that’s bad: threat modelling tells us that in some architectures controls can mitigate most attack vectors, but if WSUS isn’t able to make a call on whether it’s pulling patches from an authentic source, the door is open for attackers to introduce bad stuff into the network. DNSSEC isn’t going to help in this case.

Overall, the provision of DNSSEC RRs for .com domains is less than 10%, and there are some interesting stats here showing that the most commonly used domain name registrars do not allow users to add DNSSEC records even if they wanted to.

Don’t forget key rotation – DNSSEC is subject to key management. The main problem with cryptography in the business world has been less about brute-forcing keys and exploiting algorithm weaknesses than it has been about key management weaknesses – keys need to be stored, rotated, and transported securely. Here’s an example of an epic fail in this area, in this case with the NSA’s IAD site. The page linked to by that tweet has gone missing.

For an organisation wishing to authenticate DNS responses, DNSSEC really does have to be ubiquitous – and that can be a challenge with mobile/remote workers. In the article linked above, Brian Krebs makes the point that the two organisations involved are both vocal proponents and adopters of DNSSEC. Quoting from the article: “On Jan. 2, 2019 — the same day the DNSpionage hackers went after Netnod’s internal email system — they also targeted PCH directly, obtaining SSL certificates from Comodo for two PCH domains that handle internal email for the company. Woodcock said PCH’s reliance on DNSSEC almost completely blocked that attack, but that it managed to snare email credentials for two employees who were traveling at the time. Those employees’ mobile devices were downloading company email via hotel wireless networks that — as a prerequisite for using the wireless service — forced their devices to use the hotel’s DNS servers, not PCH’s DNSSEC-enabled systems.”

Conclusion

Organisations do need to take DNS security more seriously – based on what I’ve seen, most are not even logging DNS queries and answers, and occasionally even OS and app layer logs are AWOL on the servers that handle these requests (which are typically serving AD to the organisation in an MS Windows world!).

But we do need DNS. The alternative is manually configuring IP addresses in a load-balanced and forward-proxied world where the origin IP address of web services isn’t at all clear. We are really back in pen-and-paper territory without DNS. And there’s also no real, planet-earth alternative to DNSSEC.

DNSSEC does actually work as intended and it’s a technically sound concept – as in Brian’s article, it has thwarted or delayed attacks. It comes with the management costs of any key management system, and relies on private and public organisations to DNSSEC-ise themselves (as well as manage their keys).

While I regard myself as an advocate of DNSSEC deployment, it’s clear there are legitimate criticisms of it. But we need some way of authenticating the answers we receive from public DNS servers, and DNSSEC is a key management system that works in principle.

If the private sector applies enough pressure, we won’t be seeing so many articles about either DNS attacks or DNSSEC, because it will be one of those aspects of engineering that has been addressed and seen as a mandatory aspect of security architecture.


Infosec in APAC – A Very Summarised View

I spent a total of 16 years working in infosec in APAC – across the region as a whole except for India and mainland China. I was based initially in a pen test/research lab in Thailand with regional customers, and then later spent some time with the Big 4 in Thailand, before moving base to Jakarta for what will probably be my final stint in the region. As well as the aforementioned places, I spent lots of time in Singapore, Taiwan, and HK; less so in Malaysia, and I never worked in Vietnam, Cambodia, Laos, Myanmar, or the Philippines.

I was in APAC for most of the period between 1999 and 2013. My time with the consultancy based in Bangkok (although there was only one client account in Thailand) made up the formative, simulated-attack experience of my career – not a bad place to start. There were brief spells away in the UK and the Czech Republic (the best blue team experience one can hope to find). Overall I was lucky with the places I worked in, and especially the people I worked with – some of whom quit infosec not long after the Great Early-Noughties Infosec Brain Drain.

Appetite for risk is high in APAC – just look at the stats for insurance sales in the region. What results in infosec, even in banking and finance, is exactly the same as in the west: base compliance only. The difference is something like this: western CEOs showed interest and worried about cyber at some point, but when they went looking for answers they didn’t find any, other than buzzwords from CISSPs – result: base compliance, aka let’s just get through the audit. In Asia the CEOs didn’t go looking for answers – it’s just base compliance, do not pass go. But before you pass judgment on this statement – read on.

Where APAC countries were better was in the lack of any pretence around GRC. You will never hear anything along the lines of “security is not about IT” – i.e. there is no community of self-serving non-technical GRC folk spouting acronyms. Western countries blow billions down the dunny on this nonsense.

So both regions have poor security, and both face a significant threat. But if you measure security performance in terms of how much is spent versus the results, there’s a clear winner, and that is APAC: both have poor security, but one spends more for it than the other.


Django and Celery – Two Sites, Single Host

Documenting this here because Celery’s documentation isn’t the best in general, but moreover because I hadn’t seen a write-up for this scenario – which I would imagine is not an uncommon situation.

So here’s the summarised scenario:

The challenge was to have two or more Django sites on one VM without confusing the Celery backend. This requires the creation of a "Queue", "Exchange", and "Routing Key". The Celery documentation gives major clues but doesn't cover Django to any great degree. It does cover some first steps with Django, but nothing about deploying custom Queues in terms of Django python files and what goes where – or at least I couldn't find it.

I’m sure there are many ways of achieving the same result but this is what worked for me…

The application won’t be able to find Celery if you don’t initialise it. The project here is ‘netdelta’. Under the <app> dir create a file, say celery_app.py:

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from kombu import Exchange, Queue

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'netdelta.settings')

app = Celery('nd')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes. The old-style upper-case
# CELERY_* keys are read straight from Django's settings module.
app.config_from_object('django.conf:settings')

# Load task modules from all registered Django app configs.
#app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

app.conf.task_queues = (
    Queue('cooler', Exchange('cooler'), routing_key='cooler'),
)
app.conf.task_default_queue = 'cooler'
app.conf.task_default_exchange_type = 'direct'
app.conf.task_default_routing_key = 'cooler'

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

Which is called when the project fires up, via __init__.py under <app> (‘nd’ in this case):

from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery_app import app as celery_thang

__all__ = ('celery_thang',)

<django root>/<proj>/<proj>/settings.py is where the magic happens, and it turns out to be only a few lines. ‘scheduled_scan’ is the <proj>.tasks.<celery_job_name>:

CELERY_QUEUES = {"cooler": {"exchange": "cooler",
                            "routing_key": "cooler"}}

CELERY_ROUTES = {
    'nd.tasks.scheduled_scan': {'queue': "cooler",
                                'exchange': "cooler",
                                'routing_key': "cooler"},
}

Now do the same for the other site(s).
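For illustration, the second site's settings.py mirrors the first with its own names – the project name ‘othersite’ and queue ‘warmer’ here are hypothetical, the point being that each site gets a distinct queue/exchange/routing key so the two workers never consume each other's jobs:

```python
# Hypothetical second site's settings.py - same pattern, different names
CELERY_QUEUES = {"warmer": {"exchange": "warmer",
                            "routing_key": "warmer"}}

CELERY_ROUTES = {
    'othersite.tasks.scheduled_scan': {'queue': "warmer",
                                       'exchange': "warmer",
                                       'routing_key': "warmer"},
}
```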

Launch a worker thusly …

python manage.py celery worker -Q cooler -n cooler --loglevel=info -B
  • ‘cooler’ is the site name, used here for both the queue (-Q) and the worker name (-n)
  • ‘nd’ is the app name

The final step – your app.task has to use the custom queue you defined. In my case I could afford to set the default queue to my wanted queue because I had need for only one queue. But in multiple-queue scenarios, you will need to specify the queue. I was setting jobs to run on a schedule using the djcelery admin panel that was created by default. The results are database entries – shown here as an ‘INSERT’ statement to illustrate the table structure (note the queue, exchange, and routing key values):

INSERT INTO `djcelery_periodictask` (`id`, `name`, `task`, `args`, 
`kwargs`, `queue`, `exchange`, `routing_key`, `expires`, `enabled`, 
`last_run_at`, `total_run_count`, `date_changed`, `description`, 
`crontab_id`, `interval_id`) VALUES
(2, 'Linode-67257', 'nd.tasks.scheduled_scan', '["Linode"]', '{}', 
'cooler', 'cooler', 'cooler', NULL, 1, 
'2017-10-15 23:40:00', 18, '2017-10-15 23:41:29', '', 2, NULL);
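For completeness, when more than one queue is in play, the task invocation itself can name the queue – e.g. scheduled_scan.apply_async(args=['Linode'], queue='cooler'). The lookup Celery performs against CELERY_ROUTES amounts to something like this sketch (the ‘other_job’ task name is made up for illustration):

```python
# Mimics Celery's route resolution against the CELERY_ROUTES mapping
# from settings.py; 'nd.tasks.other_job' is a hypothetical unrouted task.
CELERY_ROUTES = {
    'nd.tasks.scheduled_scan': {'queue': 'cooler',
                                'exchange': 'cooler',
                                'routing_key': 'cooler'},
}

def resolve_queue(task_name, routes, default='celery'):
    # An explicit route wins; anything unrouted falls back to the
    # default queue ('celery' unless task_default_queue overrides it).
    return routes.get(task_name, {}).get('queue', default)

print(resolve_queue('nd.tasks.scheduled_scan', CELERY_ROUTES))  # cooler
print(resolve_queue('nd.tasks.other_job', CELERY_ROUTES))       # celery
```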


My virtualenv is as below (note I’m using the fork of libnmap that doesn’t use multiprocessing):

>pip list

  • amqp (2.2.2)
  • anyjson (0.3.3)
  • appdirs (1.4.3)
  • billiard (3.5.0.3)
  • celery (4.1.0)
  • Django (1.11.6)
  • django-celery-beat (1.0.1)
  • django-celery-results (1.0.1)
  • django-dajaxice (0.7)
  • html2text (2017.10.4)
  • iptools (0.6.1)
  • kombu (4.1.0)
  • MySQL-python (1.2.5)
  • netaddr (0.7.19)
  • packaging (16.8)
  • pip (9.0.1)
  • pyparsing (2.2.0)
  • python-libnmap (0.7.0)
  • pytz (2017.2)
  • setuptools (36.6.0)
  • six (1.11.0)
  • vine (1.1.4)
  • wheel (0.30.0)


#WannaCry and The Rise and Fall of the Firewall

The now infamous WannaCry ransomware outbreak was the most widespread malware outbreak since the early 2000s. There was a very long gap between the early 2000s “worm” outbreaks (think Sasser, Blaster, etc) and this latest 2017 WannaCry outbreak. The usage of the phrase “worm” was itself widespread, especially as it was included in CISSP exam syllabuses, but then it died out. Now it’s seeing a resurgence that started last weekend – but why? Why is the worm turning for the worm (I know – it’s bad – but it had to go in here somewhere)?

As far as WannaCry goes, there have been some interesting developments over the past few days – contrary to popular belief, it did not affect Windows XP; the most commonly affected version was Windows 7; and according to some experts, the leading suspect in the case is the Lazarus Group, with ties to North Korea.

But this post is not about WannaCry. I’m going to say it: I used WannaCry to get attention (and with this statement I’m already more honest than the numerous others who jumped on the WannaCry bandwagon, including our beloved $VENDOR). But I have been meaning to cover the rise and fall of the firewall for some time now, and this instance of a widespread and damaging worm, one that spreads by exploiting poor firewall configurations, brought it forward by a few months.

A worm is malware that “uses a computer network to spread itself, relying on security failures on the target computer”. If we think of malware delivery and propagation as two different things – lots of malware since 2004 used email (think Phishing) as a delivery mechanism but spread using an exploit once inside a private network. Worms use network propagation to both deliver and spread, and that is the key difference. WannaCry is without doubt a worm. There is no evidence to suggest WannaCry was delivered on the back of successful Phishing attacks – as illustrated by the lack of WannaCry home user victims (who sit behind the protection of NAT’ing home routers). Most of the early WannaCry posts covered Phishing anyway, mostly out of disbelief that Server Message Block ports would ever be exposed to the public Internet.

The infosec sector is really only 20 years old in terms of the widespread adoption of security controls in larger organisations, so we have only just started to have a usable, relatable history. Firewalls are still, in 2017, the security control that delivers the most value for the investment, and they’ve been around since day one. But in the past 20 years I have seen firewall configurations go through a spectacular rise in the early 2000s, to a spectacular fall a decade later.

Late 90s Firewall

If we’re talking late 90s, even with some regional APAC banks, you would see huge swaths of open ports in port scan results. Indeed, a firewall to many late 90s organisations was as in the image to the left.

However – you can ask a firewall what it is, even a “Next Gen” firewall, and it will answer “I’m a firewall, I make decisions on accepting or rejecting packets based on source and destination addresses and services”. Next Gen firewall vendors tout the ability to do layer 7 DPI stuff such as IDS, WAF, etc, but from what I am hearing, many organisations don’t use these features for one reason or another. Firewalls are quite a simple control to understand, and organisations got the whole firewall thing nailed quite early in the game.

When we got to 2002 or so, you would scan a perimeter subnet and only see VPN and HTTP ports. Mind you, egress controls were still quite poor back then, and continue to be lacking to the present day, as is also the case with internal firewalls other than a DMZ (if there are any). 2002 was also the year when application security testing (OWASP type vulnerability testing) took off, and I doubt it would ever have evolved into a specialised area if organisations had not improved their firewalls. Ultimately organisations could improve their firewalls but they still had to expose web services to the planet. As Marcus Ranum said, when discussing the “ultimate firewall”, “You’ll notice there is a large hole sort of in the centre [of the ultimate firewall]. That represents TCP Port 80. Most firewalls have a big hole right about there, and so does mine.”

During testing engagements over the next decade, perimeter firewalls were well configured in the majority of cases. But then we entered an “interesting” period. It started for me around 2012, when I was conducting a vulnerability scan of a major private infrastructure facility in the UK… and “what the…”! RDP and SMB vulnerabilities! So the target organisation served a vital function in national infrastructure, and they exposed databases, SMB, and terminal services ports to the planet. In case there’s any doubt – that’s bad. And since 2012, firewall configs have fallen by the wayside.

WannaCry is delivered and spreads using an SMB vulnerability, much as Blaster and Sasser spread through exposed Windows services all those years ago. If we look at Shodan results for Internet exposure of SMB we find 1.5 million cases. That’s a lot.

So how did we get here? Well, there are no answers born out of questionnaires and research, but I have my suspicions:

  • All the talk of “Next Generation” firewalls and layer 7 has led to organisations taking their eye off the ball when it comes to layer 3 and 4.
  • All the talk of magic $VENDOR snake oil silver bullets in general has led organisations away from the basics. Think APT-Buster ™.
  • All the talk of outsourcing has led some organisations, as Dr Anton Chuvakin said, to outsource thinking.
  • Talk of “distortion” of the perimeter (as in “in this age of mobile workforces, where is our perimeter now?”). Well the perimeter is still the perimeter – the clue is in the name. The difference is now there are several perimeters. But WannaCry has reminded us that the old perimeter is still…yes – a perimeter.
  • There are even some who advocated losing the firewall as a control, but one of the rare success stories for infosec was the subsequent flaming of such opinions. BTW when was that post published? Yes – it was 2012.

So general guidelines:

  • The Internet is an ugly place with lots of BOTs and humans with bad intentions, along with those who don’t intend to be bad but just are (I bet there are lots of private org firewall logs which show connection attempts of WannaCry from other organisations).
  • Block incoming for all ports other than those needed as a strict business requirement. Default-deny is the easiest way to achieve this.
  • Workstations and mobile devices can happily block all incoming connections in most cases.
  • Egress is important – also discussed very eloquently by Dave Piscitello. It’s not all about ingress.
  • Other pitfalls with firewalls involve poor usage of NAT and those pesky network dudes who like to bypass inner DMZ firewalls with dual homing.
  • Watch out for connections from any internal subnet from which human-used devices derive to critical infrastructure such as databases. Those can be blocked in most cases.
  • Don’t focus on WannaCry. Don’t focus on Ransomware. Don’t focus on malware. Focus on Vulnerability Management.

So then perimeter firewall configurations, it seems, go through the same cycles that economies and seasonal temperature variations go through. When will the winter pass for firewall configurations?
