WannaCry and The Rise and Fall of the Firewall

The now infamous WannaCry ransomware outbreak was the most widespread malware outbreak since the early 2000s. There was a very long gap between the early 2000s “worm” outbreaks (think Sasser, Blaster, etc) and this latest 2017 WannaCry outbreak. Usage of the term “worm” was itself widespread, especially as it was included in CISSP exam syllabuses, but then it died out. Now it’s seeing a resurgence that started last weekend – but why? Why is the worm turning for the worm (I know – it’s bad – but it had to go in here somewhere)?

As far as WannaCry goes, there have been some interesting developments over the past few days – contrary to popular belief, it did not affect Windows XP, the most commonly affected version was Windows 7, and according to some experts, the leading suspect in the case is the Lazarus Group, with ties to North Korea.

But this post is not about WannaCry. I’m going to say it: I used WannaCry to get attention (and with this statement I’m already more honest than the numerous others who jumped on the WannaCry bandwagon, including our beloved $VENDOR). But I have been meaning to cover the rise and fall of the firewall for some time now, and this instance of a widespread and damaging worm that spreads by exploiting poor firewall configurations brought it forward by a few months.

A worm is malware that “uses a computer network to spread itself, relying on security failures on the target computer”. Think of malware delivery and propagation as two different things – lots of malware since 2004 has used email (think phishing) as a delivery mechanism and then spread using an exploit once inside a private network. Worms use network propagation to both deliver and spread, and that is the key difference. WannaCry is without doubt a worm. There is no evidence to suggest WannaCry was delivered on the back of successful phishing attacks – as illustrated by the lack of WannaCry home user victims (who sit behind the protection of NAT’ing home routers). Most of the early WannaCry posts covered phishing, mostly out of a refusal to believe that Server Message Block (SMB) ports would ever be exposed to the public Internet.

The infosec sector is really only 20 years old in terms of the widespread adoption of security controls in larger organisations, so we have only just started to have a usable, relatable history in infosec. Firewalls are still, in 2017, the security control that delivers the most value for investment, and they’ve been around since day one. But in the past 20 years I have seen firewall configurations go through a spectacular rise in the early 2000s and a spectacular fall a decade later.

Late 90s Firewall

If we’re talking late 90s, even with some regional APAC banks, you would see huge swaths of open ports in port scan results. Indeed, a firewall to many late 90s organisations was as in the image to the left.

However – you can ask a firewall what it is, even a “Next Gen” firewall, and it will answer “I’m a firewall, I make decisions on accepting or rejecting packets based on source and destination addresses and services”. Next Gen firewall vendors tout the ability of firewalls to do layer 7 DPI stuff such as IDS, WAF, etc, but from what I am hearing, many organisations don’t use these features for one reason or another. Firewalls are quite a simple control to understand, and organisations got the whole firewall thing nailed quite early on in the game.

When we got to 2002 or so, you would scan a perimeter subnet and only see VPN and HTTP ports. Mind you, egress controls were still quite poor back then, and continue to be lacking to the present day, as is also the case with internal firewalls other than a DMZ (if there are any). 2002 was also the year when application security testing (OWASP type vulnerability testing) took off, and I doubt it would ever have evolved into a specialised area if organisations had not improved their firewalls. Ultimately organisations could improve their firewalls but they still had to expose web services to the planet. As Marcus Ranum said, when discussing the “ultimate firewall”, “You’ll notice there is a large hole sort of in the centre [of the ultimate firewall]. That represents TCP Port 80. Most firewalls have a big hole right about there, and so does mine.”

During testing engagements over the next decade, perimeter firewalls were well configured in the majority of cases. But then we entered an “interesting” period. It started for me around 2012. I was conducting a vulnerability scan of a major private infrastructure facility in the UK…and “what the…”! RDP and SMB vulnerabilities! So the target organisation served a vital function in national infrastructure and exposed databases, SMB, and terminal services ports to the planet. In case there’s any doubt – that’s bad. And since 2012, firewall configs have fallen by the wayside.

WannaCry is delivered and spreads using an SMB vulnerability, much as Blaster and Sasser spread via exposed Windows services all those years ago. If we look at Shodan results for Internet exposure of SMB we find roughly 1.5 million cases. That’s a lot.
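The sort of exposure that shows up in Shodan can also be sanity-checked against your own perimeter with nothing more than a TCP connect attempt to port 445 (from outside the network, and with permission). A minimal sketch – the target address below is a documentation-range placeholder, not a real host:

```python
import socket

def smb_exposed(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the SMB port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 203.0.113.10 is a placeholder from the documentation address range.
    target = "203.0.113.10"
    status = "EXPOSED" if smb_exposed(target) else "not reachable"
    print(f"{target}: SMB (445/tcp) {status}")
```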

So how did we get here? Well, there are no answers born out of questionnaires and research, but I have my suspicions:

  • All the talk of “Next Generation” firewalls and layer 7 has led to organisations taking their eye off the ball when it comes to layer 3 and 4.
  • All the talk of magic $VENDOR snake oil silver bullets in general has led organisations away from the basics. Think APT-Buster ™.
  • All the talk of outsourcing has led some organisations, as Dr Anton Chuvakin said, to outsource thinking.
  • Talk of “distortion” of the perimeter (as in “in this age of mobile workforces, where is our perimeter now?”). Well the perimeter is still the perimeter – the clue is in the name. The difference is now there are several perimeters. But WannaCry has reminded us that the old perimeter is still…yes – a perimeter.
  • There are even some who advocated losing the firewall as a control, but one of the rare success stories for infosec was the subsequent flaming of such opinions. BTW when was that post published? Yes – it was 2012.

So general guidelines:

  • The Internet is an ugly place with lots of BOTs and humans with bad intentions, along with those who don’t intend to be bad but just are (I bet there are lots of private org firewall logs which show connection attempts of WannaCry from other organisations).
  • Block incoming traffic on all ports other than those needed as a strict business requirement. Default-deny is the easiest way to achieve this (see the sketch after this list).
  • Workstations and mobile devices can happily block all incoming connections in most cases.
  • Egress is important – also discussed very eloquently by Dave Piscitello. It’s not all about ingress.
  • Other pitfalls with firewalls involve poor usage of NAT and those pesky network dudes who like to bypass inner DMZ firewalls with dual homing.
  • Watch out for connections to critical infrastructure such as databases from any internal subnet that hosts human-used devices. Those can be blocked in most cases.
  • Don’t focus on WannaCry. Don’t focus on Ransomware. Don’t focus on malware. Focus on Vulnerability Management.
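To illustrate the default-deny point above, here is a minimal sketch of an external audit of a perimeter host against an allowlist of ports that have a business justification – anything else that answers is a policy violation. The host address, port list, and allowlist are hypothetical examples, not recommendations:

```python
import socket

# Hypothetical allowlist: only HTTPS and a VPN service are business-justified.
ALLOWED_PORTS = {443, 1194}
# A small sample of commonly abused ports to probe; extend as needed.
PORTS_TO_CHECK = [21, 22, 23, 25, 80, 135, 139, 443, 445, 1194, 1433, 3306, 3389]

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connect to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(host: str) -> list:
    """Return open ports that are not in the allowlist (default-deny violations)."""
    return [p for p in PORTS_TO_CHECK if p not in ALLOWED_PORTS and is_open(host, p)]

if __name__ == "__main__":
    # 198.51.100.20 is a placeholder from the documentation address range.
    violations = audit("198.51.100.20")
    print("Default-deny violations:", violations if violations else "none found")
```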

So perimeter firewall configurations, it seems, go through the same cycles that economies and seasonal temperatures go through. When will the winter pass for firewall configurations?

Skeleton Key – A Worthy Name?

Vulnerabilities have been announced in recent months with scary names like Shellshock, which came after Heartbleed. There was also “Evil Twin” (used to describe a copy-cat wifi rogue AP deployment). This is a new art form, it seems, one which promotes vulnerabilities in a marketing sense. No doubt those vulnerabilities were worthy of attention by organisations, but the initial scare factor was higher than justified by the technical analysis. “Evil Twin” is a genuine concern as a very easy and very effective means of capturing personal data from wifi users, but the others had the potential for impactful exploitation across only a smaller percentage of organisations.

With both Shellshock and Heartbleed there were misleading reports and over-playing of the risk element. Shellshock was initially touted by some as an exploit that completely compromised the target from a remote source, with no authentication challenge! It was far from that. With Skeleton Key, I dare say there will be reports suggesting that immediate remote access by any user to anything under Active Directory can be gained easily. Again – this is not the case. Far from it. But once inside a network, Skeleton Key does as it says – it unlocks everything that uses Active Directory for authentication, for those who know a specific password.

So “Skeleton Key”? Yes, but as with any key, you have to have possession of it first – and this is the tricky part. It is not the case that the doors are all unlocked.

As a very brief summary:

  • Admin rights are first needed to deploy Skeleton Key, but once it is deployed, unfettered access to all devices under Active Directory (AD) is granted.
  • Any AD account can be used, but the password whose NTLM hash was used in the deployment must be known by anyone looking to take advantage of a successful deployment.
  • The malware is not persistent – once a Domain Controller (DC) is rebooted (such as after a patch install) it needs to be re-deployed.
  • IDS/IPS doesn’t help. Detective controls around logging are the only defence currently.

A 12th January report by Dell’s SecureWorks Counter Threat Unit gives some details on a new malware pattern, one that appears to allow complete bypass of Active Directory authentication.

Skeleton Key is not a persistent malware package in that the behaviour seen thus far by researchers is for the code to be resident only temporarily. A restart of a Domain Controller will remove the malicious code from the system. Typically however, critical domain controllers are not rebooted frequently.

The Dell researchers initially observed a Skeleton Key sample named ole64.dll on a compromised network. They then found an older version, msuta64.dll, on a staging system that had previously been compromised by (probably) the same attackers.
ole.dll is another file name used by Skeleton Key. Windows systems include a legitimate ole32.dll file, but it is not related to this malware.

The malware is not compatible with 32-bit Windows versions or with Windows Server versions beginning with Windows Server 2012 (6.2).

Note it is not the case that any user can authenticate as any user in AD environments under Skeleton Key influence. The password corresponding to the NTLM hash configured by the attackers has to be known, but that password can then be used to authenticate under any user account. Normal user access happens in the same way – there is no impact on existing user accounts and passwords.

Impact

To be clear, administrative rights are required for this malware to be introduced in the first place. But once in place, any service that uses Active Directory can be bypassed if it only uses single-factor authentication. Such services as VPN gateways and webmail will be freely accessible.

Most compromises of systems result in a listening service on a higher port, or a connection initiated out to a remote host. Skeleton Key succeeds in removing the controls implemented by a central authentication and user management system, thereby opening a whole network to unauthorised access with one step. In this way Skeleton Key could be seen as a kind of Swiss Army Knife for remote attackers, who could trick users or administrators into installing malicious software, then gain admin rights, then completely bypass Active Directory controls with only the second or third major step in their attack attempt.

In the case of the investigation that unearthed Skeleton Key, a global company headquartered in London was found to be infected with a Remote Access Trojan (RAT), deployed to give the attackers continued access.

Mitigation

Two-factor authentication clearly resolves remote unauthorised connection issues, but at the time of writing this is the only ready-made preventative control.
Detection is possible, but not from the network perspective. IDS/IPS isn’t going to be helpful because the behaviour of Skeleton Key does not involve network-based activity.
A YARA signature is given in the researchers’ write-up of the investigation. Aside from this, knowledge of the malware deployment behaviour can be used to configure Windows auditing, hopefully to improve the odds of detection. More details on the behaviour, particularly the use of psexec.exe, are given in the Dell researchers’ write-up. Other signs can be (a rough illustration follows the list):

• Process arguments that resemble NTLM hashes
• Unexpected process start/stops
• Domain replication issues
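As a rough illustration of the first sign above, an NTLM hash is a 32-character hexadecimal string, so one crude check is to flag any logged process command line containing such a token. The sketch below assumes process-creation events have already been exported to a text or CSV file (the file name is a placeholder); it is a heuristic only and will also match other 32-character hex strings such as MD5 hashes:

```python
import re
import sys

# An NTLM hash is 32 hex characters; flag command lines containing such a token.
NTLM_LIKE = re.compile(r"\b[0-9a-fA-F]{32}\b")

def suspicious_lines(log_path: str):
    """Yield (line_number, line) pairs where an argument resembles an NTLM hash."""
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if NTLM_LIKE.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    # Placeholder: an export of process-creation events from Windows auditing.
    path = sys.argv[1] if len(sys.argv) > 1 else "process_creation_events.csv"
    for lineno, line in suspicious_lines(path):
        print(f"{path}:{lineno}: {line}")
```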

Compared with Heartbleed and Shellshock

There is a similarity with Shellshock in that the initial attack vector isn’t as easy for attackers to deploy as was first publicised, but the effects of a successful first attack step can potentially be devastating. Skeleton Key, though, is more impactful in that an entire Windows domain can be compromised easily. With Shellshock, shell access is gained only on the machine that is compromised, and the privileges of that shell are only those of the process that was exploited.

The main difference is that with Skeleton Key, administrator rights need to be gained on one system in the domain first. No such requirement exists with either Heartbleed or Shellshock. Heartbleed needed no privileges for a successful exploit, but the results of the exploit were unlikely to mean the immediate compromise of the network.

References

The Dell researchers’ detailed write-up:
• http://www.secureworks.com/cyber-threat-intelligence/threats/skeleton-key-malware-analysis/
SC Magazine’s coverage:
• http://www.scmagazine.com/skeleton-key-bypasses-authentication-on-ad-systems/article/392368/

What’s Next For BYOD – 2013 And Beyond

There are security arguments and business case arguments about BYOD. They cover different aspects, and there are petabytes of valid points out there.

The security argument? Microsoft Windows is still the corporate OS of choice and still therefore the main target for malware writers. As a pre-qualifier – there is no bias towards one Operating System or another here.

Even considering that in most cases, when business asks for something, security considerations are secondary, there is also the point that Windows is, by its nature, very hard to make malware-resistant. Plenty of malware problems are not introduced as a result of a lack of user awareness (for example, unknowingly installing malware in the form of fake anti-virus or browser plug-ins), and plenty of services are required to run with SYSTEM privileges. These factors make Windows platforms hard to defend in a cost-effective, manageable way.

Certainly we have never been able to manage user OS rights/privileges, and that isn’t going to change any time soon. There is no third-party product that can help. Does security actually make an effective argument in cases where users are asking for control over printers and wifi management? Should such functions be locked anyway? Not necessarily. And once we start talking fine-grained admin rights control we’re already down a dark alley – at the least, security needs to justify to operations why it is making their jobs more difficult and the environment more complex and therefore less reliable. And with privilege controls, security also has to justify to users (including C-levels) why their corporate device is less usable and convenient.

For the aforementioned reasons, the security argument is null and void. I don’t see BYOD as a security argument at all, mainly because the place where security is at these days isn’t a place where we can effectively manage user device security – that doesn’t change with or without BYOD, and it is likely to remain the case for some years to come yet. We lost that battle, and the security strategy has to be planned around the assumption that user subnets are compromised. I would agree that in a theoretical case where user devices are wandering freely, not at all subject to corporate controls, there is scope for a greater frequency of malware issues, but regardless, the stance has to be based on the assumption that one or more devices in corporate subnets have been compromised and that the malware is designed to communicate both inbound and outbound.

How about other OS flavors, Apple OS X for example? With other OS flavors it is possible to manage privileges and lock them down to a much larger degree than we can with Windows, but as has been mentioned plenty of times now, once another OS goes mainstream and grows in corporate popularity, it also shows up on the radars of malware writers. Reports of malware designed to exploit vulnerabilities in OS X software started surfacing earlier in 2012, with the “Flashback” trojan given the widest coverage.

I would venture that at least the possibility exists to use technical controls to lock down Unix-based devices to a much larger degree than MS Windows variants, but of course the usability experience has to match the needs of the business. Basically, regardless of whether our view is utopian or realistic, there will be holes, and quite sizable holes too.

For the business case? Having standard-build user workstations and laptops does make life easier for admins, and it allows for manageability and efficiency, but there is a wider picture of course. The business case for BYOD is a harder one to make than we might have at first thought. There are no obvious answers here. One of the more interesting con articles was from CIO Magazine earlier in 2012: “BYOD: If You Think You’re Saving Money, Think Again”, and then Cisco objectively reports that there are plenty in the pro corner too: “Cisco Study: IT Saying Yes To BYOD”.

So what does all this bode for the future? The manageability aspect and therefore the business aspect is centered around the IT costs and efficiency analysis. This is more of an operational discussion than an information risk management discussion.

The business case is inconclusive, with plenty in the “say no to BYOD” camp. The security picture is without foundation – we have a security nightmare with user devices, regardless of who owns the things.

Overall the answer naturally lies in management philosophy, if we can call it that. There is what we should do, and what we will do… and of course these are often out by 180 degrees from each other. The lure of BYOD will be strong at the higher levels, who usually only have the balance sheet as evidence, along with the sales pitches of vendors. Accountant-driven organisations will love the idea, and there will be variable levels of bravery, confidence, and technical backing in the IT rationalization positions. Similar discussions will have taken place with regard to cloud’ing and outsourcing.

The overall conclusion: BYOD will continue to grow in 2013 and probably beyond. Whether that should be the case or not? That’s a question for operations to answer, but there will be plenty of operations departments that will not support the idea after having analyzed the costs versus benefits picture.

How To Break Into Security – Planet Earth Edition

The venerable Brian Krebs has recently been running some stories from various demigods of the infosec world, aimed at those wishing to enter the information security field – aspiring graduate ninjas, and others seeking the mythical pot of gold at the end of the rainbow.

First up there was Thomas Ptacek’s edition, then we had some pearls of wisdom from Bruce Schneier, Jeremiah Grossman, Richard Bejtlich, and then Charlie Miller.

Thomas Ptacek claimed about the security field: “It’s one of the few technology jobs where the most fun roles are well compensated”, and “if you watched “Sneakers” and ideated a life spent breaking or defending software, great news: infosec can be more fun in real life, and it’s fairly lucrative.” Well…I am not refuting any of this, but it is certainly quite unusual for jobs in information security to be fun.

Thomas talks of the benefits of having extensive programming experience – and this is something I advocate myself quite strenuously (more on that later). Thomas’s viewpoint was centered around appsec, which is fine. I think for myself, in terms of defending networks, we need to look at two main areas: appsec and operating systems / databases. There is some good advice in Thomas’s article about breaking into application security, although I wouldn’t say that this area is everything. There’s a little too much religious fervor about appsec in the article for my liking, just as one often sees a lack of balance in other areas, such as CISSP-worship, and malware “reverse engineering” – basically – “my area is the alpha and omega – all that was, is, and ever will be”.

There are areas that matter in security other than application security. I wouldn’t say it’s all about appsec, but I would say that the two main areas are appsec and operating system and database configuration. “But operating systems and databases are also applications,” I hear you say. Yes, but when we’re talking appsec in infosec, we’re usually talking about web applications. There are few web attacks where one single exploit leads to something really bad happening, apart from perhaps a SQLi that in itself reveals sensitive database-hosted information.

With regard to web applications, I think Thomas is spot-on with his comments about learning about web application security assessment and how to get clued up in this area. Also the comments about Nessus and getting into penetration testing – sad but true.

With regard to Bruce Schneier’s “breaking into” edition – nothing he says is factually incorrect (most of what we talk about is subjective, neither black nor white, but grey), but the comments are not at all close to the coal-face realities of most businesses’ in-house or service providers’ practices. Wannabe security pros reading this will be sure to get grandiose visions of their future lives as a security pro – but in 90%-plus of cases the vocational activities of security professionals do not match the picture painted by Bruce Schneier. I’ll explain more on this later.

Richard Bejtlich’s response was centered around getting into penetration testing with Metasploit, and Jeremiah Grossman’s was the most all-encompassing and, in my opinion, the response that had the most value for security pro wannabes – although Charlie Miller’s wasn’t far behind. In particular Mr Grossman plays down the effectiveness of accreditation programs in favor of practical experience – wise words indeed. Charlie Miller had similar opinions.

In Charlie Miller’s response there was a lot of talk of really specialized niche areas like reverse engineering and so on, but he does temper this with “I really do a lot of reverse engineering and binary analysis, which is unusual.” and “for those starting out, it probably makes more sense to learn some languages more useful for web applications, like PHP or Java or something. The majority of jobs I come across in application security are web applications, so unless you’re a dinosaur like me, you probably want to become a web app expert. Web application security is a lot easier to get started in as well.”

Charlie Miller’s response sort of segues me into the wider scope here, and that is the realities “out there”. The articles are based on responses by folk who’ve rightly become esteemed professionals in their field, and there is some really valuable insight there. The thing is – there is a lot of talk of the security field being a place for artists and magicians, and of it being technically demanding, but there are very few places where technical acrobatics skills in security are seen as having any value to businesses, or even to security line managers for that matter – and therefore such intellectual capital just does not appear on the balance books of these businesses.

We’ve been through a vicious 70 to 80% of a sine wave of pain in security since the late 90s. The security world painted by the fellowship of five assembled by Brian Krebs (speaking of whom, it would be nice to hear his version of the “breaking into” story) seems much more like the world of the late 90s than today. Security was heavily de-engineered through the 2000s. Things started to change around 2010, but we’re still very much in non-tech territory, and many of the security line managers who will interview prospective Security Analysts will not have an IT background, and their security practice will be hands-off, non-tech, check-list based. Anything “tech” to do with information risk management will be handled by an ops team, but there won’t be any “reverse engineering” or “fuzzing” over there…far from it. More like – firewall configuration, running bad vulnerability management suites, monitoring IDS/SIEM logs.

Picking up on some of Bruce Schneier’s comments: “You can be an expert in viruses, or policies, or cryptography.” Policies? OK, this part is true – if you want to be an expert in policies, whatever that is, you can certainly find this in 90%+ of businesses – but is this an economically viable and sustainable position, or even, for that matter, anything that any Homo sapiens would ever really want to do? Probably accountancy would be a better bet.

“Viruses?” Hmm. There are increasing numbers of openings for “malware reverse engineers”, where really what they’re looking for is incident response – they want to know what happened after they discovered that some of their laptops were connecting out to various addresses in places they hadn’t heard of, prior to the click of doom. If you get interviewed for one of these positions, be prepared to answer questions about SIEM technologies and incident response. These openings are not usually associated with reverse engineering to the level of detail of the pattern-makers in the anti-virus software market – and if they are, they needn’t be, and the line manager will get to realize this after a while.

And “cryptography”? We have Bruce’s comments and then we have a title heading from a book by Shostack and Stewart: “Amateurs Study Cryptography; Professionals Study Economics”. So who do you believe here? To be fair, the book was published at the height of the de-engineering phase of security and it sort of fitted with the agenda of the times. I would go with Bruce Schneier again here, but with some qualification (the final paragraph also talks about math): most security departments won’t ever go anywhere near anything mathematical or even crypto-related, and when they do, it will be with a checklist approach that goes something like “is a strong key used?”, “yes”, “ok, good, tick in a box”, or “is DES used?”, “yes”, “ok, that’s bad, I think, anyway use triple DES please” – with no further assistance for the dev team.

With regard to cryptography, right or wrong, it’s really only on tiny islands where the math is seen as relevant – places where they code the apps and people are assigned to review the security of the app. And these territories are keenly disputed. Most of the concerns in the rest of the business world will never get more techy than discussions about key management – which is the more common challenge with crypto anyway (mostly we’re using public crypto algorithms in security, so the challenge is in protection of the key).

There were comments from all respondents about testing applications and breaking into networks. Again, the places where skills like reverse engineering are actually relevant are so small. Bruce Schneier painted a grand picture of thinking like a hacker, not just a mere engineer, in order to be able to create systems that are difficult to compromise by the most advanced hackers. But most humans who design systems are not even thinking about security, or they’re on such a tight deadline (with related KPIs and bonuses) that they side-step security. So as a pen test guru wannabe, you may possess extremely high levels of fuzzing, exploit coding, and reversing skills, but you will never get to use them, in fact you will intimidate most interviewers, and you’ll be over-qualified. There will be easier ways to break into systems in most cases. In fact in general, as I commented in an earlier post, security is insufficiently mature in most organizations to warrant any manual penetration testing whatsoever.

So really, what I have had to say here may sound harsh or “negative”, but I would hate for anyone to get into a field that they thought was challenging, only to discover that it’s anything but. I believe things are changing, but it’s at rather a slow pace, and the field of security has been broken for so long, that there are very few around who know how to fix it. Security is getting more challenging, that’s for sure, but for the security pro who goes looking for a job in this field because of the tech challenge aspect: just be very careful about what you’re getting into. Many jobs sound great from the job descriptions posted to recruitment agents, but this is only a show. The reality inside the team is that you may be sent to Siberia if you so much as use a tech-sounding word like “computer” or “IP address” – while this sounds unreal I can assure the reader that such a scenario is most certainly real, although it is of course more often the case that the job would just not be offered to a “techie”.

It’s impossible to cover the jobs aspect of information security in just this article. I had a more comprehensive stab at it in Chapter Six of Security De-engineering. I would say to the prospective security pro, though, that the advice given by the five mentioned in this article is not bad advice at all – it’s just that you may push yourself to higher levels and not see significant benefit from it in your career any time soon. There will always be some benefit, just not as much as you might expect. Certainly, you will have more confidence, but you will also probably over-qualify yourself for your current position.

As a security pro with a tech inclination, getting into security might not be as hard as you thought. Thomas Ptacek mentioned “A good way to move into penetration testing: grab some industry standard tools and use an Amazon EC2 account to set up a “shooting range” to attack. Some of the best-known tools are available for free: the Nessus scanner, for instance, while not an application security tool, is free and can land you a network penetration testing role that you can use as a springboard to breaking applications.” Believe me, this is not a difficult target, but because of the way the security industry is, you could very well land a penetration testing job with the preparation as described by Mr Ptacek.

All I had to say here is aimed at managing expectations. You may well find that you have to market yourself down a bit in order just to get a foothold in the industry. Once there, by pushing yourself to learn more and gain more advanced skills, it could be that you eventually osmosize towards the ideal job of your dreams. However, these positions are rare in reality. Many of the folk I worked with in the earlier days of my career had the ninja skills that have been discussed in the five articles mentioned here. Once we got to the early 2000s they realized that security was no longer a place for them. Has the demand for these advanced skills returned? It has, to some extent, but still the demand is minuscule compared with the demand for the usual skills required by the vast majority of businesses.

ZDnet’s Interview with Mikko Hypponen – “The current state of the cybercrime ecosystem” – Highlights

Last week Dancho Danchev interviewed Mikko Hypponen (CSO @ F-Secure) on the subject of CaaS (Cybercrime as a Service), the recent botnet takedowns, and OPsec within cybercrime “organisations”. The questions from the interviewer occupied three times as much real estate as the answers (!), so here is a distillation of some of the more salient points arising from the interview, covered fully in this ZDnet article. Also, some of the questions provided a lot of information :).

OPsec (operational security), or the lack of it, is not really how criminals and botnet masters are traced – it’s chiefly because they like to brag about their exploits on forums and in chat. This makes them easier to trace than might be expected.

The traditional cybercrime marketplaces have been illuminated, and the DarkMarket, as it’s been called, is not so dark any more – indeed some have even claimed that it no longer exists. Mikko Hypponen talks about Tor and Freenet and how services are moving to the “deep web” – this worries law enforcement, but few details were forthcoming.

These days, everything from spam and phishing to launching malware attacks and coding custom malware is available as a professionally packaged service. Mikko replies that there is little the good guys can do to prevent this: “These are not technological problems; they are mostly social problems. And social problems are always hard to fix”.

“Some criminals are selling banking trojans and then other hackers are selling tailor-made configuration files for those trojans, targeting any particular bank. Going prices for such config customization seem to be around $500 at the moment.”

“Partnerka” affiliate networks with rogue AVs and ransom trojans have been highly successful for the bad guys, and this kind of affiliate model also means that the masters behind the schemes don’t need to get their hands dirty anymore.

Mac OS X and security: historically the Flashback.K episode is very important – a turning point. Only 2 to 5% of all Macs were infected, but this is huge nonetheless. It means that whereas in the past Mac owners didn’t need anti-virus, now they do need it. However, there is still only one gang behind Mac malware – this is likely to change.

Despite the multiple claims from many media sources, the cybercrime marketplace does not generate more revenue than sales of hard drugs, but at the same time we do not possess the means to quantify the financial numbers. It is known that individual groups have made tens of millions of dollars. But not hundreds.

These days malware and trojans are not as much about exploiting Patch Tuesday issues as they are about using browser extensions and plugins. Drive-by-downloads via exploits targeting browser add-ons and plugins are clearly the most common way of getting infected.

Mozilla’s plugin check is quite effective, but in practice the Chrome model of sandboxing and replacing third-party add-ons with its own replacements seems to work really well. Chrome has issues with privacy, but in terms of security it’s better than the others. Chrome users get exploited less than the others.

Opt-in botnets have been a growing problem over the past two years – often this is about patriotic hacktivism, where users sometimes deliberately infect themselves with a DDoS agent. These are likely to be around for a very long time, and it’s been reported recently by Akamai that DDoS attacks have been launched from a botnet of mobile phones. We’re likely to see DDoS botnets move to totally new platforms in the future. Think cars and microwave ovens launching attacks. Tools such as LOIC and HOIC have brought the “opt-in botnet” model to the masses, and it works. Unfortunately.

Android has made malware for Linux a reality, as identified in an F-Secure report. Quoting Mr Hypponen: “Old Symbian malware is going away. Nobody is targeting Windows Phone. Nobody is targeting iPhone. And Android is getting targeted more and more. iOS, the operating system in iPhone (and iPad and iPod) was released with the iPhone in the summer of 2007 – five years ago. The system has been targeted by attackers for five years, with no success. We still haven’t seen a single real-world malware attack against the iPhone. This is a great accomplishment and we really have to give credit to Apple for a job well done. Out of all Linux variants, Android is the clear leader in malware.”

Mobile malware vendors are cashing out by sending text messages and placing calls to expensive premium-rate numbers – this will be around for at least the near future. It works and it’s easy to do. Eventually, we’ll probably see more mobile banking trojans and new trojans targeting micropayments.

Attacks against human rights activists are undeniably coming from China, according to Mr Hypponen. Some of the attacks came from the same source as attacks against defence contractors and governments – although proving it is hard.

Facebook, Twitter, Amazon’s EC2, LinkedIn, Baidu, Blogspot and Google Groups have all had criminal groups launching their campaigns from their networks in the past. Some of these are easily able to kick out abusers though, and spot them fairly quickly.

Anti-virus software and its failings aside… operators are in a key position to move security from a product to a service, and to protect the masses with both managed security solutions on end-user devices and behind-the-scenes monitoring and filtering of malicious traffic.

In March 2011, Dancho proposed that all ISPs should quarantine their malware-infected users until they prove they can use the Internet in a safe way. Mikko agrees this is a good idea, and it is now being practised successfully with F-Secure’s solutions and several operators.