ZDNet’s Interview with Mikko Hypponen – “The current state of the cybercrime ecosystem” – Highlights

Last week Dancho Danchev interviewed Mikko Hypponen (Chief Research Officer at F-Secure) on the subject of CaaS (Cybercrime as a Service), the recent botnet takedowns, and OPSEC within cybercrime “organisations”. The interviewer’s questions occupied three times as much real estate as the answers (!), so here is a distillation of some of the more salient points arising from the interview, which is covered in full in this ZDNet article. Some of the questions themselves also provided a lot of information (:)) .

OPSEC (operational security), or the lack of it, is not chiefly how criminals and botnet masters get traced – they are traced mainly because they like to brag about their exploits on forums and in chat. This makes them easier to find than might be expected.

The traditional cybercrime marketplaces have been illuminated, and the “DarkMarket”, as it’s been called, is not so dark any more – indeed, some have even claimed that it no longer exists. Mikko Hypponen talks about Tor and Freenet and how services are moving to the “deep web” – this worries law enforcement, but few details were forthcoming.

These days, everything from spam and phishing to launching malware attacks and coding custom malware is available as a professionally packaged service. Mikko’s reply: there is little the good guys can do to prevent this. “These are not technological problems; they are mostly social problems. And social problems are always hard to fix”.

“Some criminals are selling banking trojans and then other hackers are selling tailor-made configuration files for those trojans, targeting any particular bank. Going prices for such config customization seem to be around $500 at the moment.”

“Partnerka” affiliate networks with rogue AVs and ransom trojans have been highly successful for the bad guys, and this kind of affiliate model also means that the masters behind the schemes don’t need to get their hands dirty anymore.

Mac OS X and security: historically, Flashback.K is very important – a turning point. Only 2 to 5% of all Macs were infected, but that is huge nonetheless. It means that whereas Mac owners didn’t need anti-virus in the past, now they do. However, there is still only one gang behind Mac malware – and this is likely to change.

Despite the multiple claims from many media sources, the cybercrime marketplace does not generate more revenue than sales of hard drugs – but at the same time we do not possess the means to quantify the financial numbers. It is known that individual groups have made tens of millions of dollars. But not hundreds.

These days malware and trojans are not so much about exploiting Patch Tuesday issues as about abusing browser extensions and plugins. Drive-by downloads via exploits targeting browser add-ons and plugins are clearly the most common way of getting infected.

Mozilla’s plugin check is quite effective, but in practice the Chrome model – sandboxing, and replacing third-party add-ons with Google’s own equivalents – seems to work really well. Chrome has issues with privacy, but in terms of security it’s better than the others, and Chrome users get exploited less.

Opt-in botnets have been a growing problem over the past two years – often this is about patriotic hacktivism, where users deliberately infect themselves with a DDoS agent. Tools such as LOIC and HOIC have brought the opt-in botnet model to the masses, and it works. Unfortunately. These botnets are likely to be around for a very long time, and Akamai recently reported DDoS attacks launched from a botnet of mobile phones. We’re likely to see DDoS botnets move to totally new platforms in the future – think cars and microwave ovens launching attacks.

Android has made malware for Linux a reality, as identified in an F-Secure report. Quoting Mr Hypponen: “Old Symbian malware is going away. Nobody is targeting Windows Phone. Nobody is targeting iPhone. And Android is getting targeted more and more. iOS, the operating system in iPhone (and iPad and iPod), was released with the iPhone in the summer of 2007 – five years ago. The system has been targeted by attackers for five years, with no success. We still haven’t seen a single real-world malware attack against the iPhone. This is a great accomplishment and we really have to give credit to Apple for a job well done. Out of all Linux variants, Android is the clear leader in malware.”

Mobile malware vendors cash out by sending text messages and placing calls to expensive premium-rate numbers – this will be around for at least the near future, because it works and it’s easy to do. Eventually, we’ll probably see more mobile banking trojans, and new trojans targeting micropayments.

Attacks against human rights activists are undeniably coming from China, according to Mr Hypponen. Some of the attacks came from the same source as attacks against defence contractors and governments – although proving it is hard.

Facebook, Twitter, Amazon’s EC2, LinkedIn, Baidu, Blogspot and Google Groups have all had criminal groups launching campaigns from their networks in the past. Some of these services, though, spot abusers fairly quickly and are easily able to kick them out.

Anti-virus software and its failings aside… operators are in a key position to move security from a product to a service, and to protect the masses with both managed security solutions on end-user devices and behind-the-scenes monitoring and filtering of malicious traffic.

In March 2011, Dancho proposed that all ISPs should quarantine their malware-infected users until they prove they can use the Internet in a safe way. Mikko agrees this is a good idea, and it is now being practised successfully with F-Secure’s solutions at several operators.

A Tribute To Our Oldest And Dearest Of Friends – The Firewall (Part 2)

In the first part of my coverage of firewalls I discussed their usefulness: apart from being one of the few commercial offerings to actually deliver in security, the firewall does a great deal for our information security posture when it’s configured well.

Some in the field have advocated that the firewall has seen its day and it’s time for the knacker’s yard, but these opinions are formed at a considerable distance from the coal face of this business. If firewalls are seen as they are in the movies – things to be “broken through” or “punched through” – they can look useless when bad folk compromise networks seemingly effortlessly. But one doesn’t “break through” a firewall. Your traffic’s profile is assessed: if it fits a permitted profile, it is allowed through; if not, it absolutely shall not pass.
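
To make the “profile” point concrete, here is a minimal sketch of first-match rule evaluation – the essence of what a packet filter does. This is a Python illustration only; the addresses, ports and rules are hypothetical, not a real rulebase:

from ipaddress import ip_address, ip_network

# (source net, destination net, destination port, action) - hypothetical rules
RULES = [
    ("10.30.0.0/16", "203.0.113.10/32", 443, "allow"),  # internal clients -> web server
    ("0.0.0.0/0",    "203.0.113.25/32", 25,  "allow"),  # anyone -> mail relay
]

def evaluate(src, dst, dport):
    # First match wins: the packet's "profile" is checked against each rule.
    for src_net, dst_net, port, action in RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and dport == port):
            return action
    return "deny"  # no matching profile: you shall not pass

print(evaluate("10.30.1.5", "203.0.113.10", 443))    # allow
print(evaluate("198.51.100.7", "203.0.113.25", 80))  # deny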

There have been counters to these arguments in support of firewalls, but the efficacy of well-configured firewalls has only ever been covered at some distance from the nuts and bolts, and so is not fully appreciated. What about segmentation, for example? Are there any other security controls or products that can be so indisputably linked with cost savings? Segmentation allows us to devote more resources to the more critical subnets, rather than applying blanket measures across a whole network. As a contractor with a logistics multinational in Prague, I was questioned a few times as to why I was testing all internal Linux resources at a standard UK contract rate. The answer? Because they had a flat, wide-open internal network, with only hot-swap redundant firewalls on the perimeter. Regional offices connecting into the data centre had frequent malware problems, with routable access to critical infrastructure.

Back in the late 90s and early noughties, some service providers offered a firewall assessment service, but the engagements lacked focus and direction, and the service eventually disappeared altogether – partly because of the lack of thought that went into preparation, and partly because many in the market really did believe they had nailed firewall configuration. These engagements were delivered along the lines of “why do you leave these ports open?”, “because application X needs those ports open”… and that would be the end of that, because the service providers didn’t know application X, where its IT assets were located, or its business importance. Thirty minutes into the engagement there were already “why are we here?” faces in the room.

As a roaming consultant, I would always ask to see firewall configurations as part of a wider engagement – usually an architecture workshop whiteboard session, or a larger-scale risk assessment. Under this guise there is licence to use firewall rulebases to tell us a great deal about the organisation, rather than querying each micro-issue.

Firewall rulebases reveal a large part of the true “face” of an organisation. Political divisions are laid bare, along with the old classic: social networks, betting sites and the like opened only for senior management subnets – and, oftentimes, some interesting ports opened only for managers’ secretaries.

Nine times out of ten, when you ask to see firewall rules, the faces in the room will change from “this is a nice time-wasting meeting, but maybe I’ll learn something about security” to mild-to-severe discomfort. Discomfort, because there is no hiding place any more. Network and IT ops will often be aware that there are shortcomings, but as long as we don’t see their firewall rules, they can hide them and deflect the conversation in subtle ways. Firewall rulebases reveal all manner of architectural and application-related issues.

To illustrate, here are some common examples of firewall configuration and data flow/architectural issues:

– Internal private resources 1-to-1 NAT’d to public IP addresses: an internal device with a private RFC 1918 address (something like 10.x.x.x or 192.168.x.x) has been allocated a public IP address that is routable from the public Internet and clearly “visible” on the perimeter. Why is this a problem? If this device is compromised, the attacker has compromised an internal device and therefore has access to the internal network. What they “see” (can port scan) from there depends on internal network segmentation, but if they upload and run their own tools and warez on the compromised device, it won’t take long to learn a great deal about the internal network make-up. This NAT’ing problem would be a severe one for most businesses.
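
Spotting this in a review can be as simple as scanning the NAT table for private internal addresses sitting behind perimeter addresses. A minimal sketch, assuming a 1-to-1 NAT export; the mappings are hypothetical, with the public side drawn from documentation ranges:

from ipaddress import ip_address

# perimeter (public) address -> internal address; hypothetical examples
NAT_MAP = {
    "203.0.113.20": "10.30.5.7",
    "203.0.113.21": "192.168.1.40",
}

for public, internal in NAT_MAP.items():
    if ip_address(internal).is_private:  # RFC 1918 host exposed on the perimeter
        print(f"{public} maps 1-to-1 onto internal host {internal}")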

– A listening service was phased out, but the firewall still leaves the port open: the severity of this is usually quite high, but as with everything else in security, it depends on a lot of factors. Usually, even in default configurations, firewalls “silently drop” packets that are denied, so there is no answer to a TCP SYN from a port scanner trying to fire up some small talk of a long winter evening. However, when there is no TCP service listening on a higher port (for example) but the firewall also doesn’t block access to that port, there will be a quick response to the effect of “I don’t want to talk, I don’t know how to answer you, or maybe you’re just too boring” – bad, but at least there’s a response. Let’s say port 10000 TCP was left unfiltered: a port scanner like nmap will report the other ports as “filtered” but 10000 as “closed”. “Closed” sounds harmless, but the attacker’s eyes light up on seeing it, because they now have a port on which to bind their shell – a port that will be accessible remotely. If all ports other than listening services are filtered, this presents a problem for the attacker and slows them down, which is ultimately what we’re trying to achieve.
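
The difference is easy to demonstrate with a simple connect() probe – a rough approximation of how nmap classifies ports. A sketch; the target address below is a hypothetical documentation address:

import socket

def probe(host, port, timeout=3.0):
    # Rough classification in the spirit of nmap's connect() scan:
    # "open" = SYN/ACK, "closed" = RST back (ConnectionRefusedError),
    # "filtered" = silent drop (the probe times out).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"    # nothing listening, but the firewall let the SYN through
    except socket.timeout:
        return "filtered"  # the firewall silently dropped the probe
    finally:
        s.close()

print(probe("203.0.113.10", 10000))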

– Dual-homed hosts: sometimes you will see internal firewalls with rules whose source addresses look out of place. For example, most of the rules are defined with 10.30.x.x addresses, and then in amidst them you see a 172.16.x.x. Oh oh. It turns out this is the source address of a dual-homed host: one NIC has an address on a subnet on one side of the firewall, and another NIC sits on the other side. Effectively the dual-homed device bypasses the firewall’s controls, and if it is compromised, the firewall is rendered ineffective. Nine times out of ten this dual-homing is set up purely as a shortcut to make admins’ lives easier. I did once see it on a DMZ host whose internal-facing NIC was on the same subnet as a critical Oracle database.
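
This is one of the few issues that can be hunted semi-automatically: pull the source addresses out of a rule export and flag anything outside the subnets that should appear on that firewall. A sketch under those assumptions, with hypothetical addresses:

from ipaddress import ip_address, ip_network

EXPECTED_SOURCES = [ip_network("10.30.0.0/16")]  # subnets that belong on this firewall
RULE_SOURCES = ["10.30.1.10", "10.30.44.2", "172.16.9.5"]  # from a rule export

for src in RULE_SOURCES:
    if not any(ip_address(src) in net for net in EXPECTED_SOURCES):
        print(f"out-of-place source {src} - possible dual-homed host bypassing the firewall")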

– VPN gateways in inappropriate places: VPN services should usually terminate on a perimeter firewall, which enables the firewall to control what a VPN user can and cannot “see” once authenticated. Generally, the resources made available to remote users should sit in a VPN DMZ – at least give it some consideration. It is surprising (or perhaps not) how often you will see VPN services on internal network devices, so that on firewalls such as the inner firewall of a DMZ, you will see classic VPN services permitted to pass inbound! The VPN client authenticates and then has direct access to the internal network – a nice encrypted tunnel for syphoning off sensitive data.
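
A rulebase review can flag this pattern too: look for well-known VPN service ports allowed inbound past an inner firewall. A sketch over a hypothetical rule export; the port list covers IKE, NAT-T and PPTP, while SSL VPNs on TCP 443 need manual judgement since 443 is also just HTTPS:

VPN_SERVICES = {("udp", 500), ("udp", 4500), ("tcp", 1723)}  # IKE, NAT-T, PPTP

# Hypothetical inner-firewall rule export: (protocol, destination port, action)
INNER_RULES = [
    ("tcp", 1433, "allow"),   # app server -> database: fine
    ("udp", 500,  "allow"),   # IKE allowed inbound past the inner firewall?
]

for proto, port, action in INNER_RULES:
    if action == "allow" and (proto, port) in VPN_SERVICES:
        print(f"VPN service {proto}/{port} permitted inbound past the inner firewall")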

Outbound Rules

Outbound filtering is often ignored, usually because the business is unaware of the nature of attacks and the technical risks. Inbound filtering is usually quite decent, but it’s still the case as of 2012 that many businesses do not filter any outbound traffic – as in none whatsoever. There are several major concerns when it comes to egress:

– Good netizenship: if there is no outbound filtering, your site can be broadcasting all kinds of traffic to all networks everywhere. Sometimes there is nothing malicious in this – it’s just seen as incompetence by others. But then of course there is the possibility of internal staff hacking other sites, or of your site being used as a base from which to launch attacks elsewhere – with a source IP address registered under your organisation’s ownership – and this is no small matter.

– Your own firewall can be DoS’d: border firewalls NAT outgoing traffic, translating addresses from private to public space. In a malware outbreak that generates a lot of traffic, the NAT pool can fill quickly, and the firewall’s NAT’ing can then fail to service legitimate requests. This wouldn’t happen if those packets were simply dropped.
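
Some back-of-envelope arithmetic shows how quickly this bites. The figures below are illustrative assumptions, not measurements:

# Illustrative assumptions, not measurements
nat_slots = 64000     # usable source ports behind one public NAT address
conn_rate = 2000      # new outbound connections per second during an outbreak
idle_timeout = 120    # seconds before the firewall expires an idle mapping

print(f"pool exhausted in ~{nat_slots / conn_rate:.0f} seconds")    # ~32s
print(f"steady-state demand: {conn_rate * idle_timeout} mappings")  # 240000, far above 64000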

– It is an essential function of most malware, and of manual attacks, to be able to dial home once “inside” the target – for botnets, for example, this is essential. Plus, some publicly available exploits initiate outbound connections rather than firing up listening shells.

Generally, as with ingress, take the standard approach: start with deny-all, then figure out which internal DNS and SMTP servers need to talk to which external devices, and take the same approach with other services. Needless to say, this has to be backed by corporate security standards, and made into a living process.
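
To make the approach concrete, here is a minimal sketch of that end state – default deny, with named servers allowed out for named services. This is a Python illustration under hypothetical addresses, not real firewall syntax:

from ipaddress import ip_address, ip_network

# Hypothetical policy: only the DNS forwarder and the mail server may
# initiate outbound connections; everything else is denied by default.
EGRESS_ALLOW = [
    ("10.30.0.53/32", "udp", 53),   # internal DNS forwarder -> external resolvers
    ("10.30.0.25/32", "tcp", 25),   # internal mail server -> external SMTP
]

def egress_allowed(src, proto, dport):
    for src_net, p, port in EGRESS_ALLOW:
        if ip_address(src) in ip_network(src_net) and (proto, dport) == (p, port):
            return True
    return False  # deny-all: anything not explicitly allowed is dropped

print(egress_allowed("10.30.0.53", "udp", 53))  # True
print(egress_allowed("10.30.7.9",  "tcp", 25))  # False: a workstation talking SMTP out is suspect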

Some specifics on egress:

– NetBIOS broadcasts reveal a great deal about internal resources – block them. In fact, for any type of broadcast: what possible reason can there be for allowing it outside your network? There are other legacy protocols which broadcast nice information for interested parties – Cisco Discovery Protocol, for example.

– Related to the previous point: be as specific as possible with subnet masks. Make these as “micro” as possible.

– There is a general principle around proxies for web access and other services: the proxy is the only device that needs access to the Internet; the others can be blocked.

– DNS: usually there will be an internal DNS server in private address space which forwards queries to a public Internet DNS service. Make sure this DNS server is the only device “allowed out”; direct connections from other devices to public Internet DNS services should be blocked.

– SMTP: access to mail services is important for many malware variants, or there is mail client functionality in the malware itself. Internal mail servers should be the only devices permitted to connect to external SMTP services.

As a final note, for those wishing to find more detail, the book I mentioned in part 1 of this diatribe, “Building Internet Firewalls”, illustrates several different ways to set up services such as FTP and mail, and explains very well the principles of segregated subnets and DMZs.