How To Break Into Information Security

I’ve been asked a few times recently, usually by operations folk, to give some advice about how to break into the security sector, so under much pain I decided to commit my thoughts on the subject to this web log post. I’ve commented on this subject before and more extensively in chapter 6 of Security De-engineering, but this version is more in line with the times (up to 2012 I was advising a wide pass-by trajectory of planet infosec) and it will be shorter – you have my word(s).


First, I’d be wary of trying to get into security purely for financial reasons (David Froud has an excellent blog and one of his posts covered this point well). At the time of writing it is possible to get into the field just by having an IT background and a CISSP. But don’t do that unless you have what’s REALLY required (and do not judge what is REALLY required for the field based on job descriptions – at the time of writing, organisations are still making plenty of mistakes in these). Summarising very briefly:

  • You feel like you have grown out of pure IT-based roles and sort of excelled in whatever IT field you were involved in. You’re the IT professional who doesn’t just clear their problem tickets and switch off. You are, for example, looking for ways to automate things, and self-teach around the subject.
  • Don’t think about getting into security straight from higher education. While it is possible, don’t do it. Just…don’t. Operational Security (or opsec/devopssec) is an option, but have some awareness of what this is (scroll down to the end for an explanation).
  • Flexibility: can jump freely from a Cisco switch to an Oracle Database on any Operating System. Taking an example: some IT folk are religious about Unix and experience a mental block when it comes to Windows – this doesn’t work for security. Others have some kind of aversion to Cloud, whereas a better mindset for the field is one that embraces the challenge. Security pros in the “engineer” box should be enthusiastic about the new opportunities for learning offered by extended use of YAML, choosing the ideal federated identity management solution, Puppet, Azure Powershell, and so on. [In theory] projects where on-premise applications are being migrated to Cloud are not [in theory] such a bad place to be in security [in theory].
  • You like coding. Maybe you did some Python or some other scripting. What I’ve noticed is that coding skills are increasingly being listed as requirements. In fact I heard that one organisation went as far as putting candidates through a programming test for a security role. Python, Ruby, Shell ([Li,U]nix) and PowerShell are common requirements these days. But even if role descriptions don’t mention coding as a requirement, having these skills demonstrates the kind of flexibility and enthusiasm that go well with infosec. “Regex” comes up a lot, but if you’ve done lots of Python/Ruby and/or Unix sed/awk you will be more than familiar with regular expressions (see the short example after this list).
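
To illustrate the regex point, here’s a minimal sketch – the log path is just an example and the pattern is deliberately loose:

# Count the most frequent client IPs in an Apache-style access log.
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head

# The same pattern in sed, extracting only addresses at the start of a line.
sed -nE 's/^(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' /var/log/apache2/access.log | head

If one-liners like these feel natural, or at least learnable, that’s the kind of fluency being described here.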

There is a non-tech element to security (sometimes referred to as “GRC”) but this is something you can get into later. Being aware of international standards and checking to see what’s in a typical corporate security policy is a good idea, but don’t be under the impression that you need to be able to recite verses from these. Generally speaking “writing stuff” and communication is more of a requirement in security than other fields, but you don’t need to be polished at day zero. There are some who see the progression path as Security Analyst –> Security Consultant (Analyst who can communicate effectively).

Another common motivator is hacker conferences or Mr Robot. Infosec isn’t like that. Even the dark side – you see Elliot in a hoody writing code with electronic techno-beats in the background, but hackers rarely write code to compromise networks, if at all. Mostly, the code is written for them by others. And as with the femtocell and Raspberry Pi incidents, they usually have to assume a physical presence on the inside, or they are an internal employee themselves, or they dupe someone on the inside of the organisation under attack. Even if you’re in a testing role on the light side, the tests are vastly restricted and there’s a very canned approach to the whole thing, with performance KPIs based on reports or something else that doesn’t link to actual intellectual value. It’s far from glamorous. There’s an awful lot of misunderstanding out there. What is spoken about at hacker confz is interesting, but it’s not usually stuff that is required to prove the existence of vulnerability in a commercial penetration test – most networks are not particularly well defended, and very little attention is given to results, more so because in most cases the only concern is getting ticks in boxes for an audit – and the auditors are often 12 years old and have never seen a command shell. Quality is rarely a concern.

It’s good practice to build up a list of the more influential bloggers, curate a decent Twitter feed, and check what’s happening daily. Also, here are the books that I found most useful in terms of starting out in the field:

  • TCP/IP Illustrated – there are 3 volumes. 1 and 2 are the most useful. Then…
  • Building Internet Firewalls – really a very good way to understand some of the bigger picture ideas behind network architecture design and data flows. I hear rolling of eyes from some sectors, but the same principles apply to Cloud and other “modern” ideas that are from the 90s. With Cloud you have less control over network aspects but network access control and trust relationships are still very much a concern.
  • Network Security Assessment – the earlier versions are also still pertinent unless you will never see a Secure Shell or SMB port (hint: you will).
  • Security Engineering – there’s a very good chapter on Cryptography and Key Management.
  • The Art of Software Security Assessment – whether or not you will be doing appsec for a living, you should look at OWASP’s site and check out WebGoat. They are reportedly looking to bolster their API security coverage, which is nice (a lot of APIs are full of the same holes that were plugged in public apps by the same orgs some years ago). But if you are planning on network penetration testing or application security as a day job, then read this book – it’s priceless and still very applicable today.
  • The Phoenix Project – a good illustrative background for understanding the devops landscape.

Also – take a look at perhaps a Windows security standard from the range of CIS benchmarks.

Finally – as I alluded to earlier – opsec is not security. Why do I say this? Because I have come across many who believed they had made it as a security pro once they joined a SOC/NOC team, and then switched off. Security is a holistic function that covers the entire organisation – not just its IT estate, but its people, management, availability and resilience concerns, and processes. As an example – you could be part of a SOC team analysing the alerts generated by a SIEM (BTW some of the best SIEM material online is that written by Dr Anton Chuvakin). This is a very product-centric role. So what knowledge is required to architect a SIEM and design its correlation rules? This is security. The same applies to IDS. Responding to alerts and working with the product is opsec. Security is designing the rulebase on an internal node that feeds off a strategically placed network tap. You need to know how hackers work, among other areas (see above). Security is a holistic function. A further example: opsec takes the alerts generated by vulnerability management enterprise suites and maybe does some basic false-positive testing. But how does the organisation respond effectively to a discovered vulnerability? This is security.
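
To make that distinction concrete, below is a rough sketch of the kind of logic a correlation rule encodes, expressed as shell purely for illustration – the log path and threshold are assumptions, and a real SIEM would express this in its own rule language:

# Flag source addresses with a burst of failed SSH logins.
awk '/sshd.*Failed password/ {print $(NF-3)}' /var/log/auth.log \
  | sort | uniq -c | sort -rn \
  | awk '$1 > 20 {print "possible brute force from " $2 " (" $1 " failures)"}'

Responding when an alert like this fires is opsec. Deciding that failed logins matter, choosing the threshold, and placing the collection point – that is security.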


Addressing The Information Security Skills Gap

We are told there is a skills gap in information security. I agree – there is, but recent suggestions to address the gap take us to dangerous places that are great for recruitment agencies, but not so great for the business world.

I want to steer away from use of the phrase ‘skills’ in this article because it’s too micro, and the phrase has been violated by modern hiring practices. We are not looking for ‘Websense’ skills, ‘DLP’ skills or, as I saw recently, ‘HSM’ skills. These requirements are silly unless the plan is for organisations to spend 10 to 50 times more than they need on human resource and have a security team of 300. It’s healthier for organisations to look at ‘habits’ or ‘backgrounds’, and along those lines, in information security we’re looking for the following:

  • At least 5 years in an IT discipline: sys admin, DBA, devops bod, programmer for example
  • Evidence of having excelled in those positions and sort of grown out of them
  • Flexibility: for example, the crusty Radagast BSD-derivative disciple who has no fundamentalist views of other operating systems (think ‘Windows’) and not only can happily work with something like Active Directory, but they actually love working with Active Directory
  • A good-to-have-but-not-critical is past evidence of breaking or making things, but this should be seen as a nice bonus. In its own right, it is insufficient – recruiting from hacker confz is far from guaranteed to work – too much to cover here

So really a career in infosec should be seen as a sort of ‘graduation’ on from other IT vocations. There should be an entrance exam based on core technologies and penetration testing. The career progression path goes something like: Analyst (5 years) –> Consultant –> Architect/Manager. Managers and architects cannot be effective if they do not have a solid IT background. An architect who doesn’t know her way around a Cisco router, cannot implement a new SIEM correlation rule, or cannot run and interpret the output from a packet sniffer is not an architect.

Analysts and Consultants should be skilled with the core building blocks to the level of being confidently handed administrative access to production systems. As it is, security pros find it hard to even get read-only access to firewall management suites. And fast access to information on firewall rules can be critical.

Some may believe that individuals fitting the above profile are hard to find, and they’d be right. However, with the aforementioned model, the workforce will change from lots of people with micro-skills or product-based pseudo skills, to fewer people who are just fast learners and whose core areas complement each other. If you consider that a team of 300 could be reduced to 6 – the game has changed beyond recognition.

Quoting a recent article: “The most in demand cyber security certifications were Security+, Ethical Hacking, Network+, CISSP, and A+. The most in demand skills were Ethical Hacking, Computer Forensics, CISSP, Malware Analysis, and Advanced Penetration Testing”. There are more problems with this than can be covered in a reasonable time frame, but none of these should ever be called ‘skills’. Of these, Penetration Testing (leave out the ‘ethics’ qualifier, because it adds a distasteful layer of judgment on top of the law) is the only one that should be called a specification in its own right.

And yes, Governance, Risk and Control (GRC) is an area that needs addressing, but this must be the role of the Information Security Manager. There is a connection between Information Security Manage-ment and Information Security Manage-er. Some organisations have separate GRC functions, the UK public sector usually has dedicated “assurance” functions, and, as I’ve seen with some law firms, they are separated from the rest of security and IT. Decision making on risk acceptance or mitigation, and areas such as Information Classification, MUST have an IT input, and this is the role of the Information Security Manager. There must be one holistic security team consisting of a few individuals and one Information Security Manager.

In security we should not be leaving the impression that one can leave higher education, take a course in forensics, get accreditation, and then go and get a job in forensics. This is not bridging the security skills gap – it’s adding security costs with scant return. If you know something about forensics (usually this will be seen as ‘Encase‘ by the uninitiated) but don’t even have the IT background, let alone the security background, you will not know where to look in an investigation, or have a picture of risk. You will not have an inkling of how systems are compromised or the macro-techniques used by malware authors. So you may know how to use Encase and take a complete disk image, for example, but that will be the limit of your contribution. Doesn’t sound like a particularly rewarding way to spend 200 business days per year? You’d be right.

Sticking with the forensics theme: an Analyst with the right mindset can contribute effectively in an incident investigation from day one. There are some brief aspects of incident response for them to consider, but it is not advisable to view forensics/incident response as a deep area. We can call it a specification, just as an involuntary action such as breathing is a specification, but if we do, we are saying that it takes more than one person to change a light bulb.

Incident response from the organisational / Incident Response Plan (IRP) formation point of view is a one-day training course or a few hours of reading. The tech aspects are 99% not distinct from the core areas of IT and network security. This is not a specialisation.

Other areas such as DLP, Threat Intelligence, SIEM, Cryptography and Key Management – these can be easily adopted by the right security minds. And with regard to security products – it should be seen that security professionals are picking up new tools on-the-fly and don’t need 2-week training courses that cost $4000. Some of the tools in the VM and proxy space are GUIs for older open source efforts such as Nessus, OpenVAS, and Squid, with which they will be well-versed, and if they’re not, it will take an hour to pick up the essentials.

There’s been a lot of talk of Operating Systems (OS) thus far. Operating Systems are not ‘a thing from 1998’. Take an old idea that has been labelled ‘modern’ as an example: OK, let’s go with ‘Cloud’. Clouds have operating systems. VMs deployed to clouds have operating systems. When we deploy a critical service to a cloud, we cannot ignore the OS even if it’s a PaaS deployment. So in security we need people who can view an OS in the same way that a hacker views an OS – we need to think about kill chains and local privilege escalations. The Threat and Vulnerability Management (TVM) challenge does not disappear just because you have PaaS’d everything. Moreover, if you have PaaS’d everything, you have immediately lost the TVM battle. As Beaker famously said in his cloud presentation – “Platforms Bitches”. Popular OSes such as Windows and *nix (including Linux), and popular applications such as Oracle Database, are going to be around for some time yet, and it’s at the OS where the front lines are drawn.

Also, here is a common misconception that does not work: a secops/network engineer going straight into security with no evidence of interest in other areas. ‘Secops’ is not good preparation for a security career, mainly because secops is a sort of purgatory. Just as “there is no Dana, only Zool“, so “there is no secops, only ops”. These roles only have a security element in that they cover operational processes with security products. That is anti-security.

All Analyst roles should have an element of penetration testing and appsec, and when I say penetration testing, I do mean unrestricted testing, as in an actual simulation. That means no restrictions on exploit usage or source address – because attackers do not have such restrictions. Why spend on this type of testing if it’s not an actual simulation?

Usage of Cisco Discovery Protocol (CDP) offers a good example of how a lack of penetration testing experience can impede a security team. If security is being done even marginally professionally in an organisation, there will exist a security standard for Cisco network devices that mandates the disabling of CDP.  But once asked to disable CDP, network ops teams will want justification. Any experienced penetration tester knows the value of intelligence in expediting the attack effort and CDP is a relative gold mine of intelligence that is blasted multicast around networks. It can, and often does, reveal the identity and IP address of a core switch. But without the testing experience or knowledge of how attacks actually go down, the point will be lost, and the confidence missing from the advisory.
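
As a quick demonstration of the point, anyone on a local segment can capture CDP advertisements with a sniffer. A sketch – the interface name is an assumption, and the ether[20:2] test matches the CDP protocol ID in the SNAP header:

# Capture a single CDP advertisement (sent every 60 seconds by default).
sudo tcpdump -nn -v -s 1500 -i eth0 -c 1 'ether[20:2] == 0x2000'

The output typically includes the device ID, platform, software version and management address – exactly the intelligence an attacker wants, and exactly the material with which to build a convincing justification for the network ops team. On the Cisco side, the fix is “no cdp run” globally, or “no cdp enable” on untrusted interfaces.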

The points I’ve just covered are not actually ground-breaking at all. Analysts with a good core background of IT and network security can easily move into any new area that marketeers can dream up.

There is an intuition that Information Security has a connection with Information Technology, if only for the common word in them both (that was ‘Information’, by the way, in case you didn’t get it). However, as Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”.

And please don’t create specialisations for Big Data or Internet of Things…woops, too late.

So, consider a small team of enthusiastic, flexible, fast learners, rather than a large team of people who must be trained at a high cost to understand the UI of an application that was designed to be intuitive and easy to learn in the first place.

Consider using one person to change a light bulb, and don’t be the butt of future jokes.


Ghost – Buffer Overflow in glibc Library

In the early hours of 28th January 2015 (GMT+0), news started to go mainstream about a vulnerability in the open source glibc library. The issue has been given the vulnerability marketing term “Ghost” (the name derives from the fact that the vulnerability arises because of an exploitable bug in the gethostbyname() function).

The buffer overflow vulnerability has been given the CVE reference CVE-2015-0235.

glibc is not a program as such. It’s a library that is shared among many programs. It is most commonly found on systems running some form of Linux as an operating system.

Red Hat mention on their site that DNS resolution uses the gethostbyname() function, and this condition has supposedly been shown to be vulnerable. This makes the Ghost issue much more critical than many are claiming.

As with most things in security, the answer to the question “is it dangerous for me” is “it depends” – sorry – there is no simple binary yes or no here. Non-vulnerable versions of glibc have been available for some time now, so if the installation of the patch (read the last section on “Mitigation” for details) is non-disruptive, then just upgrade.
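
A quick way to gauge exposure is the installed glibc version. Vulnerable versions run from glibc 2.2 up to 2.17, with the fix landing in 2.18 – though note that distros backport the fix into older package versions, so check your vendor’s advisory rather than trusting the version number alone:

# Report the glibc version in use.
ldd --version | head -1

# On Debian/Ubuntu, the package version is also informative.
dpkg -s libc6 | grep '^Version'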

Most advisories are recommending the immediate installation of a patch, but read on…

Impact

glibc is a core component of Linux. The vulnerability impacts most Linux distributions released circa 2000 to mid-2013. This means that, similar to Heartbleed, it affects a wide range of applications that happen to call the vulnerable function.

Giving credit to the open source dev team, the bug was fixed in 2013 but some vendors continued to use older branches of glibc.

If the issue is successfully exploited, unauthorised commands can be executed locally or remotely.

To be clear: this bug is remotely exploitable. So far at least one attack vector has been identified and tested successfully. Exim mail server uses the glibc library, and it was found to be remotely exploitable through a listening SMTP service. But again – it’s not as simple as “I run Exim therefore I am vulnerable”. The default configuration of Exim is not vulnerable.

Just as with Shellshock, a handful of attack vectors were identified immediately and more began to surface over the following weeks. The number of “channels”, or “attack vectors” by which the vulnerability may be exploited determines the likelihood that an attack may be attempted against an organisation.

Also, just as with Shellshock, it cannot be said with any decisiveness whether or not the exploit of the issue gains immediate root access. It is not chiselled in stone that mail servers need to run with root privileges, but if the mail server process is running as root, then root will be gained by a successful exploit. It is safer to assume that immediate root access would be gained.

More recent versions of Exim mail server on Ubuntu 14 run under the privileges of the “Debian-exim” user, which is not associated with a command shell. That said, a default Ubuntu installation will still be easy for a moderately skilled attacker to compromise completely once local access is gained.
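
A simple way to check the privilege level in play – the process name may be exim or exim4 depending on the distribution:

# Show the account each running Exim process belongs to.
ps -eo user:20,comm | grep -i exim

If that prints root, assume a successful exploit yields root directly.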

So there is a possibility that remote, unauthenticated access can be gained with root privileges.

Just as with Shellshock, we can expect automated, bot-initiated scanners that look for signs of exploitable services on the public Internet and attempt to gain local access if a suitable candidate is found.

At the time of writing, Rapid7 have said they are working on a Metasploit test for the condition, which, as well as allowing organisations to test for their vulnerability to Ghost, of course also permits lower-skilled attackers to exploit the issue.

Another example of an exploitable channel is Rapid7’s own tool Nexpose, which, if running on an Ubuntu 12 appliance, will be vulnerable. However this will not be remotely exploitable.

Red Hat mention on their site that DNS resolution also uses the gethostbyname() function and that “to exploit this vulnerability, an attacker must trigger a buffer overflow by supplying an invalid hostname argument to an application that performs a DNS resolution”. If this is true, then the seriousness of Ghost increases dramatically, because DNS is a commonly “exposed” service through Internet-facing firewalls.

A lot of software on Linux systems uses glibc. From this point of view, it’s likely that more attack vectors will appear for Ghost than appeared for Shellshock. Shellshock was a BASH vulnerability and BASH is present on most Linux systems. However, the accessibility of BASH from a remote viewpoint is likely to be less than that of glibc.

Risk Evaluation

There is “how can it be fixed?” but first there is “what should we do about it?” – a factor that depends on business risk.

When the information security community declares the existence of a new vulnerability, the risks to organisations cannot categorically be given even base indicators of “high”, “medium”, or “low”, mainly because different organisations, even in the same industry sector, can have radically differing exposure to the vulnerability. Factors such as network segmentation can have a mitigating effect, whereas the commonplace DMZ plus flat RFC 1918 private space usually vastly increases the potential financial impact of an attack.

Take a situation where a vulnerable Exim mail service listens exposed to the public Internet and sits in a DMZ with a flat, un-segmented architecture: the risk here is considerable. Generally, with most networks, once one host falls, others can fall rapidly after it.

Each organisation will be different in terms of risk. We cannot even draw similarities between organisations in the same industry sector. Take a bank for example: if a bank exposes a vulnerable service to the Internet, and has a flat network as described above, it can be a matter of minutes before a critical database is compromised.

As always the cost of patching should be evaluated against the cost of not patching. In the case of Ghost, the former will be a lot cheaper in many cases. Remember that increasing numbers of attack vectors will become apparent with time. A lot of software uses glibc!! Try uninstalling glibc on a Linux system using a package manager such as Aptitude, and this fact becomes immediately apparent.
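
A simulated removal makes the dependency sprawl obvious without breaking anything. For example, on a Debian-based system:

# -s = simulate; nothing is actually removed.
aptitude -s remove libc6

# Or with plain apt-get, if aptitude is not installed.
apt-get -s remove libc6 | head -40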

Mitigation

Ubuntu versions newer than 12.04 have already been upgraded to a non-vulnerable glibc library. Older Ubuntu versions (as well as other Linux distributions) are still using older versions of glibc and are either waiting on a patch, or a patch is already available.

Note that several services may be using glibc so patching should not take place until all dependencies are known and impacts evaluated.

The machine will need a reboot after the upgrade of glibc.
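
Once dependencies and impacts are understood, on apt-based systems the upgrade itself is a one-liner – a sketch (RPM-based distributions would use something along the lines of yum update glibc instead):

# Upgrade just the C library packages.
sudo apt-get update && sudo apt-get install --only-upgrade libc6

# Long-running services keep the old library mapped until restarted:
# list processes still holding a deleted libc, then restart them or reboot.
sudo lsof -n | grep 'libc.*(deleted)' | awk '{print $1}' | sort -u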


Skeleton Key – A Worthy Name?

Vulnerabilities have been announced in recent months with scary names like Shellshock, which came after Heartbleed. There is also “Evil Twin” (used to describe a copy-cat wifi rogue AP deployment). This, so it seems, is a new art form: promoting vulnerabilities in a marketing sense. No doubt those vulnerabilities were worthy of attention by organisations, but the initial scare factor was higher than the technical analysis justified. “Evil Twin” is a genuine concern, as a very easy and very effective means of capturing personal data from wifi users, but the others only had the potential for impactful exploitation across a smaller percentage of organisations.

With both Shellshock and Heartbleed there were misleading reports and an over-playing of the risk element. Shellshock was initially touted by some as an exploit that completely compromised the target from a remote source, with no authentication challenge! It was far from that. With Skeleton Key, I dare say there will be reports suggesting that immediate remote access by any user to anything under Active Directory will be gained easily. Again – this is not the case. Far from it. But once inside a network, Skeleton Key does as it says – it unlocks everything that uses Active Directory for authentication, for those who know a specific password.

So “Skeleton Key”? Yes, but as with any key, you have to have possession of it first – and this is the tricky part. It is not the case that the doors are all unlocked.

As a very brief summary:

  • Admin rights are first needed to deploy Skeleton Key, but once it is deployed, unfettered access to all devices under Active Directory (AD) is granted.
  • Any AD account can be used, but the password is the one whose NTLM hash was injected at deployment. This password must be known by anyone looking to take advantage of a successful deployment.
  • The malware is not persistent – once a DC is rebooted (such as after a patch install) it needs to be re-deployed
  • IDS/IPS doesn’t help. Detective controls around logging are the only defence currently

A 12th January report by Dell’s SecureWorks Counter Threat Unit gives some details on a new malware pattern, one that appears to allow complete bypass of Active Directory authentication.

Skeleton Key is not a persistent malware package in that the behaviour seen thus far by researchers is for the code to be resident only temporarily. A restart of a Domain Controller will remove the malicious code from the system. Typically however, critical domain controllers are not rebooted frequently.

The Dell researchers initially observed a Skeleton Key sample named ole64.dll on a compromised network. They then found an older version, msuta64.dll, on a staging system that had previously been compromised by (probably) the same attackers.

ole.dll is another file name used by Skeleton Key. Windows systems include a legitimate ole32.dll file, but it is not related to this malware.

The malware is not compatible with 32-bit Windows versions or with Windows Server versions beginning with Windows Server 2012 (6.2).

Note that it is not the case that any password can be used to authenticate as any user in AD environments under Skeleton Key influence. The password configured by the attackers (via its NTLM hash) has to be known, but that one password can then be used to authenticate under any user account. Normal user access happens in the same way – there is no impact on existing user accounts and passwords.

Impact

To be clear, administrative rights are required for this malware to be introduced in the first place. But once in place, any service that uses Active Directory can be bypassed if it only uses single-factor authentication. Such services as VPN gateways and webmail will be freely accessible.

Most compromises of systems result in a listening service on a higher port, or a connection initiated out to a remote host. Skeleton Key succeeds in removing the controls implemented by a central authentication and user management system, thereby opening a whole network to unauthorised access with one step. In this way Skeleton Key could be seen as a kind of Swiss Army Knife for remote attackers, who could trick users or administrators into installing malicious software, then gain admin rights, then completely bypass Active Directory controls with only the second or third major step in their attack attempt.

In the investigation that unearthed Skeleton Key, a global company headquartered in London was found to be infected with a Remote Access Trojan (RAT), deployed in order to give the attackers continued access.

Mitigation

Two-factor authentication clearly resolves the remote unauthorised connection issues, but at the time of writing this is the only ready-made preventative control.

Detection is possible, but not from the network perspective. IDS/IPS isn’t going to be helpful because the behaviour of Skeleton Key does not involve network-based activity.

A YARA signature is given in the researchers’ write-up of the investigation. Aside from this, knowledge of the malware’s propagation behaviour can be used to configure Windows auditing, hopefully improving the odds of detection. More details on the behaviour, particularly the use of psexec.exe, are given in the Dell researchers’ write-up. Other signs can be:

• Process arguments that resemble NTLM hashes
• Unexpected process start/stops
• Domain replication issues

Comparison With Heartbleed And Shellshock

There is a similarity with Shellshock in that the initial attack vector isn’t as easy for attackers to deploy as was first publicised, but the effects of a successful first attack step can potentially be devastating. Skeleton Key is the more impactful in that an entire Windows domain can be compromised easily. With Shellshock, shell access is gained only on the machine that is compromised, and with only the privileges of the process that was compromised.

The main difference is that with Skeleton Key, administrator rights need to be gained on one system in the domain first. No such requirement exists with either Heartbleed or Shellshock. Heartbleed needed no privileges for a successful exploit, but the results of the exploit were unlikely to mean the immediate compromise of the network.

References

The Dell researchers’ detailed write-up:
• http://www.secureworks.com/cyber-threat-intelligence/threats/skeleton-key-malware-analysis/
SC Magazine’s coverage:
• http://www.scmagazine.com/skeleton-key-bypasses-authentication-on-ad-systems/article/392368/


Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with certain core skills groups whose abilities complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill”, or a product, or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP TippingPoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 a skill, whereas the previous version is another skill? So if a person has experience with 7.5, they are not qualified to apply to a shop where the previous version is used? OK, this is taking it to the extreme, but I dare say there have been cases where this analogy is applicable.

How long does it take a person to get “skilled up” with HP Arcsight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again, is Symantec CCS a skill? No – it’s a product. It’s not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – then they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell before, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment, I need to supply the tool with the IP addresses of targets, and I need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And I don’t want to configure the same test every time I run it – maybe this “templates” tab might be able to help me? Do I need a $4000, 2-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A product such as a Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI. It has been tailored to be intuitive to use. It’s the thin layer on top of the Vulnerability Management solution. The solution itself is much bigger than this. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help much with that bigger picture.

Vulnerability management has been around for decades. Then along came commercial products, which basically slapped a GUI on processes and practices that had existed for 20+ years, after which the jobs market decided to call the product the solution. The product is the skill now, whereas really it’s vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself but it also is a skill that should be built-in by default: a skill that should be inherent with all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas for large volumes of intellectual capital. These are core technologies. Say a person has 5 years+ experience of working in Unix environments as a system administrator and has shown interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic idea. HP uses “flex connectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders” and so on. But guess what? I understand English, and if I don’t know what the words mean, I can check in an online dictionary. I know what a SIEM is designed to do and I get the data flows and architecture concept. Network aggregation of syslog and Windows events is not an alien concept to me, and neither are the layers of the TCP/IP stack (a really basic requirement for all Analysts – or should be). Therefore I can adapt very easily to new tools in this space.
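
Strip away the branding and the underlying plumbing is mostly standard log forwarding. A sketch with rsyslog – the collector hostname is a placeholder, @@ means TCP and a single @ means UDP:

# Forward all syslog messages to a central collector.
echo '*.* @@siem.example.local:514' | sudo tee /etc/rsyslog.d/90-siem.conf
sudo service rsyslog restart

Whether the collecting end is branded a “connector”, a “forwarder” or anything else, this is the concept being dressed up.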

IPS/IDS and firewalls? Well, they’re not even especially complex devices. If you have ever set up Snort or iptables you’ll be fine with whatever product is out there. Recently another Consultant and I were asked to configure a TippingPoint device. We were up and running in 10 minutes. There were a few small items that we needed to check against the product documentation. I have 15+ years of experience in the field but the other guy is new. Nonetheless, he had configured another IPS product before. He was immediately up and running with the product – no problem. Of course, what to configure in the rule base – that is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security – it’s not specific to a product.
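
For anyone who hasn’t touched iptables, this is the level of complexity we are talking about – a minimal default-deny inbound policy (the port and interface choices are illustrative only):

# Drop inbound by default; allow loopback, established flows, and SSH.
# (On a remote box, add the ACCEPT rules before flipping the policy.)
sudo iptables -P INPUT DROP
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

The product wrapped around this idea barely matters; deciding what belongs in the rule base is the actual skill.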

I’ve seen some truly crazy job specifications. One I saw was Websense Specialist!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would probably be filled by a Websense “Olympian”. But what sort of job is that? Carpe Diem my friends, Carpe Diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, I don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resource and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire human resource able to cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.


Windows Vulnerability Management: A Cheat Sheet

So it’s been a long time since my last post. In fact it is almost an entire calendar year. But if we ask our delusional/non-cynical (delete as appropriate) colleagues, we discover that we should try to look to the positive. The last post was on 10th September 2013. So do not focus on the fact that it’s nearly 1 year since my last post; focus on the 1 week between this post and the 1-year mark. That is quite something! Remember people: positivity is key. 51 weeks instead of 52. All is good.

And to make up for the proportional shortfall in pearls of wisdom, I am hereby making freely available a spreadsheet I developed: it’s a compendium of information with regard to Windows 2008 Server vulnerabilities. In vulnerability management, the tool (McAfee VM 7.5 in this case, but it could be any tool) produces long lists of vulnerabilities. Some of these can be false positives. Others will be bona fide. Then, where there is vulnerability, there is risk, and the level of that risk to the business needs to be deduced.

The response of the organisation to the above-mentioned list of vulnerabilities is usually an IT response – a collaboration between security and operations, or at least it should be. Security typically is not in a position to deduce the risk and/or the remedial actions by themselves. This is where the spreadsheet comes in. All of the information is in one document, and the information that is needed to deduce factors such as impact, ease of exploit, risk, false positive, etc. – it’s all there.

Operating system security checklists are as thin on the ground (and in their content) as they are critical in the prevention (and also detection) world. Work in this area is seen as boring and unsexy. It doesn’t involve anything that could get you a place as a speaker at a Black Hat conference. There is nothing in this realm that involves some fanciful breach technique.

Overall, forget perimeter firewalls and anti-virus – operating system security is now the front line in the battle against unauthorised access.

The CIS Benchmarks are quoted by many as a source of operating system configuration security check items and fair play to the folks at CIS for producing these documents.

The columns are as such:

  • “CIS” : the CIS subtitle for the vulnerability
  • “Recommended VM Test”: yes or no, is this a test that is worth doing? (not all of them are worth it, some are outdated, some are just silly)
  • “McAfee Test Available”: yes or no
  • “McAfee Test ID”: numeric code of the test pattern with McAfee VM 7.5
  • “Comments”: summary of the CIS text that describes the vulnerability
  • “Test / Reg Key”: the registry key to check, or other test for the vulnerability
  • “Group Policy”: The GPO value related to the vulnerability if applicable
  • “Further comments”: Some rationale from experience. For example, likelihood of being a false positive, impact, risk, ease of exploit, how to exploit. Generally – more details that can help the Analyst when interfacing with Windows support staff.
  • “McAfee VM test notes”: This is a scratchpad for the Analyst to make notes, as a reference for any other Analysts who may be performing some testing. For example, if the test regularly yields a false positive, note the fact here.
  • “References”: URLs and other material that give some background information. Mostly these are relevant Microsoft Technet articles.


So if anyone would like a copy of this spreadsheet, please don’t hesitate to contact me. No – I will not spam you or share your contact details.

Information Security Careers: The Merits Of Going In-house

Job hunting in information security can be a confusing game. The lack of any standard nomenclature across the sector doesn’t help in this regard. Some of the terms used to describe open positions can be interpreted in wildly different ways. “Architect” is a good example. This term can have a non-technical connotation with some, and a technical connotation with others.

There are plenty of pros who came into security, perhaps via the network penetration testing route, who only ever worked for consultancies that provide services, mainly for businesses such as banks and telcos. The majority of such “external” services are centered around network penetration testing and application testing.

I myself started out in infosec on the consultancy path. My colleagues were whiz kids and some were well known in the field. Some would call them “hackers”, others “ethical” or “white hat” network penetration testers. This article does not cover ethics or pander to some of the verdicts that tend to be passed outside of the law.

Many Analysts and Consultants will face the decision to go in-house at some point in their careers, or remain in a service provider capacity. Others may be in-house and considering the switch to a consultancy. This post hopefully can help the decision making process.

The idea of going in-house and, for example, taking up an Analyst position with a big bank – it usually doesn’t hold much appeal with external consultants. The idea prevails that this type of position is boring or unchallenging. I also had this viewpoint, and it was largely derived from the few visions and sound bites I had witnessed behind the veil. However, what I discovered when I took up an Analyst position with a large logistics firm was that nothing could be further from the truth. Going in-house can benefit one’s career immensely and open one’s eyes to the real challenges in security.

Of course my experiences do not apply across the whole spectrum of in-house security positions. Some actually are boring for technically oriented folk. Different organisations do things in different ways. Some just use their security department for compliance purposes with no attention to detail. However there are also plenty that engage effectively with other teams such as IT operations and development project teams.

As an Analyst in a large, complex environment, the opportunity exists to learn a great deal more about security than one could as an external consultant. An external consultant’s exposure to an organisation’s security challenges will usually only come in the form of a network or application assessment, and even if the testing is conducted thoroughly and over a period of weeks, the view will be extremely limited. The test report is sent to the client, and it’s a common assumption that all of the problems described in the report can be easily addressed. In the vast majority of cases, nothing could be further from the truth. What becomes apparent at a very early stage in one’s life as an in-house Analyst is that very few vulnerabilities can be mitigated easily.

One of the main pillars of a security strategy is Vulnerability Management. The basis of any vulnerability management program is the security standard – the document that depicts how, from a security perspective, computer operating systems, DBMS, network devices, and so on, should be configured. So an Analyst will put together a list of configuration items and compose a security standard. Next they will meet with another team, usually IT operations, in an attempt to actually implement the standard in existing and future platforms. For many, this will be the point where they realize the real nature of the challenges.

Taking an example, the security department at a bank is attempting to introduce a Redhat Enterprise Linux security standard as a live document. How many of the configuration directives can be implemented across the board with an acceptable level of risk in terms of breaking applications or impacting the business in any way? The answer is “not many”. This will come as a surprise for many external consultants. Limiting factors can come from surprising sources. Enlightened IT ops and dev teams can open security’s eyes in this regard and help them to understand how the business really functions.

The whole process of vulnerability management, minus VM product marketeers’ diatribe, is basically detection, then deduce the risk, then take decisions on how to address the risk (i.e. don’t address the vulnerability and accept the risk, or address / work around the vulnerability and mitigate the risk). But as an external consultant, one will only usually get to hand a client a list of vulnerabilities and that will be the end of the story. As an in-house Security Analyst, one gets to take the process from start to finish and learn a great deal more in the process.

For a security consultant passing beyond the iron curtain, the best thing that can possibly happen to their career is that they find themselves in a situation where they get to interface with the enlightened ones in IT operations, network operations (usually there are a few in net ops who really know their security quite well), and application architects (and that’s where it gets to be really fun).

For the Security Consultant who has just metamorphosed into an in-house Analyst, it may well be the first time in their career that they get to encounter real business concerns. IT operations teams live in fear of disrupting applications that generate huge revenues per minute. The fear will be palpable, and it encourages the kind of professionalism that one may never have a direct need for as an external consultant. Generally, the in-house Analyst gets to experience in detail how the business translates into applications and then into servers, databases, and data flows. Then the risks to information assets seem much more real.

The internal challenge versus the external challenge in security is of course one of protection versus breaking-in. Security is full of rock stars who break into badly defended customer networks and then advertise the feat from the roof tops. In between commercial tests and twittering school yard insults, the rock stars are preparing their next Black Hat speech with research into the latest exotic sploit technique that will never be used in a live test, because the target can easily be compromised with simple methods.

However, the rock stars get all the attention, and security is all about reversing and fuzzing – so we hear. But the bigger challenge is not breaking in, it’s protection – and protection is a lot less exotic and sexy than breaking in. So there lies the main disadvantage of going in-house. It could mean less attention for the gifted Analyst. But for many, this won’t be such an issue, because the internal job is much more challenging and interesting, and it also lights up a CV, especially if the names are those in banking and telecoms.

How about going full circle? How about 3 years with a service provider, then 5 years in-house, then going back to consulting? Such a consultant is indeed a powerful weapon for consultancies and adds a whole new dimension for service providers (and their portfolio of services can be expanded). In fact such a security professional would be well positioned to start their own consultancy at this stage.

So in conclusion: going in-house can be the best thing that a Security Consultant can do with their careers. Is going in-house less interesting? Not at all. Does it mean you will get less attention? You can still speak at conferences probably.


One Infosec Accreditation Program To Bind Them All

May 2013 saw a furious debate ensue after a post by Brian Honan (Is it time to professionalize information security?) that suggested that things need to be improved, which was followed by some comments to the effect that accreditation should be removed completely.

Well, a suggestion doesn’t really do it for me. A strong demand doesn’t really do it either, in fact we’re still some way short. No – to advocate the strength of current accreditation schemes is ludicrous. But to then say that we don’t need accreditations at all is completely barking mad.

Brian correctly pointed out “At the moment, there is not much that can be done to prevent anyone from claiming to be an information security expert.” Never a truer phrase was spoken.

Other industry sectors have professional accreditation and it works. The stakes are higher in areas such as Civil Engineering and Medicine? Well – if practitioners in those fields screw up, it costs lives. True, but how is this different from infosec? Are the stakes really lower when we’re talking about our economic security? We have adversaries, and that makes infosec different or more complex?

Infosec is complex – you can bet ISC2’s annual revenue on that. But doesn’t that make security even more deserving of some sort of accreditation scheme that works and generates trust?

I used the word “trust”, and I used it because that’s what we’re ultimately trying to achieve. Our customers are C-levels, other internal departments, end users, home users, and so on. At the moment they don’t trust infosec professionals, and who can blame them? If we liken infosec to medicine: much of the time, forget about the treatment, we’re misdiagnosing. Infosec is still in the dark ages of drilling holes in heads in order to cure migraine.

That lack of trust is why, in so many organizations, security has been as marginalized as possible without actually being vaporized completely. It’s also why security has been reduced down to the level of ticks in boxes, and “just pass the audit”.

Even if an organization has the best security pros in the world working for them, they can still have their crown jewels sucked out through their up-link in an unauthorized kinda way. Some could take this stance and advocate against accreditation because, ultimately, the best real-world defenses can fail. However, nobody is pretending that the perfect, “say goodbye to warez – train your staff with us” security accreditation scheme can exist. But at the same time we do want to be able to configure detection and cover some critical areas of protection. To say that we don’t need training and/or accreditation in security is to say the world doesn’t need accreditation ever again. No more degrees and PhDs, no more CISSPs, and so on.

We certainly do need some level of proof of at least base level competence. There are some practices and positions taken by security professionals that are really quite deceptive and result in resources being committed in areas where there is 100% wastage. These poor results will emerge eventually. Maybe not tomorrow, but eventually the mess that was swept under the carpet will be discovered. We do need to eliminate these practices.

So what are we trying to achieve with accreditation? The link with IT needs to be re-emphasized. The full details of a proposal are covered in chapter 11 of Security De-engineering, but basically what we need first is to ensure the connection at the Analyst level with IT, mainly because of the information element of information technology and information security (did you notice the common word in IT and IT security? It’s almost as though there might be a connection between them). 80% of information is now held in electronic form. So businesses need expertise to assist them with protection of that information.

Security is about both business and IT of course. Everybody knows this even if they can’t admit it. There is an ISMS element that is document- and process-based, which is critical in terms of future-proofing the business and making security practices more resource-efficient. A baseline security standard is a critical document and cannot be left to gather dust on a shelf – it does need to be a “living” document. But the “M” in ISMS stands for Management, and as such it’s an area for…manage-ers. What is quite common is to find a security department of 6 or more Analysts who specialize in ISMS and audits. That does not work.

There has to be a connection with IT, and probably the best way to ensure that is to advocate that a person cannot metamorphose into a Security Analyst until they have served 5 years in IT operations/administration, or as a network engineer, DBA, or developer. Vendor certs such as those from IBM, Microsoft, and Cisco – although heavily criticized – can serve to indicate some IT experience, but the time-served element, with a signed-off testimonial from a referee, is critical.

There can be an entrance exam for life as an Analyst. This exam should cover a number of different bases. Dave Shackleford’s assertion that creative thinkers are needed is hard to argue with. Indeed, what I think is needed is a demonstration of such creativity, and some evidence of coding experience goes a long way towards this.

Flexibility is also critical. Typically IT ops folk cover one major core technology such as Unix or Windows or Cisco. Infosec needs people who can demonstrate flexibility and answer security questions in relation to two or more core technologies. As an Analyst, they can have a specialization with two major platforms plus an area such as application security, but a broad cross-technology base is critical. Between the members of a team, each one can have a specialization, but the members of the team have knowledge that complements each other, and collectively the full spectrum of business security concerns can be covered.

There can be specializations but also proportional rewards for Analysts who can demonstrate competence in increasing numbers of areas of specialization. There is such a thing as a broad-base experienced Security Analyst and such a person is the best candidate for niche areas such as forensics, as opposed to a candidate who got a forensics cert, learned how to use Encase, plastered forensics on their CV, and got the job with no other Analyst experience (yes – it does happen).

So what emerges is a pattern for an approximate model of a “graduation”-based career path. And then from 5 years time-served as an Analyst, there can be another exam for graduation into the position of Security Manager or Architect. This exam could be something similar to the BCS’s CISMP or ISACA’s CISA (no – I do not have any affiliations with those organizations and I wasn’t paid to write this).

Nobody ever pretended that an accreditation program can solve all our problems, but we do need base assurances in order for our customers to trust us.


Oracle 10g EE Installation On Ubuntu 10

This is all 32 bit, no 64 bit software will be covered here.

To get Oracle 10g in 2013 requires a support account of course – only 11g is available now. Basically I needed Oracle 10 because it’s still quite heavily used in global business circles. My security testing software may run into Oracle 10 (in fact, it already has several times).

After considerable problems with library-linking failures between Oracle 10g and Ubuntu 12 (12.04.2), I decided to save time by backdating to more compatible libraries: the install here is with Ubuntu 10.04.4 Lucid Lynx (this is only for dev work – trust me, I wouldn’t dream of using this in production). With this older version the install went like a dream.

Java

Note that many install guides insist on installing Oracle’s Java or some other JVM. I found that this was not necessary.

Other Libraries

The libstdc++5 library will be required. I eventually found it here…

http://old-releases.ubuntu.com/ubuntu/pool/universe/g/gcc-3.3/libstdc++5_3.3.6-17ubuntu1_i386.deb

and then …

sudo dpkg -i libstdc++5_3.3.6-17ubuntu1_i386.deb

This process installs the library in the right place (at least, where the Oracle installer looks for it).
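To confirm the package landed, and that the shared object is where the installer will look for it (the path here is where the i386 package put it for me – verify on your own system):

dpkg -l | grep libstdc++5
ls -l /usr/lib/libstdc++.so.5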

Users and Groups

sudo groupadd oinstall
sudo groupadd dba
sudo groupadd nobody
sudo useradd -m oracle -g oinstall -G dba
sudo passwd oracle

Kernel Parameters

In /etc/sysctl.conf …

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Reload to take effect…

root@vm-ubuntu-11:~# /sbin/sysctl -p

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Change Limits

vi /etc/security/limits.conf

Add the following …

* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
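These limits are only applied at login via pam_limits. On Ubuntu that is normally already wired up, but if a fresh oracle login doesn’t show the new values (check with ulimit -n), it may be worth making sure a line like this is present in /etc/pam.d/login – a just-in-case check, not always needed:

session required pam_limits.so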

Change The System In A Suse Kinda Way

(Ubuntu isn’t a supported distro for Oracle DB and some subtle changes are needed)

sudo ln -s /usr/bin/awk /bin/awk
sudo ln -s /usr/bin/rpm /bin/rpm
sudo ln -s /lib/libgcc_s.so.1 /lib/libgcc_s.so
sudo ln -s /usr/bin/basename /bin/basename

Oracle Base Directory

I went with more typical Oracle-style directories here, but you can choose what’s best for you, as long as the ownership is set correctly…

sudo mkdir -p /u01/oracle
sudo chown -R oracle:oinstall /u01
sudo chmod -R 770 /u01

Update default profile

vi /etc/profile

Add the following …

export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/product/10.2.0/db_1
export ORACLE_SID=orcl10
export PATH=$PATH:$ORACLE_HOME/bin
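To pick the new variables up in the current shell, rather than logging out and back in:

. /etc/profile
echo $ORACLE_HOME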

Convince Oracle that Ubuntu is Redhat

sudo vi /etc/redhat-release

Add this …
“Red Hat Enterprise Linux AS release 3 (Taroon)”

Run The Installer

You will have unzipped the zip file from Oracle – it can be anywhere on the system, let’s say /opt.
So after unzipping you will see a /opt/database directory.
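If you haven’t unzipped it yet, something like the following will do – the zip filename here is illustrative, use whatever Oracle actually supplied:

cd /opt
unzip 10201_database_linux32.zip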

sudo chown -R oracle:oinstall /opt/database

Then what’s needed? Start up an X environment (su to oracle and startx), open a Terminal and…

/opt/database/runInstaller

Installer Options

Do not select the “create starter database” option here. For the Installation Type option, selecting Enterprise Edition worked for me.

The installer will ask you to run two scripts as root. It is wise to follow this advisory.

The install proceeded fast. I only had one error, related to the RDBMS compilation (“Error in invoking target ‘all_no_orcl ihsodbc’ of makefile ‘/u01/oracle/product/10.2.0/db_1/rdbms/lib/ins_rdbms.mk'”), but this was because I had not yet installed libstdc++5 as described above.

Create a Listener

So what you have now is a database engine but with no database to serve, and no Listener to process client connections to said database.

Again, within the oracle-owned X environment…

netca

and default options will work here, just to get a database working. netca is in $ORACLE_HOME/bin and therefore in the shell $PATH. Easy.

Create A Database

First up you need to find the GID for the oinstall group you created earlier…

cat /etc/group | grep oinstall

In my case it was 1001.

As root (UID=0), hose that GID into the /proc hugetlb_shm_group thus (1001 being the GID found above – use yours)…

echo "1001" > /proc/sys/vm/hugetlb_shm_group
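That echo won’t survive a reboot. If you want it to persist, the equivalent sysctl key can go in /etc/sysctl.conf alongside the earlier kernel parameters (again, 1001 is my example GID):

vm.hugetlb_shm_group = 1001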

Again, as oracle user, do this…

dbca

…and again, default options will work in most cases here.

The database name should match the ORACLE_SID environment variable you specified earlier.

Database Service Control

The install script created an oratab file under /etc.
It will look something like…

root@ubuntu:~# cat /etc/oratab
....[comments]
orcl10:/u01/oracle/product/10.2.0/db_1:Y

The last field of the stanza (the “Y”) means “yes, please start this SID on system start”. This is your choice of course.

dbstart is a shell script under $ORACLE_HOME/bin. In most cases one line needs to be changed: substitute your $ORACLE_HOME for the “/ade/vikrkuma_new/oracle” in the “ORACLE_HOME_LISTNER=” line that follows the comment “Set this to bring up Oracle Net Listener”:

# Set this to bring up Oracle Net Listener

ORACLE_HOME_LISTNER=/ade/vikrkuma_new/oracle

if [ ! $ORACLE_HOME_LISTNER ] ; then
echo "ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener"
else
LOG=$ORACLE_HOME_LISTNER/listener.log
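If you’d rather not edit the file by hand, a one-liner like this should make the substitution – assuming $ORACLE_HOME is set as above, run it as the oracle user:

sed -i "s|^ORACLE_HOME_LISTNER=.*|ORACLE_HOME_LISTNER=$ORACLE_HOME|" $ORACLE_HOME/bin/dbstart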

And that should be sufficient to just get a database up and running.

To shutdown the database, Oracle provides $ORACLE_HOME/bin/dbshut and this won’t require any editing.

“service” Control Under Linux

Personally I like to be able to control the Oracle database service with the service binary, as in:
service oracle start
and
service oracle stop

The script here to go under /etc/init.d was the same as my script for Oracle Database 11g…

root@ubuntu:~# cat /etc/init.d/oracle
#!/bin/bash
#
# Run-level Startup script for the Oracle Instance and Listener
#
### BEGIN INIT INFO
# Provides: Oracle
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Startup/Shutdown Oracle listener and instance
### END INIT INFO

ORA_HOME="/u01/oracle/product/10.2.0/db_1"
ORA_OWNR="oracle"

# If the executables do not exist -- display error
if [ ! -f $ORA_HOME/bin/dbstart -o ! -d $ORA_HOME ]
then
    echo "Oracle startup: cannot start"
    exit 1
fi

# Depending on parameter -- startup, shutdown, restart
# of the instance and listener, or usage display
case "$1" in
    start)
        # Oracle listener and instance startup
        echo -n "Starting Oracle: "
        su - $ORA_OWNR -c "$ORA_HOME/bin/dbstart $ORA_HOME"
        su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl start"

        # Optional: for Enterprise Manager software only
        su - $ORA_OWNR -c "$ORA_HOME/bin/emctl start dbconsole"

        touch /var/lock/oracle
        echo "OK"
        ;;
    stop)
        # Oracle listener and instance shutdown
        echo -n "Shutdown Oracle: "

        # Optional: for Enterprise Manager software only
        su - $ORA_OWNR -c "$ORA_HOME/bin/emctl stop dbconsole"

        su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl stop"
        su - $ORA_OWNR -c "$ORA_HOME/bin/dbshut $ORA_HOME"
        rm -f /var/lock/oracle
        echo "OK"
        ;;
    reload|restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 start|stop|restart|reload"
        exit 1
        ;;
esac
exit 0

Most likely the only change required will be the ORA_HOME setting, which is of course just your $ORACLE_HOME.
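Make the script executable and, if you want it started at the runlevels declared in the LSB header, register it with update-rc.d:

sudo chmod +x /etc/init.d/oracle
sudo update-rc.d oracle defaults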

Quick Test

So after all this, how do we know our database is up and running?
Try a local test, as the oracle user…

sqlplus / as sysdba

This should drop you into the antiquated text-based app, looking something like…

oracle@ubuntu:~$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 21 07:57:43 2013

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL>
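As a final sanity check you can query v$instance – a scripted version as the oracle user might look like this, and STATUS should come back as OPEN:

sqlplus -s / as sysdba <<EOF
select instance_name, status from v\$instance;
exit
EOF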

Credits

This post is based to some extent on the following two posts:
http://www.excession.org.uk/blog/installing-oracle-on-ubuntu-karmic-64-bit.html
and
http://sqlandplsql.com/2011/12/02/installing-oracle-11g-on-ubuntu/

Some parts of those posts didn’t work for me (I had lots of linking errors), but nonetheless thanks go out to their authors.


Scangate Re-visited: Vulnerability Scanners Uncovered

I have covered VA tools before, but I feel that one year later the same misconceptions prevail. The notion that VA tools really can be used to give a decent picture of vulnerability is still heavily embedded, and that notion in itself presents a serious vulnerability for businesses.

A more concise run-down of the functionality of VA warez may be worth a try. At least let’s give it one last shot. On second thoughts, no, don’t shoot anything.

Actually, forget “positive” or “negative” views on VAs before reading this. I am just going to present the facts based on what I know myself, and of course I’m open to logical, objective discussion. I may have missed something.

Why the focus on VA? Well, the tools are still commonplace and heavily used, and I don’t believe that’s in our best interests.

What I discovered many years ago (it was actually 2002 at first) was that discussions around these tools can evoke some quite emotional responses. “Emotional” you quiz? Yes. I mean, when you think about it, whole empires have been built using these tools. The tools are widespread in security and used as the basis of corporate VM programs, and VM market revenues run at around 1 billion USD annually. Songs and poems have been written about VAs – OK, I can’t back that up, but careers have been built on them, and whole enterprise-level security software suites have been built around a nasty open-source VA engine.

I presented on the subject of automation in VA all those years ago, and put forward the notion that running VA tools doesn’t carry much more value than something like this: nmap -v -sS -sV <targets>. Any Security Analyst worth their weight in spam would see open ports and service banners, and quickly deduce vulnerability from this limited perspective. “Limited”, maybe, but is a typical VA tool in a better position to interrogate a target autotragically?
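To make that concrete: this is the sort of manual banner grab an Analyst can do by hand with netcat, here against a web port (target placeholder mine)…

printf 'HEAD / HTTP/1.0\r\n\r\n' | nc <target> 80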

One pre-qualifier I need to throw out is that the type of scanners I will discuss here are Nessus-like scanners, the modus operandi of which is to scan a target by unauthenticated means. Nessus itself isn’t the main focus, but it’s the best-known and most widely used of these tools, and the others present no major advantages over it. In fact Nessus is really as good as it gets: there’s a highly limited potential with these tools, and Nessus reaches that limit.

Over the course of my infosec career I have had the privilege of being coerced into using VAs extensively, and I have spent many long hours investigating false positives. In many cases I set up a dummy Linux target and used a packet sniffer to deduce what the tool was doing. In summary, the findings were approximately:

  • Out of the 1000s of tests, or “patterns”, configured in the tools, only a few have the potential to result in accurate/useful findings. Some examples of these are SNMP community string tests, and tests for plain text services (e.g. telnet, FTP).
  • The vast majority of the other tests merely grab a service “banner”. For example, the tool port scans, finds an open port 80 TCP, then runs a test to grab a service banner (e.g. Apache 2.2.22, mickey mouse plug-in, bla bla). I was sort of expecting the tool to do some more probing having found a specific service and version, but in most cases it does not.
  • The tool, having found what it thinks is a certain application layer service and version, then correlates its finding with its database of publicly disclosed vulnerabilities for the detected service.

Even for some of the plain text services, some of the tests that do have the potential to reveal useful findings have been botched by the developers. For example, tests for anonymous FTP only work with one very specific flavour of FTP daemon; other FTP daemons return different messages for successful anonymous logins, and the tool does not accommodate this.

Also, what happens if a service is moved from its default port? I had some spectacular failures running Nessus against an FTP service on port 1980 TCP (it usually listens on port 21), even with different timing options. Nessus uses an nmap engine for port scanning, but nmap by itself is usually able to find non-default-port services using default settings.
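For comparison, a plain nmap version scan against that same odd port – nmap’s service probes match on responses rather than port numbers, so it will usually still identify the daemon:

nmap -v -sS -sV -p 1980 <target>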

So in summary, what the VA tools mostly do is report – maybe – that you are running ridiculous unencrypted blast-from-the-past services, or old, down-level services. Really, I would hope security teams don’t need to spend 25K USD on an enterprise solution to tell them this.

False positives are one thing, but false negatives are quite another. Popular magazines always report something like a 50% success rate in finding vulnerabilities in staged tests. Why is it always 50%? Remember also that the product under test is usually one from a vendor who pays for a full-spread ad in that magazine.

Putting numbers to false negatives makes little sense with huge, complex software packages of millions of lines of source code. However, it occurred to me not so long ago, whilst doing some white box testing on a client’s critical infrastructure: how many of the vulnerabilities under test could possibly be discovered by use of a VA tool? In the case of Oracle Database the answer was less than 5%. And when we’re talking Oracle, we’re usually talking critical – as in crown-jewels critical.

If nothing else, the main thing I hope the reader takes from this discussion concerns expectation. The expectation set by the marketing around VA tools is that they really can be used to accurately detect a wide range of vulnerability, and that you can bet your business on them by using them to test critical infrastructure. Ladies and gentlemen: please don’t be deceived by this!

Can you safely replace manual testing with use of these tools? Yes, but only if the target has zero value to the business.
