Ghost – Buffer Overflow in glibc Library

In the early hours of 28th January GMT+0 2015, news started to go mainstream about a vulnerability in the open source glibc library. The issue has been given the vulnerability marketing term “Ghost” (the name derives from the fact that the vulnerability arises from an exploitable bug in the gethostbyname() function).

The buffer overflow vulnerability has been given the CVE reference CVE-2015-0235.

glibc is not a program as such. It’s a library that is shared among many programs, and it is most commonly found on systems running some form of Linux as an operating system.

Red Hat mention on their site that DNS resolution uses the gethostbyname() function, and this path has reportedly been shown to be vulnerable. This makes the Ghost issue much more critical than many are claiming.

As with most things in security, the answer to the question “is it dangerous for me” is “it depends” – sorry – there is no simple binary yes or no here. Non-vulnerable versions of glibc have been available for some time now, so if the installation of the patch (read the last section on “Mitigation” for details) is non-disruptive, then just upgrade.

Most advisories are recommending the immediate installation of a patch, but read on…

Impact

glibc is a core component of Linux. The vulnerability impacts most Linux distributions released circa 2000 to mid-2013. This means that, similar to Heartbleed, it affects a wide range of applications that happen to call the vulnerable function.

To give credit to the open source dev team, the bug was fixed in 2013, but some vendors continued to use older branches of glibc.

If the issue is successfully exploited, unauthorised commands can be executed locally or remotely.

To be clear: this bug is remotely exploitable. So far at least one attack vector has been identified and tested successfully. The Exim mail server uses the glibc library, and it was found to be remotely exploitable through a listening SMTP service. But again – it’s not as simple as “I run Exim therefore I am vulnerable”. The default configuration of Exim is not vulnerable.
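As a rough first check on a Debian-style Exim install (the path here is an assumption – adjust for your layout), you can grep the configuration for the non-default HELO verification settings that were reported as the trigger for this vector:

# Look for the non-default HELO verification options (the reported Exim trigger)
grep -rE 'helo_(try_)?verify_hosts|verify[[:space:]]*=[[:space:]]*helo' /etc/exim4/ 2>/dev/null

If this returns nothing, the configuration at least lacks the known trigger – which is not the same as a guarantee of safety.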

Just as with Shellshock, a handful of attack vectors were identified immediately and more began to surface over the following weeks. The number of “channels”, or “attack vectors” by which the vulnerability may be exploited determines the likelihood that an attack may be attempted against an organisation.

Also, just as with Shellshock, it cannot be said with any decisiveness whether or not the exploit of the issue gains immediate root access. It is not chiselled in stone that mail servers need to run with root privileges, but if the mail server process is running as root, then root will be gained by a successful exploit. It is safer to assume that immediate root access would be gained.

More recent versions of the Exim mail server on Ubuntu 14 run under the privileges of the “Debian-exim” user, which is not associated with a command shell; even so, default Ubuntu installations will be easy for moderately skilled attackers to compromise completely.
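A quick way to verify both points on a given box (assuming a Debian/Ubuntu exim4 package layout):

# Which user is the exim4 daemon running as?
ps -o user= -C exim4 | sort -u

# Does that account have a usable login shell? (the last field of the passwd entry)
grep '^Debian-exim' /etc/passwd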

So there is a possibility that remote, unauthenticated access can be gained with root privileges.

Just as with Shellshock, we can expect there to be automated, bot-initiated scanners that look for signs of exploitable services on the public Internet and attempt to gain local access if a suitable candidate is found.

At the time of writing, Rapid7 have said they are working on a Metasploit test for the condition which, as well as allowing organisations to test their vulnerability to Ghost, of course also permits lower-skilled attackers to exploit the issue.

Another example of an exploitable channel is Rapid7’s own tool Nexpose which, if running on an Ubuntu 12 appliance, will be vulnerable. However, this will not be remotely exploitable.

Red Hat mention on their site that DNS resolution also uses the gethostbyname() function and that “to exploit this vulnerability, an attacker must trigger a buffer overflow by supplying an invalid hostname argument to an application that performs a DNS resolution”. If this is true, then the seriousness of Ghost increases considerably, because DNS is a commonly “exposed” service through Internet-facing firewalls.

A lot of software on Linux systems uses glibc. From this point of view, it’s likely there could be a lot more attack vectors appearing with Ghost as compared with Shellshock. Shellshock was a BASH vulnerability, and BASH is present on most Linux systems; however, the accessibility of BASH from a remote viewpoint is likely to be less than that of glibc.

Risk Evaluation

There is “how can it be fixed?”, but first there is “what should we do about it?” – the answer to which depends on business risk.

When the information security community declares the existence of a new vulnerability, the risk to organisations cannot categorically be given even base indicators of “high”, “medium”, or “low”, mainly because different organisations, even in the same industry sector, can have radically different exposure to the vulnerability. Factors such as network segmentation can mitigate the vulnerability, whereas the commonplace DMZ plus flat RFC 1918 private space usually vastly increases the potential financial impact of an attack.

Take a situation where a vulnerable Exim mail service listens exposed to the public Internet and sits in a DMZ with a flat, un-segmented architecture: the risk here is considerable. Generally, with most networks, once one host falls, others can fall rapidly after it.

Each organisation will be different in terms of risk. We cannot even draw similarities between organisations in the same industry sector. Take a bank for example: if a bank exposes a vulnerable service to the Internet, and has a flat network as described above, it can be a matter of minutes before a critical database is compromised.

As always, the cost of patching should be evaluated against the cost of not patching. In the case of Ghost, the former will be a lot cheaper in many cases. Remember that increasing numbers of attack vectors will become apparent with time. A lot of software uses glibc! Try uninstalling glibc on a Linux system using a package manager such as aptitude, and this fact becomes immediately apparent.
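To see this for yourself without breaking anything, a dry run (the -s switch simulates only) on a Debian-based system shows just how much depends on glibc:

# Simulate removal of glibc – nothing is actually removed
apt-get -s remove libc6 | head -n 30

# Or list the reverse dependencies directly
apt-cache rdepends libc6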

Mitigation

Ubuntu versions newer than 12.04 have already been upgraded to a non-vulnerable glibc library. Older Ubuntu versions (as well as other Linux distributions) are still using older versions of glibc; for these, either a patch is already available or one is on the way.

Note that several services may be using glibc so patching should not take place until all dependencies are known and impacts evaluated.

The machine will need a reboot after the upgrade of glibc.
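As a quick first check of where a given host stands, query the installed glibc version – bearing in mind that distributions backport fixes, so the package changelog is more authoritative than the upstream version number alone. A minimal sketch:

# Upstream glibc version (the bug was fixed upstream in 2.18)
ldd --version | head -n 1

# Debian/Ubuntu: check the package changelog for the CVE
zgrep CVE-2015-0235 /usr/share/doc/libc6/changelog.Debian.gz

# RHEL/CentOS
rpm -q --changelog glibc | grep CVE-2015-0235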

Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one, but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with certain core skill groups whose abilities complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill” or a product or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP Tippingpoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 a skill, whereas the previous version is another skill? So if a person has experience with 7.5, are they not qualified to apply to a shop where the previous version is used? OK, this is taking it to the extreme, but I dare say there have been cases where this analogy is applicable.

How long does it take a person to get “skilled up” with HP ArcSight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again: is Symantec CCS a skill? No – it’s a product, not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell before, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment I need to supply the tool with IP addresses of targets, and I need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And if I don’t want to configure the same test every time I run it, maybe this “templates” tab might be able to help me? Do I need a $4000 2-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A product such as a Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI, tailored to be intuitive to use – the thin layer on top of the solution. The solution itself is much bigger than this. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help much with the bigger picture.

Vulnerability management has been around for years. Then along came commercial products, which basically just slapped a GUI on processes and practices that had existed for 20+ years, after which the jobs market decided to call the product the solution. The product is the skill now, whereas it’s really vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself but it also is a skill that should be built-in by default: a skill that should be inherent with all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas that carry large volumes of intellectual capital. These are core technologies. Say a person has 5+ years’ experience of working in Unix environments as a system administrator and has shown an interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic idea. HP uses “flex connectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders” and so on. But guess what? I understand English, but if I don’t know what the words mean, I can check in an online dictionary. I know what a SIEM is designed to do and I get the data flows and architecture concept. Network aggregation of syslog and Windows Events is not an alien concept to me, and neither are any of the layers of the TCP/IP stack (a really basic requirement for all Analysts – or it should be). Therefore I can adapt very easily to new tools in this space.

IPS/IDS and Firewalls? Well, they’re not even very functional devices. If you have ever set up Snort or iptables you’ll be fine with whatever product is out there. Recently another Consultant and I were asked to configure a Tippingpoint device. We were up and running in 10 minutes. There were a few small items that we needed to check against the product documentation. I have 15+ years’ experience in the field but the other guy is new; nonetheless, he had configured another IPS product before and was immediately up and running with the product – no problem. Of course, what to configure in the rule base – that is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security – it’s not specific to a product.

I’ve seen some truly crazy job specifications. One I saw was for a Websense Specialist!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would be filled by a Websense “Olympian” probably. But what sort of job is that? Carpe Diem my friends, Carpe Diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, I don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resource and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire human resource able to cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.

Windows Vulnerability Management: A Cheat Sheet

So it’s been a long time since my last post. In fact it is almost an entire calendar year. But if we ask our delusional/non-cynical (delete as appropriate) colleagues, we discover that we should try to look to the positive. The last post was on 10th September 2013. So do not focus on the fact that it’s nearly 1 year since my last post; focus on the 1 week between this post and the 1 year mark. That is quite something! Remember people: positivity is key. 51 weeks instead of 52. All is good.

And to make up for the proportional shortfall of pearls of wisdom, I am hereby making freely available a spreadsheet I developed: it’s a compendium of information about Windows 2008 Server vulnerabilities. In vulnerability management, the tool (McAfee VM 7.5 in this case, but it could be any tool) produces long lists of vulnerabilities. Some of these can be false positives; others will be bona fide. Then, where there is vulnerability, there is risk, and the level of that risk to the business needs to be deduced.

The response of the organisation to the above-mentioned list of vulnerabilities is usually an IT response – a collaboration between security and operations, or at least it should be. Security is typically not in a position to deduce the risk and/or the remedial actions by themselves. This is where the spreadsheet comes in. All of the information is in one document, and the information needed to deduce factors such as impact, ease of exploit, risk, and false positives is all there.

Operating system security checklists are as thin on the ground (and in their content) as they are critical in the prevention (and also detection) world. Work in this area is seen as boring and unsexy. It doesn’t involve anything that could get you a place as a speaker at a Black Hat conference. There is nothing in this realm that involves some fanciful breach technique.

Overall, forget perimeter firewalls and anti-virus – operating system security is now the front line in the battle against unauthorised access.

The CIS Benchmarks are quoted by many as a source of operating system configuration security check items and fair play to the folks at CIS for producing these documents.

The columns are as follows:

  • “CIS” : the CIS subtitle for the vulnerability
  • “Recommended VM Test”: yes or no, is this a test that is worth doing? (not all of them are worth it, some are outdated, some are just silly)
  • “McAfee Test Available”: yes or no
  • “McAfee Test ID”: numeric code of the test pattern with McAfee VM 7.5
  • “Comments”: summary of the CIS text that describes the vulnerability
  • “Test / Reg Key”: the registry key to check, or other test for the vulnerability (see the example after this list)
  • “Group Policy”: The GPO value related to the vulnerability if applicable
  • “Further comments”: Some rationale from experience. For example, likelihood of being a false positive, impact, risk, ease of exploit, how to exploit. Generally – more details that can help the Analyst when interfacing with Windows support staff.
  • “McAfee VM test notes”: This is a scratchpad for the Analyst to make notes, as a reference for any other Analysts who may be performing some testing. For example, if the test regularly yields a false positive, note the fact here.
  • “References”: URLs and other material that give some background information. Mostly these are relevant Microsoft Technet articles.
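As an illustration of the kind of entry that goes in the “Test / Reg Key” column, many CIS-style items boil down to a registry query that can be run straight from a command prompt on the target. This particular key is just an example, not an extract from the sheet:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RestrictAnonymous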


So if anyone would like a copy of this spreadsheet, please don’t hesitate to contact me. No – I will not spam you or share your contact details.


Oracle 10g EE Installation On Ubuntu 10

This is all 32-bit; no 64-bit software will be covered here.

To get Oracle 10g in 2013 requires a support account of course – only 11g is available now. Basically I needed Oracle 10 because it’s still quite heavily used in global business circles. My security testing software may run into Oracle 10 (in fact, it already has several times).

After some considerable problems with library-linking failures with Oracle 10g and Ubuntu 12 (12.04.2), I decided to just save time by backdating and using more compatible libraries. So the install here is with Ubuntu 10.04.4 Lucid Lynx. The install with this older version (this is only for dev work; trust me, I wouldn’t dream of using this in production) went like a dream.

Java

Note that many install guides insist on installing Oracle’s Java or some other JVM. I found that this was not necessary.

Other Libraries

Then libstdc++5 will be required. I found it here eventually…

http://old-releases.ubuntu.com/ubuntu/pool/universe/g/gcc-3.3/libstdc++5_3.3.6-17ubuntu1_i386.deb

and then, to fetch and install it…
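wget http://old-releases.ubuntu.com/ubuntu/pool/universe/g/gcc-3.3/libstdc++5_3.3.6-17ubuntu1_i386.deb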

sudo dpkg -i libstdc++5_3.3.6-17ubuntu1_i386.deb

This process installs the library in the right place (at least where the installer for Oracle looks).

Users and Groups

sudo groupadd oinstall
sudo groupadd dba
sudo groupadd nobody
sudo useradd -m -g oinstall -G dba oracle
sudo passwd oracle

Kernel Parameters

In /etc/sysctl.conf …

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Reload to take effect…

root@vm-ubuntu-11:~# /sbin/sysctl -p

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Change Limits

vi /etc/security/limits.conf

Add the following …

* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536

Change The System In A Suse Kinda Way

(Ubuntu isn’t a supported distro for Oracle DB and some subtle changes are needed)

sudo ln -s /usr/bin/awk /bin/awk
sudo ln -s /usr/bin/rpm /bin/rpm
sudo ln -s /lib/libgcc_s.so.1 /lib/libgcc_s.so
sudo ln -s /usr/bin/basename /bin/basename

Oracle Base Directory

I went with more typical Oracle style directories here for some reason, but you can choose what’s best for you, as long as the ownership is set correctly (watch this space)…

sudo mkdir -p /u01/oracle
sudo chown -R oracle:oinstall /u01
sudo chmod -R 770 /u01

Update default profile

vi /etc/profile

Add the following …

export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/product/10.2.0/db_1
export ORACLE_SID=orcl10
export PATH=$PATH:$ORACLE_HOME/bin

Convince Oracle that Ubuntu is Redhat

sudo vi /etc/redhat-release

Add this …
“Red Hat Enterprise Linux AS release 3 (Taroon)”

Run The Installer

The zip file from Oracle – you will have unzipped it; it can be anywhere on the system, let’s say /opt.
So after unzipping you will see a /opt/database directory.

sudo chown -R oracle:oinstall /opt/database

Then what’s needed? Start up an X environment (su to oracle and startx), open a Terminal and…

/opt/database/runInstaller

Installer Options

Do not select the “create starter database” option here. For the Installation Type option, selecting Enterprise Edition worked for me.

The installer will ask you to run 2 scripts as root. It is wise to follow this advice.

The install proceeded fast. I only had one error, related to the RDBMS compilation (“Error in invoking target ‘all_no_orcl ihsodbc’ of makefile ‘/u01/oracle/product/10.2.0/db_1/rdbms/lib/ins_rdbms.mk’”), but this was because I had not installed the libstdc++5 library (see “Other Libraries” above).

Create a Listener

So what you have now is a database engine but with no database to serve, and no Listener to process client connections to said database.

Again, within the Oracle-owned X environment…

netca

and default options will work here, just to get a database working. netca is in $ORACLE_HOME/bin and therefore in the shell $PATH. Easy.
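To confirm the listener actually came up, query its status:

lsnrctl status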

Create A Database

First up you need to find the GID for the oinstall group you created earlier…

cat /etc/group | grep oinstall

In my case it was 1001.

As root (UID=0), hose this GID into the /proc hugetlb_shm_group setting thus (substituting your own GID if it differs)…

echo "1001" > /proc/sys/vm/hugetlb_shm_group

Again, as oracle user, do this…

dbca

…and again, default options will work in most cases here.

The database name should match the ORACLE_SID environment variable you specified earlier.

Database Service Control

The install script created an oratab file under /etc.
It may look something like…

root@ubuntu:~# cat /etc/oratab
....[comments]
orcl10:/u01/oracle/product/10.2.0/db_1:Y

The last part of the stanza (the “Y”) means “yes, start this SID on system start”. This is your choice of course.

dbstart is a shell script under $ORACLE_HOME/bin. One line needs to be changed here in most cases: substitute your $ORACLE_HOME in place of “/ade/vikrkuma_new/oracle” in the line “ORACLE_HOME_LISTNER=/ade/vikrkuma_new/oracle”, which appears after the comment “Set this to bring up Oracle Net Listener”:

# Set this to bring up Oracle Net Listener

ORACLE_HOME_LISTNER=/ade/vikrkuma_new/oracle

if [ ! $ORACLE_HOME_LISTNER ] ; then
echo "ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener"
else
LOG=$ORACLE_HOME_LISTNER/listener.log
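If you would rather make that substitution non-interactively, a one-liner such as this should do it (assuming ORACLE_HOME is exported in your shell as per the /etc/profile entries earlier):

sed -i "s|^ORACLE_HOME_LISTNER=.*|ORACLE_HOME_LISTNER=$ORACLE_HOME|" $ORACLE_HOME/bin/dbstart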

And that should be sufficient to just get a database up and running.

To shutdown the database, Oracle provides $ORACLE_HOME/bin/dbshut and this won’t require any editing.

“service” Control Under Linux

Personally I like to be able to control the Oracle database service with the service binary, as in:
service oracle start
and
service oracle stop

The script here to go under /etc/init.d was the same as my script for Oracle Database 11g…

root@ubuntu:~# cat /etc/init.d/oracle
#!/bin/bash
#
# Run-level Startup script for the Oracle Instance and Listener
#
### BEGIN INIT INFO
# Provides: Oracle
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Startup/Shutdown Oracle listener and instance
### END INIT INFO

ORA_HOME="/u01/oracle/product/10.2.0/db_1"

ORA_OWNR="oracle"

# if the executables do not exist -- display error

if [ ! -f $ORA_HOME/bin/dbstart -o ! -d $ORA_HOME ]
then
echo "Oracle startup: cannot start"
exit 1
fi

# depending on parameter -- startup, shutdown, restart
# of the instance and listener or usage display

case "$1" in
start)
# Oracle listener and instance startup
echo -n "Starting Oracle: "
su - $ORA_OWNR -c "$ORA_HOME/bin/dbstart $ORA_HOME"
su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl start"

#Optional : for Enterprise Manager software only
su - $ORA_OWNR -c "$ORA_HOME/bin/emctl start dbconsole"

touch /var/lock/oracle
echo "OK"
;;
stop)
# Oracle listener and instance shutdown
echo -n "Shutdown Oracle: "

#Optional : for Enterprise Manager software only
su - $ORA_OWNR -c "$ORA_HOME/bin/emctl stop dbconsole"

su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl stop"
su - $ORA_OWNR -c "$ORA_HOME/bin/dbshut $ORA_HOME"
rm -f /var/lock/oracle
echo "OK"
;;
reload|restart)
$0 stop
$0 start
;;
*)
echo "Usage: $0 start|stop|restart|reload"
exit 1
esac
exit 0

Most likely the only change required will be the ORA_HOME setting which obviously is your $ORACLE_HOME.
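To have the script honoured at the run levels declared in its LSB header, make it executable and register it (Debian/Ubuntu):

sudo chmod +x /etc/init.d/oracle
sudo update-rc.d oracle defaults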

Quick Test

So after all this, how do we know our database is up and running?
Try a local test…as Oracle user…

sqlplus / as sysdba

This should drop you into the antiquated text based app and look something like…

oracle@ubuntu:~$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 21 07:57:43 2013

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL>
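From the SQL> prompt, a one-line sanity check confirms the instance is open for business:

SQL> select status from v$instance;

If all is well, STATUS comes back as OPEN.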

Credits

This post is based to some extent on the following two posts:
http://www.excession.org.uk/blog/installing-oracle-on-ubuntu-karmic-64-bit.html
and
http://sqlandplsql.com/2011/12/02/installing-oracle-11g-on-ubuntu/

Some parts of these posts didn’t work for me (I had lots of linking errors), but nonetheless thanks go out to the authors of those blogs.