Information Security Pseudo-skills and the Power of 6

How many Security Analysts does it take to change a light bulb? The answer should be one, but it seems organisations are insistent on spending huge amounts of money on armies of Analysts with very niche “skills”, as opposed to 6 (yes, 6!) Analysts with core skill groups whose abilities complement each other. Banks and telcos with 300 security professionals could reduce that number to 6.

Let me ask you something: is Symantec Control Compliance Suite (CCS) a “skill”, a product, or both? Is Vulnerability Management a skill? It’s certainly not a product. Is HP TippingPoint IPS a product or a skill?

Is McAfee Vulnerability Manager 7.5 one skill and the previous version another skill? So if a person has experience with 7.5, they are not qualified to apply to a shop where the previous version is used? OK, this is taking it to the extreme, but I dare say there have been cases where this scenario is applicable.

How long does it take a person to get “skilled up” with HP ArcSight SIEM? I was told by a respected professional who runs his own practice that the answer is 6 months. My immediate thought is not printable here. Sorry – 6 months is ridiculous.

So let me ask again, is Symantec CCS a skill? No – it’s a product. It’s not a skill. If you take a person who has experience in operational/technical Vulnerability Management – you know, vulnerability assessment followed by the treatment of risk – they will laugh at the idea that CCS is a skill. It’s only a skill to someone who has never seen a command shell before, tested manually for a false positive, or taken part in an unrestricted manual network penetration test.

Being a software product from a major vendor means the GUI has been designed to make the software intuitive to use. I know that in vulnerability assessment, I need to supply the tool with IP addresses of targets and I need to tell the tool which tests I want to run against those targets. Maybe the area where I supply the addresses of targets is the tab which has “targets” written on it? And if I don’t want to configure the same test every time I run it, maybe this “templates” tab might be able to help me? Do I need a $4000 2-week training course and a nice certificate to prove to the world that I can work effectively with such a product? Or should there be an effective accreditation program which certifies core competencies (including evidence of the ability to adapt fast to new tools) in security? I think the answer is clear.

A Vulnerability Management product is only a “window” onto a Vulnerability Management solution. It’s only a GUI. It has been tailored to be intuitive to use. It’s the thin layer on top of the Vulnerability Management solution, and the solution itself is much bigger than this. The product only generates lists of vulnerabilities. It’s how the organisation treats those vulnerabilities that is key – and the product does not help much with that bigger picture.

Vulnerability management has been around for decades. Then along came commercial products, which basically just slapped a GUI on processes and practices that had existed for 20+ years, after which the jobs market decided to call the product the solution. The product is now the skill, whereas it’s really vulnerability management that is the skill.

The ability to adapt fast to new tools is a skill in itself, but it is also a skill that should be built in by default: one that should be inherent in all professionals who enter the field. Flexibility is the key.

The real skills are those associated with areas that hold large volumes of intellectual capital. These are core technologies. Say a person has 5+ years’ experience working in Unix environments as a system administrator and has shown an interest in scripting. Then they learn some aspects of network penetration testing and are also not afraid of other technologies (such as Windows). I can guarantee that they will be up and running in less than one day with any new Vulnerability Management tool, or SIEM tool, or [insert marketing buzzphrase here] that vendors can magic up.

Different SIEM tools use different terms and phrases for the same basic idea. HP uses “flex connectors” whilst Splunk talks about “Forwarders” and “Heavy Forwarders” and so on. But guess what? I understand English, and if I don’t know what the words mean, I can check in an online dictionary. I know what a SIEM is designed to do and I get the data flows and architecture concept. Network aggregation of syslog and Windows Events is not an alien concept to me, and neither are any of the layers of the TCP/IP stack (a really basic requirement for all Analysts – or it should be). Therefore I can adapt very easily to new tools in this space.
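To illustrate how mundane the underlying plumbing really is: getting a Unix host’s logs to any SIEM collector is typically a one-line change. A minimal sketch, assuming rsyslog and a hypothetical collector hostname (both of which are my examples, not anything from a vendor manual):

# /etc/rsyslog.conf - forward all facilities/priorities to the collector
# (@@ means TCP; a single @ would be UDP)
*.* @@siem-collector.example.net:514

Whatever the receiving end is called this week – FlexConnector, Forwarder, [buzzphrase] – the concept on the wire is the same.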

IPS/IDS and firewalls? Well, they’re not even very functional devices. If you have ever set up Snort or iptables, you’ll be fine with whatever product is out there. Recently another Consultant and I were asked to configure a TippingPoint device. We were up and running in 10 minutes. There were a few small items that we needed to check against the product documentation. I have 15+ years’ experience in the field but the other guy was new; nonetheless he had configured another IPS product before, and he was immediately up and running with the product – no problem. Of course, what to configure in the rule base – that is a bigger story, and it requires knowledge of threats, attack techniques and vulnerabilities – but that area is GENERIC to security – it’s not specific to a product.
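To labour the genericity point, here is a minimal default-deny inbound policy in iptables – an illustrative sketch only, not a recommended production rule base:

# drop inbound by default, allow return traffic and inbound SSH
iptables -P INPUT DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

Express the same three ideas in any vendor’s GUI and congratulations – you now have that vendor’s product as a “skill” on your CV.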

I’ve seen some truly crazy job specifications. One I saw was “Websense Specialist”!! Come on – it’s a web proxy! It’s Squid with extra cosmetic functions. The position would be filled by a Websense “Olympian” probably. But what sort of job is that? Carpe Diem my friends, Carpe Diem.

If you run a security consultancy and you follow the usual market game of micro-boxed, pigeon-holed security skills, I don’t know how you can survive. A requirement comes up for a project that involves a number of different products. Your existing consultants don’t have those products written anywhere on their CVs, so you go to market looking for contractors at 600 USD per day. You either find the people somehow, or you turn the project down. Either way you lose out massively. Or – you could have a base of 6 (it’s that number again) consultants with core skills that complement each other.

If the over-specialisation issue were addressed, businesses would save considerably on human resources and also find it easier to attract the right people. Pigeon-holed jobs are boring. It is possible and advisable to acquire people who can cover more bases in risk management.

There are those for and against accreditation in security. I think there is a solution here, which is covered in more detail in Chapter 11 of Security De-engineering.

So how many Security Analysts does it take to change a light bulb? The answer is 6, but typically in real life the number is the mark of the beast: 666.

Windows Vulnerability Management: A Cheat Sheet

So it’s been a long time since my last post. In fact it is almost an entire calendar year. But if we ask our delusional/non-cynical (delete as appropriate) colleagues, we discover that we should try to look to the positive. The last post was on 10th September 2013. So do not focus on the fact that it’s nearly 1 year since my last post; focus on the 1 week between this post and the 1-year mark. That is quite something! Remember people: positivity is key. 51 weeks instead of 52. All is good.

And to make up for the proportional shortfall of pearls of wisdom, I am hereby making freely available a spreadsheet I developed: it’s a compendium of information with regard to Windows 2008 Server vulnerabilities. In vulnerability management, the tool (McAfee VM 7.5 in this case, but it could be any tool) produces long lists of vulnerabilities. Some of these can be false positives. Others will be bona fide. Then, where there is vulnerability, there is risk, and the level of that risk to the business needs to be deduced.

The response of the organisation to the above-mentioned list of vulnerabilities is usually an IT response – a collaboration between security and operations, or at least it should be. Security typically is not in a position to deduce the risk and/or the remedial actions by themselves. This is where the spreadsheet comes in. All of the information is in one document, and the information that is needed to deduce factors such as impact, ease of exploit, risk, false positive, etc. is all there.

Operating system security checklists are as thin on the ground (and in their content) as they are critical in the prevention (and also detection) world. Work in this area is seen as boring and unsexy. It doesn’t involve anything that could get you a place as a speaker at a Black Hat conference. There is nothing in this realm that involves some fanciful breach technique.

Overall, forget perimeter firewalls and anti-virus – operating system security is now the front line in the battle against unauthorised access.

The CIS Benchmarks are quoted by many as a source of operating system configuration security check items and fair play to the folks at CIS for producing these documents.

The columns are as follows:

  • “CIS” : the CIS subtitle for the vulnerability
  • “Recommended VM Test”: yes or no, is this a test that is worth doing? (not all of them are worth it, some are outdated, some are just silly)
  • “McAfee Test Available”: yes or no
  • “McAfee Test ID”: numeric code of the test pattern with McAfee VM 7.5
  • “Comments”: summary of the CIS text that describes the vulnerability
  • “Test / Reg Key”: the registry key to check, or other test for the vulnerability
  • “Group Policy”: The GPO value related to the vulnerability if applicable
  • “Further comments”: Some rationale from experience. For example, likelihood of being a false positive, impact, risk, ease of exploit, how to exploit. Generally – more details that can help the Analyst when interfacing with Windows support staff.
  • “McAfee VM test notes”: This is a scratchpad for the Analyst to make notes, as a reference for any other Analysts who may be performing some testing. For example, if the test regularly yields a false positive, note the fact here.
  • “References”: URLs and other material that give some background information. Mostly these are relevant Microsoft Technet articles.


So if anyone would like a copy of this spreadsheet, please don’t hesitate to contact me. No – I will not spam you or share your contact details.


Information Security Careers: The Merits Of Going In-house

Job hunting in information security can be a confusing game. The lack of any standard nomenclature across the sector doesn’t help in this regard. Some of the terms used to describe open positions can be interpreted in wildly different ways. “Architect” is a good example. This term can have a non-technical connotation with some, and a technical connotation with others.

There are plenty of pros who came into security, perhaps via the network penetration testing route, who only ever worked for consultancies that provide services, mainly for businesses such as banks and telcos. The majority of such “external” services are centered around network penetration testing and application testing.

I myself started out in infosec on the consultancy path. My colleagues were whiz kids and some were well known in the field. Some would call them “hackers”, others “ethical” or “white hat” network penetration testers. This article does not cover ethics or pander to some of the verdicts that tend to be passed outside of the law.

Many Analysts and Consultants will face the decision to go in-house at some point in their careers, or remain in a service provider capacity. Others may be in-house and considering the switch to a consultancy. This post hopefully can help the decision making process.

The idea of going in-house and taking up, for example, an Analyst position with a big bank usually doesn’t hold much appeal for external consultants. The idea prevails that this type of position is boring or unchallenging. I also had this viewpoint, and it was largely derived from the few visions and sound bites I had witnessed behind the veil. However, what I discovered when I took up an Analyst position with a large logistics firm was that nothing could be further from the truth. Going in-house can benefit one’s career immensely and open the eyes to the real challenges in security.

Of course my experiences do not apply across the whole spectrum of in-house security positions. Some actually are boring for technically oriented folk. Different organisations do things in different ways. Some just use their security department for compliance purposes with no attention to detail. However there are also plenty that engage effectively with other teams such as IT operations and development project teams.

As an Analyst in a large, complex environment, the opportunity exists to learn a great deal more about security than one could as an external consultant. An external consultant’s exposure to an organisation’s security challenges will usually only come in the form of a network or application assessment, and even if the testing is conducted thoroughly and over a period of weeks, the view will be extremely limited. The test report is sent to the client, and it’s a common assumption that all of the problems described in the report can be easily addressed. In the vast majority of cases, nothing could be further from the truth. What becomes apparent at a very early stage in one’s life as an in-house Analyst is that very few vulnerabilities can be mitigated easily.

One of the main pillars of a security strategy is Vulnerability Management. The basis of any vulnerability management program is the security standard – the document that depicts how, from a security perspective, computer operating systems, DBMS, network devices, and so on, should be configured. So an Analyst will put together a list of configuration items and compose a security standard. Next they will meet with another team, usually IT operations, in an attempt to actually implement the standard in existing and future platforms. For many, this will be the point where they realize the real nature of the challenges.

Taking an example, the security department at a bank is attempting to introduce a Redhat Enterprise Linux security standard as a live document. How many of the configuration directives can be implemented across the board with an acceptable level of risk in terms of breaking applications or impacting the business in any way? The answer is “not many”. This will come as a surprise for many external consultants. Limiting factors can come from surprising sources. Enlightened IT ops and dev teams can open security’s eyes in this regard and help them to understand how the business really functions.

The whole process of vulnerability management, minus VM product marketeers’ diatribe, is basically detection, then deducing the risk, then taking decisions on how to address the risk (i.e. don’t address the vulnerability and accept the risk, or address / work around the vulnerability and mitigate the risk). But as an external consultant, one will usually only get to hand a client a list of vulnerabilities, and that will be the end of the story. As an in-house Security Analyst, one gets to take the process from start to finish and learn a great deal more in the process.

For a security consultant passing beyond the iron curtain, the best thing that can possibly happen to their career is to find themselves in a situation where they get to interface with the enlightened ones in IT operations, network operations (usually there are a few in net ops who really know their security quite well), and application architects (and that’s where it gets to be really fun).

For the Security Consultant who has just metamorphosed into an in-house Analyst, it may well be the first time in their career that they encounter real business concerns. IT operations teams live in fear of disrupting applications that generate huge revenues per minute. The fear will be palpable, and it encourages the kind of professionalism that one may never have a direct need for as an external consultant. Generally, the in-house Analyst gets to experience in detail how the business translates into applications and then into servers, databases, and data flows. Then the risks to information assets seem much more real.

The internal challenge versus the external challenge in security is of course one of protection versus breaking-in. Security is full of rock stars who break into badly defended customer networks and then advertise the feat from the roof tops. In between commercial tests and twittering school yard insults, the rock stars are preparing their next Black Hat speech with research into the latest exotic sploit technique that will never be used in a live test, because the target can easily be compromised with simple methods.

However the rock stars get all the attention, and security is all about reversing and fuzzing, so we hear. But the bigger challenge is not breaking in, it’s protection – although protection is a lot less exotic and sexy than breaking in. So there lies the main disadvantage of going in-house. It could mean less attention for the gifted Analyst. But for many, this won’t be such an issue, because the internal job is much more challenging and interesting, and it also lights up a CV, especially if the names are those in banking and telecoms.

How about going full circle? How about 3 years with a service provider, then 5 years in-house, then going back to consulting? Such a consultant is indeed a powerful weapon for consultancies and adds a whole new dimension for service providers (and their portfolio of services can be expanded). In fact such a security professional would be well positioned to start their own consultancy at this stage.

So in conclusion: going in-house can be the best thing that a Security Consultant can do with their careers. Is going in-house less interesting? Not at all. Does it mean you will get less attention? You can still speak at conferences probably.

One Infosec Accreditation Program To Bind Them All

May 2013 saw a furious debate ensue after a post by Brian Honan (Is it time to professionalize information security?) suggesting that things need to be improved, which was followed by some comments to the effect that accreditation should be removed completely.

Well, a suggestion doesn’t really do it for me. A strong demand doesn’t really do it either, in fact we’re still some way short. No – to advocate the strength of current accreditation schemes is ludicrous. But to then say that we don’t need accreditations at all is completely barking mad.

Brian correctly pointed out “At the moment, there is not much that can be done to prevent anyone from claiming to be an information security expert.” Never a truer phrase was spoken.

Other industry sectors have professional accreditation and it works. The stakes are higher in areas such as Civil Engineering and Medicine? Well – if practitioners in those fields screw up, it costs lives. True, but how is this different from infosec? Are the stakes really lower when we’re talking about our economic security? We have adversaries, and that makes infosec different or more complex?

Infosec is complex – you can bet ISC2’s annual revenue on that. But doesn’t that make security even more deserving of some sort of accreditation scheme that works and generates trust?

I used the word “trust”, and I used it because that’s what we’re ultimately trying to achieve. Our customers are C-levels, other internal departments, end users, home users, and so on. At the moment they don’t trust infosec professionals, and who can blame them? If we liken infosec to medicine, much of the time, forget about the treatment – we’re misdiagnosing. Infosec is still in the dark ages of drilling holes in heads in order to cure migraine.

That lack of trust is why, in so many organizations, security has been marginalized as far as possible without actually being vaporized completely. It’s also why security has been reduced down to the level of ticks in boxes, and “just pass the audit”.

Even if an organization has the best security pros in the world working for them, they can still have their crown jewels sucked out through their up-link in an unauthorized kinda way. Some could take this stance and advocate against accreditation because, ultimately, the best real-world defenses can fail. However, nobody is pretending that the perfect, “say goodbye to warez – train your staff with us” security accreditation scheme can exist. But at the same time we do want to be able to configure detection and cover some critical areas of protection. To say that we don’t need training and/or accreditation in security is to say the world doesn’t need accreditation ever again. No more degrees and PhDs, no more CISSPs, and so on.

We certainly do need some level of proof of at least base level competence. There are some practices and positions taken by security professionals that are really quite deceptive and result in resources being committed in areas where there is 100% wastage. These poor results will emerge eventually. Maybe not tomorrow, but eventually the mess that was swept under the carpet will be discovered. We do need to eliminate these practices.

So what are we trying to achieve with accreditation? The link with IT needs to be re-emphasized. The full details of a proposal are covered in Chapter 11 of Security De-engineering, but basically what we need first is to ensure the connection at the Analyst level with IT, mainly because of the information element of information technology and information security (did you notice the common word in IT and IT security? It’s almost as though there might be a connection between them). 80% of information is now held in electronic form. So businesses need expertise to assist them with protection of that information.

Security is about both business and IT of course. Everybody knows this even if they can’t admit it. There is an ISMS element that is document and process based, which is critical in terms of future-proofing the business and making security practices more resource-efficient. A baseline security standard is a critical document and cannot be left to gather dust on a shelf – it does need to be a “living” document. But the “M” in ISMS stands for Management, and as such it’s an area for…manage-ers. What is quite common is to find a security department of 6 or more Analysts who specialize in ISMS and audits. That does not work.

There has to be a connection with IT, and probably the best way to ensure that is to advocate that a person cannot metamorphose into a Security Analyst until they have served 5 years in IT operations/administration, or as a network engineer, DBA, or developer. Vendor certs such as those from IBM, Microsoft and Cisco – although heavily criticized – can serve to indicate some IT experience, but the time-served element, with a signed-off testimonial from a referee, is critical.

There can be an entrance exam for life as an Analyst. This exam should cover a number of different bases. Dave Shackleford’s assertion that creative thinkers are needed is hard to argue with. Indeed, what I think is needed is a demonstration of such creativity, and some evidence of coding experience goes a long way towards this.

Flexibility is also critical. Typically IT ops folk cover one major core technology such as Unix or Windows or Cisco. Infosec needs people who can demonstrate flexibility and answer security questions in relation to two or more core technologies. As an Analyst, they can have a specialization with two major platforms plus an area such as application security, but a broad cross-technology base is critical. Between the members of a team, each one can have a specialization, but the members of the team have knowledge that complements each other, and collectively the full spectrum of business security concerns can be covered.

There can be specializations, but also proportional rewards for Analysts who can demonstrate competence in increasing numbers of areas of specialization. There is such a thing as a broad-based, experienced Security Analyst, and such a person is the best candidate for niche areas such as forensics, as opposed to a candidate who got a forensics cert, learned how to use EnCase, plastered forensics on their CV, and got the job with no other Analyst experience (yes – it does happen).

So what emerges is a pattern for an approximate model of a “graduation”-based career path. And then, after 5 years’ time served as an Analyst, there can be another exam for graduation into the position of Security Manager or Architect. This exam could be something similar to the BCS’s CISMP or ISACA’s CISA (no – I do not have any affiliations with those organizations and I wasn’t paid to write this).

Nobody ever pretended that an accreditation program can solve all our problems, but we do need base assurances in order for our customers to trust us.

Oracle 10g EE Installation On Ubuntu 10

This is all 32-bit; no 64-bit software will be covered here.

To get Oracle 10g in 2013 requires a support account, of course; only 11g is available now. Basically I needed Oracle 10 because it’s still quite heavily used in global business circles. My security testing software may run into Oracle 10 (in fact, it already has several times).

After some considerable problems with library-linking failures between Oracle 10g and Ubuntu 12 (12.04.2), I decided to just save time by backdating and using more compatible libraries, so the install here is with Ubuntu 10.04.4 Lucid Lynx. The install with this older version (this is only for dev work – trust me, I wouldn’t dream of using this in production) went like a dream.

Java

Note that many install guides insist on installing Oracle’s Java or some other JVM. I found that this was not necessary.

Other Libraries

The libstdc++5 library will also be required. I found it here eventually…

http://old-releases.ubuntu.com/ubuntu/pool/universe/g/gcc-3.3/libstdc++5_3.3.6-17ubuntu1_i386.deb

and then …

dpkg -i libstdc++5_3.3.6-17ubuntu1_i386.deb

This process installs the library in the right place (at least where the installer for Oracle looks).

Users and Groups

sudo groupadd oinstall
sudo groupadd dba
sudo groupadd nobody
sudo useradd -m oracle -g oinstall -G dba
sudo passwd oracle

Kernel Parameters

In /etc/sysctl.conf …

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Reload to take effect…

root@vm-ubuntu-11:~# /sbin/sysctl -p

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Change Limits

vi /etc/security/limits.conf

Add the following …

* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536

Change The System In A Suse Kinda Way

(Ubuntu isn’t a supported distro for Oracle DB and some subtle changes are needed)

sudo ln -s /usr/bin/awk /bin/awk
sudo ln -s /usr/bin/rpm /bin/rpm
sudo ln -s /lib/libgcc_s.so.1 /lib/libgcc_s.so
sudo ln -s /usr/bin/basename /bin/basename

Oracle Base Directory

I went with more typical Oracle style directories here for some reason, but you can choose what’s best for you, as long as the ownership is set correctly (watch this space)…

sudo mkdir -p /u01/oracle
sudo chown -R oracle:oinstall /u01
sudo chmod -R 770 /u01

Update default profile

vi /etc/profile

Add the following …

export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/product/10.2.0/db_1
export ORACLE_SID=orcl10
export PATH=$PATH:$ORACLE_HOME/bin

Convince Oracle that Ubuntu is Redhat

sudo vi /etc/redhat-release

Add this …
“Red Hat Enterprise Linux AS release 3 (Taroon)”

Run The Installer

You will have unzipped the zip file from Oracle; it can be anywhere on the system, let’s say /opt. So after unzipping you will see a /opt/database directory.

chown -R oracle:oinstall /opt/database

Then what’s needed? Start up an X environment (su to the oracle user and startx), open a Terminal and…

/opt/database/runInstaller

Installer Options

Do not select the “create starter database” option here. For the Installation Type option, selecting Enterprise Edition worked for me.

The installer will ask you to run 2 scripts as root. It is wise to follow this advisory.

The install proceeded fast. I only had one error, related to the RDBMS compilation (“Error in invoking target ‘all_no_orcl ihsodbc’ of makefile ‘/u01/oracle/product/10.2.0/db_1/rdbms/lib/ins_rdbms.mk'”), but this was because I had not yet installed the libstdc++5 library mentioned earlier.

Create a Listener

So what you have now is a database engine but with no database to serve, and no Listener to process client connections to said database.

Again, within the oracle-owned X environment…

netca

and default options will work here, just to get a database working. netca is in $ORACLE_HOME/bin and therefore in the shell $PATH. Easy.
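If you want some assurance that the listener actually came up before moving on, something like the following should do it (run as the oracle user, and assuming the default listener name):

lsnrctl status

Expect it to report a running listener with no database services registered yet – we create the database next.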

Create A Database

First up you need to find the GID for the oinstall group you created earlier…

cat /etc/group | grep oinstall

In my case it was 1001.

As root (UID=0), hose this GID into /proc/sys/vm/hugetlb_shm_group thus (1001 in my case)…

echo "1001" > /proc/sys/vm/hugetlb_shm_group
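Note that a write to /proc does not survive a reboot. If you want the setting to persist, the same value can go in /etc/sysctl.conf alongside the kernel parameters from earlier (1001 being the oinstall GID found above; yours may differ):

vm.hugetlb_shm_group = 1001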

Again, as oracle user, do this…

dbca

…and again, default options will work in most cases here.

The database name should match the ORACLE_SID environment variable you specified earlier.

Database Service Control

The install script created an oratab file under /etc. It may look something like…

root@ubuntu:~# cat /etc/oratab
....[comments]
orcl10:/u01/oracle/product/10.2.0/db_1:Y

The last part of the stanza (the “Y”) means “yes, start this SID on system start”. This is your choice of course.

dbstart is a shell script under $ORACLE_HOME/bin. One line needs to be changed here in most cases: substitute your $ORACLE_HOME for “/ade/vikrkuma_new/oracle” in the line “ORACLE_HOME_LISTNER=/ade/vikrkuma_new/oracle”, which appears after the comment “Set this to bring up Oracle Net Listener”:

# Set this to bring up Oracle Net Listener

ORACLE_HOME_LISTNER=/ade/vikrkuma_new/oracle

if [ ! $ORACLE_HOME_LISTNER ] ; then
echo "ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener"
else
LOG=$ORACLE_HOME_LISTNER/listener.log

And that should be sufficient to just get a database up and running.

To shutdown the database, Oracle provides $ORACLE_HOME/bin/dbshut and this won’t require any editing.

“service” Control Under Linux

Personally I like to be able to control the Oracle database service with the service binary, as in:
service oracle start
and
service oracle stop

The script here to go under /etc/init.d was the same as my script for Oracle Database 11g…

root@ubuntu:~# cat /etc/init.d/oracle
#!/bin/bash
#
# Run-level Startup script for the Oracle Instance and Listener
#
### BEGIN INIT INFO
# Provides: Oracle
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Startup/Shutdown Oracle listener and instance
### END INIT INFO

ORA_HOME="/u01/oracle/product/10.2.0/db_1"

ORA_OWNR="oracle"

# if the executables do not exist -- display error

if [ ! -f $ORA_HOME/bin/dbstart -o ! -d $ORA_HOME ]
then
echo "Oracle startup: cannot start"
exit 1
fi

# depending on parameter -- startup, shutdown, restart
# of the instance and listener or usage display

case "$1" in
start)
# Oracle listener and instance startup
echo -n "Starting Oracle: "
su - $ORA_OWNR -c "$ORA_HOME/bin/dbstart $ORA_HOME"
su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl start"

#Optional : for Enterprise Manager software only
su - $ORA_OWNR -c "$ORA_HOME/bin/emctl start dbconsole"

touch /var/lock/oracle
echo "OK"
;;
stop)
# Oracle listener and instance shutdown
echo -n "Shutdown Oracle: "

#Optional : for Enterprise Manager software only
su - $ORA_OWNR -c "$ORA_HOME/bin/emctl stop dbconsole"

su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl stop"
su - $ORA_OWNR -c "$ORA_HOME/bin/dbshut $ORA_HOME"
rm -f /var/lock/oracle
echo "OK"
;;
reload|restart)
$0 stop
$0 start
;;
*)
echo "Usage: $0 start|stop|restart|reload"
exit 1
esac
exit 0

Most likely the only change required will be the ORA_HOME setting, which obviously is your $ORACLE_HOME.
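One step not shown in the listing: the script needs to be executable and registered with the run levels. On Ubuntu that would be something like:

sudo chmod 755 /etc/init.d/oracle
sudo update-rc.d oracle defaults

After that, service oracle start and service oracle stop should behave as described.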

Quick Test

So after all this, how do we know our database is up and running?
Try a local test…as Oracle user…

sqlplus / as sysdba

This should drop you into the antiquated text-based app and look something like…

oracle@ubuntu:~$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 21 07:57:43 2013

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL>
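From the SQL> prompt, a quick sanity query confirms the instance is actually open for business:

select instance_name, status from v$instance;

A healthy instance reports a STATUS of OPEN, and INSTANCE_NAME should match the ORACLE_SID set earlier.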

Credits

This post is based to some extent on the following two posts:
http://www.excession.org.uk/blog/installing-oracle-on-ubuntu-karmic-64-bit.html
and
http://sqlandplsql.com/2011/12/02/installing-oracle-11g-on-ubuntu/

Some parts of these posts didn’t work for me (I had lots of linking errors), but nonetheless thanks go out to the authors of those blogs.

Scangate Re-visited: Vulnerability Scanners Uncovered

I have covered VA tools before but I feel that one year later, the same misconceptions prevail. The notion that VA tools really can be used to give a decent picture of vulnerability is still heavily embedded, and that notion in itself presents a serious vulnerability for businesses.

A more concise run-down of the functionality of VA warez may be worth a try. At least let’s give it one last shot. On second thoughts, no, don’t shoot anything.

Actually forget “positive” or “negative” views on VAs before reading this. I am just going to present the facts based on what I know myself and of course I’m open to logical, objective discussion. I may have missed something.

Why the focus on VA? Well, the tools are still so commonplace and heavily used and I don’t believe that’s in our best interests.

What I discovered many years ago (it was actually 2002 at first) was that discussions around these tools can evoke some quite emotional responses. “Emotional” you quiz? Yes. I mean, when you think about it, whole empires have been built using these tools. The tools are so widespread in security and used as the basis of corporate VM programs. VM market revenue runs at around 1 billion USD annually. Songs and poems have been written about VAs – OK, I can’t back that up, but careers have been built, and whole enterprise-level security software suites have been built, using a nasty open source VA engine.

I presented on the subject of automation in VA all those years ago, and put forward the notion that running VA tools doesn’t carry much more value than something like this: nmap -v -sS -sV <targets>. Any Security Analyst worth their weight in spam would see open ports and service banners, and quickly deduce vulnerability from this limited perspective. “Limited”, maybe, but is a typical VA tool in a better position to interrogate a target autotragically?

One pre-qualifier I need to throw out is that the type of scanners I will discuss here are Nessus-like scanners, the modus operandi of which is to use unauthenticated means to scan a target. Nessus itself isn’t the main focus but it’s the tool that’s most well known and widely used. The others do not present any major advantages over Nessus. In fact Nessus is really as good as it gets. There’s a highly limited potential with these tools and Nessus reaches that limit.

Over the course of my infosec career I have had the privilege to be in a position where I have been coerced into using VAs extensively, and spent many long hours investigating false positives. In many cases I set up a dummy Linux target and used a packet sniffer to deduce what the tool was doing. As a summary, the findings were approximately:

  • Out of the 1000s of tests, or “patterns”, configured in the tools, only a few have the potential to result in accurate/useful findings. Some examples of these are SNMP community string tests, and tests for plain text services (e.g. telnet, FTP).
  • The vast majority of the other tests merely grab a service “banner”. For example, the tool port scans, finds an open port 80 TCP, then runs a test to grab a service banner (e.g. Apache 2.2.22, mickey mouse plug-in, bla bla) – see the sketch after this list. I was sort of expecting the tool to do some more probing having found a specific service and version, but in most cases it does not.
  • The tool, having found what it thinks is a certain application layer service and version, then correlates its finding with its database of public disclosed vulnerabilities for the detected service.
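To make the banner-grabbing point concrete, the “test” in most cases amounts to little more than the following sketch (the target address here is hypothetical):

# what most "tests" boil down to: read the banner, nothing more
printf 'HEAD / HTTP/1.0\r\n\r\n' | nc 192.168.1.10 80 | grep -i '^Server:'

The version string that comes back is then looked up against the database of published vulnerabilities – which is exactly the correlation step described above, and exactly what an Analyst does mentally with nmap -sV output.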

Even for some of the plain text services, some of the tests which have the potential to reveal useful findings have been botched by the developers. For example, tests for anonymous FTP only work with a very specific flavour of FTP. Other FTP daemons return different messages for successful anonymous logins, and the tool does not accommodate this.

Also, what happens if a service is moved from its default port? I had some spectacular failures running Nessus against an FTP service on port 1980 TCP (usually it listens on port 21). Different timing options were tested. Nessus uses an nmap engine for port scanning, but nmap by itself is usually able to find non-default-port services using default settings.
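For comparison, vanilla nmap copes with the relocated service using default behaviour; a sketch against the same hypothetical target:

# nmap's service detection identifies FTP regardless of the port
nmap -sV -p 1980 192.168.1.10

nmap probes the open port and matches the response against its service fingerprint database irrespective of the port number, which is why it found the relocated FTP daemon when Nessus did not.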

So in summary, what the VA tools do is mostly just report that you are running ridiculous unencrypted blast-from-the-past services or old, down-level services – maybe. Really I would hope security teams wouldn’t need to spend 25K USD on an enterprise solution to tell them this.

False positives are one thing, but false negatives are quite another. Popular magazines always report something like a 50% success rate in finding vulnerabilities in staged tests. Why is it always 50%? Remember also that the product under testing is usually one from a vendor who pays for a full-spread ad in that magazine.

Putting numbers to false negatives makes little sense with huge, complex software packages of millions of lines of source code. However, it occurred to me not so long ago whilst doing some white box testing on a client’s critical infrastructure: how many of the vulnerabilities under testing could possibly be discovered by use of a VA tool? In the case of Oracle Database the answer was less than 5%. And when we’re talking Oracle, we’re usually talking critical, as in crown jewels critical.

If nothing else, the main aspect I would hope the reader would take out of this discussion is about expectation. The expectation that is set by marketing people with VA tools is that the tools really can be used to accurately detect a wide range of vulnerability, and you can bet your business on the tools by using them to test critical infrastructure. Ladies and gentlemen: please don’t be deceived by this!

Can you safely replace manual testing with use of these tools? Yes, but only if the target has zero value to the business.


Security in Virtual Machine Environments. And the planet.

This post is based on a recent article on the CIO.com site.

I have to say, when I read the title of the article, the cynic in me once again prevailed. And indeed there will be some cynicism and sarcasm in this article, so if that offends the reader, I would like to suggest other sources of information: those which do not accurately reflect the state of the information security industry. Unfortunately the truth is often accompanied by at least cynicism. Indeed, if I meet an IT professional who isn’t cynical and sarcastic, I do find it hard to trust them.

Near the end of the article there will be a quiz with a scammed prize offering, just to take the edge off the punishment of the endless “negativity” and abject non-MBA’edness.

“While organizations have been hot to virtualize their machine operations, that zeal hasn’t been transferred to their adoption of good security practices”. Well you see they’re two different things. Using VMs reduces power and physical space requirements. Note the word “physical” here and being physical, the benefits are easier to understand.

Physical implies something which takes physical form – a matter energy field. Decision makers are familiar with such energy fields. There are other examples in their lives such as tables, chairs, other people, walls, cars. Then there is information in electronic form – that’s a similar thing (also an energy field), but the hunter/gatherer in some of us doesn’t see it that way, and still, as of 2013, the concept eludes many IT decision makers who have fought their way up through the ranks as a result of excellent performance in their IT careers (no – it’s not just because they have an MBA, or know the right people).

There is a concept at board level of insuring a building (another matter energy field) against damages from natural causes. But even when 80% of information assets are in electronic form, there is still a disconnect from the information. Come on chaps, we’ve been doing this for 20 years now!

Josh Corman recently tweeted “We depend on software just as much as steel and concrete, its just that software is infinitely more attack-able!”. Mr Corman felt the need to make this statement. Ok, like most other wise men in security, it was intended to boost his Klout score, but one does not achieve that by tweeting stuff that everybody already knows. I would trust someone like Mr Corman to know where the gaps are in the mental portfolios of IT decision makers.

Ok, so moving on…”Nearly half (42 percent) of the 346 administrators participating in the security vendor BeyondTrust‘s survey said they don’t use any security tools regularly as part of operating their virtual systems…”

What tools? You mean anti-virus and firewalls, or the latest heuristic HIDS box of shite? Call me business-friendly, but I don’t want to see endless tools on end points, regardless of their function. So if they’re not using tools, is it not at this point good journalism to comment on what tools exactly? Personally I want to see a local firewall and the obligatory and increasingly less beneficial anti-virus (and I do not care as to where, who, whenceforth, or which one…preferably the one where the word “heuristic” is not used in the marketing drivel on the box). Now if you’re talking system hardening and utilizing built-in logging capability – great, that’s a different story, and worthy of a cuddly toy as a prize.

“Insecure practices when creating new virtual images is a systemic problem” – it is, but how many security problems can you really eradicate at build-time and be sure that the change won’t break an application or introduce some other problem? When practical IT-oriented security folk actually try to do this with skilled and experienced ops and devs, they realise that less than 50% of their policies can be implemented safely in a corporate build image. Other security changes need to be assessed on a per-application basis.

Forget VMs and clouds for a moment – 90%+ of firms are not rolling out effectively hardened build images for any platform. The information security world is still some way off with practices in the other VM field (Vulnerability Management).

“If an administrator clones a machine or rolls back a snapshot,”… “the security risks that those machines represent are bubbled up to the administrator, and they can make decisions as to whether they should be powered on, off or left in state.”

Ok, so “the security risks that those machines represent are bubbled up to the administrator”!!?? [Double-take] Really? Ok, this whole security thing really can be automated then? In that case, every platform should be installed as a VM managed under VMware vCenter with the BeyondTrust plugin. A tab that can show us our risks? There has to be a distinction between vulnerability and risk here, because they are two quite different things. No but seriously, I would want to know how those vulnerabilities are detected because to date the information security industry still doesn’t have an accurate way to do this for some platforms.

Another quote: “It’s pretty clear that virtualization has ripped up operational practices and that security lags woefully behind the operational practice of managing the virtual infrastructure,”. I would edit that down to just the two words “security” and “lags”. What with virtualized stuff being a subset of the full spectrum of play things and all.

“Making matters worse is that traditional security tools don’t work very well in virtual environments”. In this case I would leave the remaining five words. A Kenwood Food Mixer goes to the person who can guess which ones those are. See? Who said security isn’t fun?

“System operators believe that somehow virtualization provides their environments with security not found in the world of physical machines”. Now we’re going Twilight Zone. We’ve been discussing the inter-cluster sized gap between the physical world and electronic information in this article, and now we have this? Segmentation fault, core dumped.

Anyway – virtualization does increase security in some cases. It depends how the VM has been configured and what type of networking config is used, but if we’re talking virtualised servers that advertise services to port scanners, and/or SMB shares with their hosts, then clearly the virtualised aspect is suddenly very real. VM guests used in a NAT’ing setup are a decent way to hide information on a laptop/mobile device or anything that hooks into an untrusted network (read: “corporate private network”).

The vendor who was being interviewed finished up with “Every product sounds the same,” …”They all make you secure. And none of them deliver.” Probably if I were a vendor I might not say that.

Sorry, I just find discussions of security with “radical new infrastructure” to be something of a waste of bandwidth. We have some very fundamental, ground level problems in information security that are actually not so hard to understand or even solve, at least until it comes to self-reflection and the thought of looking for a new line of work.

All of these “VM” and “cloud” and “BYOD” discussions would suddenly disappear with the introduction of integrity in our little world because with that, the bigger picture of skills, accreditation, and therefore trust would be solved (note the lack of a CISSP/CEH dig there).

I covered the problems and solutions in detail in Security De-engineering, but you know what? The solution (chapter 11) is no big secret. It comes from the gift of intuition with which many humans are endowed. Anyway – someone had to say it; now it’s in black and white.

Hardening is Hard If You’re Doing it Right

Yes, ladies and gentlemen, hardening is hard. If it’s not hard, then there are two possibilities. One is that the maturity of information security in the organization is at such a level that security happens both effectively and transparently – it’s fully integrated into the fabric of BAU processes, and many of said processes are fully automated with accurate results. The second (far more likely given the reality of security in 2013) is that the hardening is not well implemented.

For the purpose of this diatribe, let us first define “hardening” so that we can all be reading from the same hymn sheet. When I’m talking about hardening here, the theme is one of first assessing vulnerability, then addressing the business risk presented by the vulnerability. This can apply to applications, or operating systems, or any aspect of risk assessment on corporate infrastructure.

In assessing vulnerability, if you’re following a check list, hardening is not hard – in fact a parrot can repeat pearls of wisdom from a check list. But the result of merely following a check list will be either wide open critical hosts or over-spending on security – usually the former. For sure, critical production systems will be impacted, and I don’t mean in a positive way.

You see, like most things in security, some thinking is involved. It does suit the agenda of many in this field to pretend that security analysis can be reduced down to parrot-fashion recital of a check list. Unfortunately though, some neural activity is required, at least if gaining the trust of our customers (C-levels, other business units, home users, etc) is important to us.

The actual contents of the check list should be the easy part, although unfortunately, as of 2013, we all seem to be using different versions of the check list, and some versions are appallingly lacking. The worst offenders here deliver with a quality that is inversely proportional to the prices they charge – and these are usually external auditors from big 4 consultancies, all of whom have very decent check lists, but who also fail to ensure that Consultants use said check list. There are plenty of cases where the auditor knocks up their own garage’y style shell script for testing. In one case I witnessed not so long ago, the script for testing RedHat Enterprise Linux consisted of 6 tests (!), and one of the tests showed a misunderstanding of the purpose of the /etc/ftpusers file (it lists the accounts that are denied FTP access, not those that are permitted).

But the focus here is not on the methods deployed by auditors, its more general than that. Vulnerability testing in general is not a small subject. I have posted previously on the subject of “manual” network penetration testing. In summary: there will be a need for some businesses to show auditors that their perimeter has been assessed by a “trusted third party”, but in terms of pure value, very few businesses should be paying for the standard two week delivery with a four person team. For businesses to see any real value in a network penetration test, their security has to be at a certain level of maturity. Most businesses are nowhere near that level.

Then there is the subject of automated, unauthenticated “scanning” techniques which I have also written about extensively, both in an earlier post and in Chapter Five of Security De-engineering. In summary, the methodology used in unauthenticated vulnerability scanning results in inaccuracy, large numbers of false positives, wasted resources, and annoyed operations and development teams. This is not a problem with any particular tool, although some of them are especially bad. It is a limitation of the concept of unauthenticated testing, which amounts to little more than pure guesswork in vulnerability assessment.

How about the growing numbers of “vulnerability management” products out there (which do not “manage” vulnerability, they make an attempt at assessing vulnerability)? Well, most of them are either purely an expensive graphical interface to [insert free/open source scanner name], or, if the tool was designed to make a serious attempt at accurate vulnerability assessment (most of them were not), then the tests will be lacking or over-done, inaccurate, and/or doing the scanning in an insecure way (e.g. the application is run over a public URL, with the result that all of your configuration data, including admin passwords, are held by an untrusted third party).

In one case, a very expensive VM product literally does nothing other than port scan. It is configured with hundreds of “test” patterns for different types of target (MS Windows, *nix, etc.) but if you’re familiar with your OS configurations, you will look at the tool output and be immediately suspicious. I ran the tool against a Linux and a Windows test target and “packet sniffed” the scanning engine’s probe attempts. In summary, the tool does nothing. It just produces a long list of configuration items (so effectively a kind of Security Standard for the target) without actually testing for the existence of vulnerability.

So the overall principle: the company [hopefully] has a security standard for each major operating system and database on their network and each item in the standard needs to be tested for all, or some of the information asset hosts in the organization, depending on the overall strategy and network architecture. As of the time of writing, there will need to be some manual / scripted augmentation of automatic vulnerability assessment processes.
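As a trivial sketch of what that manual/scripted augmentation might look like – an authenticated check of two typical Linux standard items, run locally on the target (the items chosen here are my examples, not a complete standard):

#!/bin/bash
# check 1: SSH daemon should not permit direct root login
grep -Ei '^PermitRootLogin' /etc/ssh/sshd_config
# check 2: no account other than root should have UID 0
awk -F: '$3 == 0 && $1 != "root" {print $1}' /etc/passwd

Because the checks run with local credentials against the actual configuration, there is none of the guesswork described earlier – the answer is either compliant or it isn’t.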

So once armed with a list of vulnerabilities, what does one do with it? The vulnerability assessment is the first step. What has to happen after that? Can Security just toss the report over to ops and hope for the best? Yes, they can, but this wouldn’t make them very popular and also there needs to be some input from security regarding the actual risk to the business. Taking the typical function of operations teams (I commented on the functions and relationships between security and operations in an earlier post), if there is no input from security, then every risk mitigation that meets any kind of an impact will be blocked.

There are some security service providers/consultancies who offer a testing AND a subsequent hardening service. They want to offer both detection AND a solution, and this is very innovative and noble of them. However, how many security vulnerabilities can be addressed blindly without impacting critical production processes? Rhetorical question: can applications be broken by applying security fixes? If I remove the setuid bit from a root owned X Window related binary, it probably has no effect on business processes. Right? What if operations teams can no longer authenticate via their usual graphical interface? This is at least a little bit disruptive.
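Sticking with the setuid example: finding the candidates is the easy, mechanical part; knowing which ones can safely lose the bit is where the ops and dev input comes in. A sketch (the binary path is hypothetical):

# assessment only: list setuid binaries owned by root
find / -xdev -user root -perm -4000 -type f 2>/dev/null
# remediation, one candidate at a time, only after impact assessment
chmod u-s /usr/bin/Xorg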

In practice, as it turns out, if you look at a Security Standard for a core technology – let’s take Oracle 11g as an example – how many of the numerous elements of a Security Standard can we say can be implemented without fear of breaking applications, limiting access for users or administrators, or generally just making trouble-shooting of critical applications a lot less efficient? The answer is: not many. Dependencies and other problems can come from surprising sources.

Who in the organization knows about dependencies and the complexities of production systems? Usually that would be IT/Network Operations. And how about application-related dependencies? That would be application architects, or just generally we’ll say “dev teams” as they’re so affectionately referred to these days. So the point: even if security does have admin access to IT resources (rare), is the risk mitigation/hardening a job purely for security? Of course the answer is a resounding no, and the same goes for IT Operations.

So, operations and applications architects bring knowledge of the complexities of apps and infrastructure to the table. Security brings knowledge of the network architecture (data flows, firewall configurations, network device configurations), the risk of each vulnerability (how hard is it to exploit, and what is the impact?), and the importance to the business of information assets/applications. Armed with the aforementioned knowledge, informed and sensible decisions on what to do with the risk (accept, mitigate, work around, or transfer) can be made by the organization, not by security or operations alone.

The early days of deciding what to do with the risk will be slow and difficult and there might even be some feisty exchanges, but eventually, addressing the risk becomes a mature, documented process that almost melts into the background hum of the machinery of a business.

How Much Of A CASE Are You?

This piece is adapted from Chapter 3 of Security De-engineering, titled “Checklists and Standards Evangelists”.

My travels in information security have taken me to 3 different continents and 15 different countries. I have had the pleasure and pain of dealing with information security problems in every industry sector that has existed since the start of the Industrial Revolution (but mostly finance’y/bank’y of course), and I’ve had the misfortune and pleasure to meet a whole variety of species and sub-species of the genus Information Security Professional.

In the good old days of the 90s, it was clear there were some distinctive features that were hard-wired into the modus operandi of the Information Security Professional. This earlier form of life, for want of a better name, I call the “Hacker”, and I will talk about them in my next post.

In the pre-holocene mid to late 90s, the information security professional was still plausibly human, in that they weren’t afraid to display distinguishing characteristics. There was no great drive to “fit in”, to look the same, talk the same, and act the same as all the other information security professionals. There was a class that was information security professional, and at the time, there was only one instance of that class.

Then during the next few years, going into the 2000s, things started to change in response to the needs of ego and other head problems, mostly variants of behaviour born out of insecurity. The need to defend territory, without possession of the necessary intellectual capital to do so, gave birth to a new instance of the class Information Security Professional – the CASE (Checklists and Standards Evangelist). The origin of the name will become clear.

My first engagement in the security world was in the late 90s, with a small testing team drawn mostly from ex-countries (former Yugoslavia and the Soviet Union). Responding incorrectly to the perceived needs of the market, the firm ran a couple of rounds of Hacker lay-offs around 2001/2 – a common global story at the time. A few weeks after the second batch of lay-offs, there was a regional team event, wherein our Operations Manager (with a strong background in hotel management) opened the event with “security is no longer about people with green hair and piercings”. Well, ok, but what was it about then? The post-2000 version of “It” is the focus of this post, and I will cover a very scientific methodology for self-diagnosing the reader’s level of CASE.

Ok, so here are some of the more commonly witnessed elements of CASE’dom. Feel free to run a self-diagnosis, scoring from 0 to 5 for each point, based either on what you actually believe (how closely you agree with the point), how closely you see yourself in it, or how closely you can relate to it from your experiences in infosec:

  • “Technical” is a four-letter word.
  • Anything “technical”, to do with security (firewall configuration, SIEM, VM, IDS/IPS, IDM etc) comes under the remit of IT/Network Operations.
  • Security is not a technical field – it’s nothing to do with IT, it’s purely a business function. Engineers have no place at the table. If a candidate is interviewed for a security position and they use a tech term such as “computer” or “network”, then they clearly have no security experience and at best they should apply for a lowly ops position.
  • You were once a hacker, but you “grew out of it”.
  • Any type of risk assessment methodology can be reduced to a CHECKLIST, and recited parrot-fashion, thereby replacing the need for actual expertise and thinking. Cost-of-safeguard versus risk issues are never very complex and can be nailed just by consulting a checklist.
  • Automated vulnerability scanners are a good replacement for manual testing, and therefore manual testers: by entering target addresses and hitting the Enter key, there is no need for any other type of vulnerability assessment, and no need for tech staff in security.
  • There is a standard, universally recognised vocabulary to be used in security which is based on whichever CISSP study guide you read.
  • Are you familiar with this situation: you find yourself in a room with people who talk about the same subject as you, but they use different terms and phrases, and you get angry at them in the belief that your terminology is the correct version?
  • CISSP is everything that was, is, and ever will be. CISSP is the darkness and the light, and the only thing that matters, the alpha and omega. There is a principle: “I am a CISSP, therefore I am”, and if a person does not have CISSP (or it “expired”), then they are not an effective security professional.
  • You are a CEH and therefore a skilled penetration tester.
  • “Best Practice” is a phrase which is ok to use on a regular basis, despite the fact that there is no universally accepted body of knowledge to corroborate the theory that the prescribed practice is the best practice, and business/risk challenges are all very simple to the extent that a fixed solution can be re-used and applied repeatedly to good effect.
  • Ethics is a magnificent weapon to use when one feels the need to defend one’s territory from a person who speaks at, or attends “hacker conferences”. If an analyst has ever used a “hacking tool” in any capacity, then they are not ethical, and subject to negative judgment outside of the law. They are in fact a criminal, regardless of evidence.
  • You look in the mirror and notice that you have a square head and a fixed, stern grimace. At least during work hours, you have no sense of humour and are unable to smile.
  • For a security professional in an in-house situation: it is their job to inform other business units of security standard and policy directives, without assessing risk on a case-by-case basis, and without offering any guidance as to how the directives might be realised. As an example: a dev team must be informed that they MUST use two-factor authentication regardless of the risk or the additional cost of implementation. Furthermore, it is imperative to remind the dev team that the standards were signed off by the CEO, and generally to spread terror whilst offering no further guidance.
  • You are a security analyst, but your job function is one of “management” – not analysis or assessment or [insert nasty security term]. Your main function is facilitating external audits and/or processing risk exemption forms.
  • Again for in-house situations: silence is golden. The standard response to any inter-department query is defiance. The key elements of any security professional’s arsenal are silence, neutrality, and generally not contributing anything – a standard defence against ignorance. If a security professional can maintain a false air of confidence while ignoring any form of communiqué, then a bright future awaits. The mask that is worn is one of not actually needing to answer, because you’re too important and your time is too valuable.
  • You fill the gap left by the modern security world by adding words like “Evangelist” or “thought leader” to your job title. Subject Matter Expert (SME) is also quite an attractive title. “Senior” can also be used if you have 1 second of experience in the field, or an MBA warrants such a prefix.
  • Your favourite term is “non-repudiation”, because it has that lovely counter-intuitive twist in its meaning. The term has a decent shelf-life, and can be used in any meeting where management staff are present, regardless of applicability to the subject under discussion.
  • “Security incident” and “security department” both have the word “security” in them. Management notices this common word, so when there’s an incident and ops refuse to get involved, the baton falls to the security department, which has no tools, either mental or otherwise, for dealing with incidents. So, security analysts live in fear of incidents. This is all easily fixed by hiring folk who both need to “fit in” with the rest of the team and also use words like “forensics” and “incident” on their CV (and they are CISSPs).
  • “Cloud Security” is a new field of security that only came into existence recently, and is an area of huge intellectual capital. If one has a cloud-related professional accreditation, it means they are very, very special and possess powers other mortals can only dream of. No, really. Cloud adoption is not merely a change in architecture with an added emphasis on crypto and legal coverage! It’s way more than that!
  • Unlike Hackers, you have unique “access” to C-level management, because you are mature, and can “communicate effectively”.
  • You applied for a job which was advertised as highly technical as per the agent’s (bless ’em eh) job description that was passed on by HR. On day one you realise a problem. You may never see a command shell prompt ever again.

A maximum score of 110 points (22 points, scored at up to 5 apiece) will be seen as very good or very bad by your management team – hopefully the former for your sake, hopefully the latter for the business’s sake.

Somewhere in the upper area of 73 to 110 points is maxed-out CASE. This is as CASE as it gets. I wouldn’t want to advocate a new line of work to anyone really, but it might just be the case that an alternative career would lead to a greater sense of fulfilment and happiness.
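
For the spreadsheet-averse, the scoring can be expressed as a throwaway sketch (the band boundaries are the ones above; the honesty of the inputs is your problem):

    def case_band(scores):
        """scores: one 0-5 value per point on the list above (22 points, max 110)."""
        assert len(scores) == 22 and all(0 <= s <= 5 for s in scores)
        total = sum(scores)
        if total >= 73:
            return total, "maxed-out CASE - as CASE as it gets"
        return total, "manageable - there is hope"

    print(case_band([2] * 22))  # (44, 'manageable - there is hope')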

There is hope for anyone falling in the less-than-73 area. For example, it’s not too late to go through that [insert core technology] Security Standard, try to understand the technical risks, talk to operations about it, and see it all anew. If “tech” really is something that is against your nature, then you will probably be in the 73 to 110 class. Less than 73 is manageable. Of course, by getting more tech, you could be alienating yourself or upsetting the apple cart. It’s your choice ultimately…

The statement that information security is not actually anything to do with information technology is of course nothing more than pretence, and more and more of our customers are starting to realise this.

What’s Next For BYOD – 2013 And Beyond

There are security and business-case arguments about BYOD. They cover different aspects, and there are petabytes of valid points out there.

The security argument? Microsoft Windows is still the corporate OS of choice and therefore still the main target for malware writers. As a pre-qualifier: there is no bias towards one Operating System or another here.

Even considering that in most cases, when business asks for something, security considerations are secondary, there is also the point that Windows is, by its nature, very hard to make malware-resistant. Plenty of malware problems are not introduced as a result of a lack of user awareness (such as unknowingly installing malware in the form of faked anti-virus or browser plug-ins) – they arrive regardless of how careful the user is – plus plenty of services are required to run with SYSTEM privileges. These factors make Windows platforms hard to defend in a cost-effective, manageable way.

Certainly we have never been able to manage user OS rights/privileges effectively, and that isn’t going to change any time soon. There is no third-party product that can help. Does security actually make an effective argument in cases where users are asking for control over printers and Wi-Fi management? Should such functions be locked anyway? Not necessarily. And once we start talking fine-grained admin rights control, we’re already down a dark alley – at the least, security needs to justify to operations why they are making their jobs more difficult and the environment more complex, and therefore less reliable. And with privilege controls, security must also justify to users (including C-levels) why their corporate device is less usable and convenient.

For the aforementioned reasons, the security argument is null and void. I don’t see BYOD as a security argument at all, mainly because the place where security is at these days isn’t a place where we can effectively manage user device security – that doesn’t change with or without BYOD, and it is likely to be the case for some years to come yet. We lost that battle, and the security strategy has to be planned around the assumption that user subnets are compromised. I would agree that in a theoretical case where user devices are wandering freely, not at all subject to corporate controls, there is scope for a greater frequency of malware issues, but regardless, the stance has to be based on the assumption that one or more devices in corporate subnets has been compromised, and that the malware is designed to connect both ingress and egress.
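
As a toy illustration of that stance – the subnet, allow-list, and flow-record format below are all invented for the sketch – egress from user subnets gets watched and anything off-list gets flagged:

    import csv
    import ipaddress

    USER_SUBNET = ipaddress.ip_network("10.20.0.0/16")   # hypothetical user subnet
    EGRESS_ALLOW = {"10.0.0.0/8", "192.0.2.10/32"}       # hypothetical permitted destinations

    def flag_egress(flow_csv_path):
        """Flag flows from the user subnet to destinations outside the allow-list.
        Expects rows of: src_ip,dst_ip,dst_port (invented format for illustration)."""
        allowed = [ipaddress.ip_network(n) for n in EGRESS_ALLOW]
        with open(flow_csv_path, newline="") as f:
            for src, dst, port in csv.reader(f):
                src_ip = ipaddress.ip_address(src)
                dst_ip = ipaddress.ip_address(dst)
                if src_ip in USER_SUBNET and not any(dst_ip in n for n in allowed):
                    print(f"suspect egress: {src} -> {dst}:{port}")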

How about other OS flavors, such as Apple OS X for example? With other OS flavors it is possible to manage privileges and lock them down to a much larger degree than we can with Windows, but as has been mentioned plenty of times now, once another OS goes mainstream and grows in corporate popularity, it also shows up on the radars of malware writers. Reports of malware designed to exploit vulnerabilities in OS X software started surfacing earlier in 2012, with “The Flashback Trojan” given the widest coverage.

I would venture that the possibility at least exists to use technical controls to lock down Unix-based devices to a much larger degree than MS Windows variants, but of course the usability experience has to match the needs of the business. Basically, regardless of whether our view is utopian or realistic, there will be holes, and quite sizable holes too.

For the business case? Having standard-build user workstations and laptops does make life easier for admins, and it allows for manageability and efficiency, but there is a wider picture of course. The business case for BYOD is a harder case to make than we might at first have thought. There are no obvious answers here. One of the more interesting con articles was from CIO Magazine earlier in 2012: “BYOD: If You Think You’re Saving Money, Think Again”, and then Cisco objectively reports that there are plenty in the pro corner too: “Cisco Study: IT Saying Yes To BYOD”.

So what does all this bode for the future? The manageability aspect, and therefore the business aspect, is centered on the IT costs and efficiency analysis. This is more of an operational discussion than an information risk management discussion.

The business case is inconclusive, with plenty in the “say no to BYOD” camp. The security picture is without foundation – we have a security nightmare with user devices, regardless of who owns the things.

Overall the answer naturally lies in management philosophy, if we can call it that. There is what we should do, and what we will do… and of course these are often 180 degrees apart. The lure of BYOD will be strong at the higher levels, who usually have only the balance sheet as evidence, along with the sales pitches of vendors. Accountant-driven organisations will love the idea, and there will be variable levels of bravery, confidence, and technical backing in the IT rationalization positions. Similar discussions will have taken place with regard to cloud’ing and outsourcing.

The overall conclusion: BYOD will continue to grow in 2013 and probably beyond. Whether that should be the case or not? That’s a question for operations to answer, but there will be plenty of operations departments that will not support the idea after having analyzed the costs versus benefits picture.