Apple’s Security Schizophrenia
Filed Under Computers & Tech, Security on May 6, 2012 at 3:19 pm
With the recent Flashback outbreak, Mac security has become very topical, getting a lot more discussion than it has for some time now. Unfortunately, I’ve seen a lot of FUD doing the rounds, particularly from AV vendors, who want to capitalise on the situation to scare as many people as possible into paying them for their products. People are looking for a simple message, but the reality is not at all simple. There is truth in most of the arguments you hear, but rarely the whole truth. This is because Apple are simultaneously badly behind on some of the simple stuff, and miles ahead of the pack on some of the more advanced stuff.
Let's start with what Apple are getting wrong. Really, it's very simple: they don't send out patches quickly enough, particularly for the many third-party open source components they bundle into OS X. OS X benefits greatly from its open source Darwin/FreeBSD core. This allows Apple to include all sorts of open source tools right in the OS. Some of the more important ones are the CUPS printing library, the SAMBA Windows networking tools, the BASH shell environment in the Terminal, the GZip compression libraries, and many, many more. Apple get all this great functionality without having to write it themselves, but they also have a responsibility to patch these components in a timely manner, and frankly, they don't.
This is what went wrong with Flashback. Java was updated by Oracle back in February, but Apple didn't get around to releasing that patch through Software Update until after the malware had taken hold in April. Nor is this the first time Java patches have been late on the Mac, and there is also a history of critical SAMBA vulnerabilities going unpatched for months on end.
It's dangerous enough to be late patching bugs in your own software, but it's significantly more dangerous to be late applying patches to open source components. The reason is that patches to open source projects are public the moment they are released, and the differences between the code before and after show attackers exactly where the problem is, helping them develop attacks. By leaving these kinds of problems in your OS for months at a time, you are leaving the door open for attackers. It really was just a matter of time until someone took advantage of the opportunities Apple were leaving on the table to make some cold hard cash. Flashback was the first to do this successfully, but unless Apple get their act together on patching, it won't be the last!
There are no two ways about it: Apple need to get better at releasing patches.
In older versions of OS X the list of things Apple were behind on was a little longer, but recent versions of OS X have brought significant improvements. In many ways OS X and Windows 7 are on a par in terms of security architecture. Both have mechanisms for requiring admin passwords to access protected files and settings, both have Data Execution Prevention (DEP) support to make it harder to exploit the most common kinds of coding mistakes, like stack/buffer overflows, and both now have good Address Space Layout Randomisation (ASLR) to make it harder for attackers to hijack fragments of OS code for their nefarious ends.
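To make those "common kinds of coding mistakes" concrete, here's the classic stack buffer overflow in miniature (the greet() function and its buffer are purely illustrative): strcpy() happily writes past the end of a fixed-size buffer and clobbers whatever sits above it on the stack, including the return address. DEP stops any data an attacker smuggles into that buffer from being executed as code, and ASLR makes it much harder to guess useful addresses to redirect the program to instead.

#include <stdio.h>
#include <string.h>

/*
 * The classic stack buffer overflow: strcpy() has no idea the buffer is
 * only 16 bytes long, so a longer argument overwrites whatever sits above
 * it on the stack, including the function's return address.
 *
 * DEP marks the stack non-executable, so shellcode injected via 'buffer'
 * won't run; ASLR randomises where the stack and libraries live, so an
 * attacker can't reliably predict useful addresses to jump to instead.
 */
void greet(const char *name) {
    char buffer[16];
    strcpy(buffer, name);              /* no bounds check -- the bug */
    printf("Hello, %s\n", buffer);
}

int main(int argc, char *argv[]) {
    if (argc > 1) {
        greet(argv[1]);                /* attacker-controlled input */
    }
    return 0;
}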
Something to bear in mind is that there are two sides to protecting code from exploitation. You definitely want to try to remove as many vulnerabilities as you can by writing good code, having good quality control, and patching promptly, because these things limit the number of footholds attackers can get into your system. There is, however, an equally important second half to the equation: you also want to limit what attackers can do to leverage any vulnerabilities that do slip through the net. Humans write code, and humans are prone to mistakes, so it is inevitable that there will be bugs in all code. When you accept this painful truth, it's obvious that you need a second line of defence, and this is where Apple are really showing promise.
Apple made great use of the closed nature of their iOS mobile operating system to really test their technologies for limiting what apps can do, and by extension, what attackers can do when they find bugs that let them hijack apps. On iOS the kernel will only execute code that has been digitally signed. If an app's code has been tampered with, the signature will not match, and the OS will refuse to run the app at all, stopping the attackers in their tracks. As well as this, iOS also traps running apps inside so-called sandboxes, limiting their visibility into the files, folders, and processes outside their little prisons. This means that even if a developer manages to get an app that does something malicious approved by Apple, digitally signed, and into the App Store, the amount of damage the app can do is still very limited.
These protections are of course not perfect, because, like all code, the code implementing these protections is also written by imperfect humans, so it too is imperfect! However, by adding these layers of defence, the amount of work it takes to attack an iOS app is much greater than the amount of work it takes to attack apps running on less secured platforms. To successfully attack iOS, attackers need to do three things: find an exploitable bug in an app, then leverage that bug to break out of the sandbox by finding and attacking a bug in the sandbox code, and then, finally, leverage that bug to find and exploit a bug in the kernel so they can disable the enforced code signing. That's not impossible, but it is hard work, so it's more economical to go after less well-defended OSes like Android.
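On iOS those signature checks happen deep inside the OS, well out of developers' sight, but Apple expose the same concept to Mac developers through the Security framework, which gives a feel for what's involved. The following is only a rough sketch using that OS X API, and the path to Safari is just an example; a real check would also pass a SecRequirementRef spelling out who the signer must be, rather than the NULL used here.

#include <stdio.h>
#include <CoreFoundation/CoreFoundation.h>
#include <Security/Security.h>

/*
 * A rough sketch of verifying an app bundle's code signature with the
 * OS X Security framework. Build with something like:
 *   clang check_sig.c -framework Security -framework CoreFoundation
 */
int main(void) {
    CFURLRef appURL = CFURLCreateWithFileSystemPath(
        kCFAllocatorDefault,
        CFSTR("/Applications/Safari.app"),   /* just an example target */
        kCFURLPOSIXPathStyle,
        true);

    SecStaticCodeRef code = NULL;
    OSStatus status = SecStaticCodeCreateWithPath(appURL, kSecCSDefaultFlags, &code);

    if (status == errSecSuccess) {
        /* Fails if the bundle has been altered since it was signed. */
        status = SecStaticCodeCheckValidity(code, kSecCSDefaultFlags, NULL);
        printf("%s\n", status == errSecSuccess ? "Signature is valid"
                                               : "Signature check FAILED");
        CFRelease(code);
    }

    CFRelease(appURL);
    return (status == errSecSuccess) ? 0 : 1;
}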
With OS X 10.8 Mountain Lion Apple are bringing some of these iOS technologies into OS X, though with less restrictive configurations and policies. Gatekeeper in Mountain Lion will, by default, only run apps that have been digitally signed, so that we can be sure they come from their supposed source and have not been interfered with by a third party en route. Unlike on iOS, Apple will allow signed apps to be distributed by any means the developer wishes, and will not be forcing them through an Apple App Store. Apple have already introduced sandboxing technology in Lion, but again, in a more open way than on iOS. In OS X's sandboxing regime, apps can apply for more permissions than they can on iOS, but they will only be allowed out of their sandboxes to reach the resources they need to do their task, and no more. So, if you download a game that will never need to see your files, the sandbox will not allow the app to see out at all. Should that game get compromised, the malware is trapped unless the attacker can also find and exploit a flaw in Apple's sandboxing and code-signing code.
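On the sandboxing side, sandboxed Mac App Store apps declare the resources they need through entitlements baked into their signature, but OS X also ships an older C-level sandbox_init() call that nicely illustrates the underlying idea: a process locks itself down before doing any real work, and from then on the kernel refuses anything outside the chosen profile, no matter what bugs the rest of the code contains. A minimal sketch, using one of Apple's predefined named profiles:

#include <sandbox.h>
#include <stdio.h>

/*
 * A minimal sketch of a process voluntarily sandboxing itself via the
 * older sandbox_init() API. The "pure computation" profile denies access
 * to essentially all OS services, so once it's in place an attacker who
 * hijacks this process has very little to play with.
 */
int main(void) {
    char *error = NULL;

    if (sandbox_init(kSBXProfilePureComputation, SANDBOX_NAMED, &error) != 0) {
        fprintf(stderr, "sandbox_init failed: %s\n", error);
        sandbox_free_error(error);
        return 1;
    }

    /* From here on, attempts to touch files, the network, etc. should fail. */
    FILE *f = fopen("/etc/passwd", "r");
    printf("fopen(/etc/passwd) %s\n", f ? "succeeded" : "was blocked by the sandbox");
    if (f) fclose(f);

    return 0;
}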
One of the key things to note about sandboxing and code signing is that they are whitelisting technologies. They work by disallowing everything that is not positively known to be safe. This is in stark contrast to the blacklisting approach older technologies like anti-virus rely on, where everything is assumed safe unless it's on a list of known bad things. We've been relying on blacklisting for decades, and the data is in: blacklisting simply does not work! If it did, we'd be living in a spam and virus free utopia, and we very clearly aren't!
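To put that distinction into code terms (the lists and identifiers below are entirely made up, it's only the shape of the two checks that matters): a blacklist waves through anything it hasn't seen before, which is exactly what brand-new malware is, while a whitelist blocks it by default.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Entirely made-up sample data -- only the shape of the checks matters. */
static const char *known_bad[]  = { "virus-abc", "trojan-xyz" };
static const char *known_good[] = { "signed-by-apple", "signed-by-known-dev" };

static bool in_list(const char *id, const char **list, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (strcmp(id, list[i]) == 0) return true;
    return false;
}

/* Blacklisting (the anti-virus model): allowed unless known to be bad,
 * so anything new -- including brand-new malware -- gets through. */
static bool blacklist_allows(const char *id) {
    return !in_list(id, known_bad, sizeof(known_bad) / sizeof(known_bad[0]));
}

/* Whitelisting (the code-signing/sandboxing model): refused unless
 * positively known to be good, so anything new is blocked by default. */
static bool whitelist_allows(const char *id) {
    return in_list(id, known_good, sizeof(known_good) / sizeof(known_good[0]));
}

int main(void) {
    const char *brand_new_malware = "never-seen-before";
    printf("blacklist says: %s\n", blacklist_allows(brand_new_malware) ? "run it" : "block it");
    printf("whitelist says: %s\n", whitelist_allows(brand_new_malware) ? "run it" : "block it");
    return 0;
}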
It's very brave of Apple to move towards whitelisting, because it will cause all sorts of reactionary people to scream blue murder, but from a security point of view it puts them miles ahead of Windows, and it will probably stay that way for a long time, because I don't see MS having the backbone to push through whitelisting any time soon!
So, are the people who say Apple are bad at security right? Yes, they have done a poor job of patching. But, equally, the people who say Apple are leading the way forward are also right! If Apple stick to their guns on sandboxing and code signing, and get their act together on patching, Mac users could be in for a comparatively safe ride for a long time to come.