Why Apple’s walled garden is no match for Pegasus spyware

Guardian Technology 21 Jul 2021 10:50

You will, by now, have heard about Pegasus. It’s the brand name for a family of spyware tools sold by the NSO Group, an Israeli outfit of hackers-for-hire who sell their wares to intelligence agencies, law enforcement, and militaries around the world.

An investigation by the Guardian and 16 other media organisations around the world into a massive data leak suggests widespread abuse of NSO Group’s hacking software by government customers. The company insists it is intended for use only against criminals and terrorists but the investigation has revealed that journalists, human rights activists and opposition politicians are also being targeted. Since our phones are increasingly external brains, storing our lives in digital form, a successful deployment of Pegasus can be devastating. Messages, emails, contact details, GPS location, calendar entries and more can be extracted from the device in a matter of minutes.

On Sunday, the Guardian and its media partners began to publish the results of the investigation into the NSO Group, Pegasus, and the people whose numbers appear on the leaked list:

The presence of a number in the data does not reveal whether there was an attempt to infect the phone with spyware such as Pegasus, the company’s signature surveillance tool, or whether any attempt succeeded. The list also contains a small number of landlines and US numbers, which NSO says are “technically impossible” to access with its tools – suggesting that some targets were selected by NSO clients even though their devices could not be infected with Pegasus.

But this is a tech newsletter, and I want to focus on the tech side of the story. Chiefly: how the hell did this happen?

Pegasus affects the two largest mobile operating systems, Android and iOS, but I’m going to focus on iOS here for two reasons. One is a technical point that I’ll get to in a bit; the other is that, although Android is by far the most widely used mobile OS, iPhones have a disproportionately high market share among many of the demographics targeted by the customers of NSO Group.

It’s also because iPhones have a reputation for security. From the earliest days of the platform, Apple fought to ensure that hacking iOS was hard, that downloading software was easy and safe, and that installing patches to protect against newly discovered vulnerabilities was the norm.

It’s worth pausing to note what is, and isn’t, worth criticising Apple for here. No software on a modern computing platform can ever be bug-free, and as a result no software can ever be fully hacker-proof. Governments will pay big money for working iPhone exploits, and that motivates a lot of unscrupulous security researchers to spend a lot of time trying to work out how to break Apple’s security.

What that means in practice is that the only thing standing between iOS users and an attacker is Apple itself – and because the platform is locked down so tightly that even defenders and security tools are kept out, if Apple fails there is no second line of defence.

At the heart of the criticism, the security researcher Patrick Wardle accepts, is a solid motivation. Apple’s security model is built for the 99% – or more – for whom the biggest security threat they will ever face is downloading a malicious app while hunting for an illegal stream of a Hollywood movie; for them, the walled garden keeps their data safe. Apps can only be downloaded from the company’s own App Store, where they are supposed to be vetted before publication. Once installed, they can only access their own data, or data a user explicitly decides to share with them. And no matter what permissions they are given, a whole host of the device’s capabilities are permanently blocked off from them.
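To make that model concrete, here is a minimal Swift sketch of what the consent gate looks like from an app developer’s side – assuming an ordinary iOS app using Apple’s Contacts framework, with nothing here specific to Pegasus. Access to the contacts database has to go through a system API (and be declared in the app’s Info.plist), and the app gets nothing unless the user agrees:

```swift
import Contacts

// Sandboxed apps cannot read the contacts database directly: they must go
// through CNContactStore, which shows a system-owned consent prompt the first
// time (and requires an NSContactsUsageDescription entry in Info.plist).
let store = CNContactStore()

store.requestAccess(for: .contacts) { granted, error in
    guard granted else {
        // The user declined (or access is restricted by policy): the API
        // returns nothing, and the sandbox offers no other route to the data.
        print("Contacts access denied:", error?.localizedDescription ?? "user declined")
        return
    }

    // Even with consent, the app only receives the fields it asks for.
    let keys = [CNContactGivenNameKey, CNContactFamilyNameKey] as [CNKeyDescriptor]
    let request = CNContactFetchRequest(keysToFetch: keys)
    try? store.enumerateContacts(with: request) { contact, _ in
        print(contact.givenName, contact.familyName)
    }
}
```

Pegasus’s value to NSO’s customers is precisely that it never has to pass through this gate: it exploits flaws in the operating system itself, inheriting access to everything.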

A similar problem exists at the macro scale. An increasingly common way to protect critical systems is to harness the fact that an endless supply of highly talented professionals is constantly trying to break them – and to pay those researchers for the vulnerabilities they find. This model, known as a “bug bounty”, has become widespread in the industry, but Apple has been a laggard. The company does offer bug bounties, but for one of the world’s richest organisations its rates are pitiful: an exploit of the sort NSO Group deployed would command a reward of about $250,000, which would barely cover the salaries of a team capable of finding it – let alone outbid the competition, which wants the same vulnerability for darker purposes.

In a statement, Apple said: “Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place … Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals.”

There are ways round some of these problems. Digital forensics does still work on iPhones – despite, rather than because of, Apple’s stance. In fact, that’s the other reason why I’ve focused on iPhones rather than Android devices here: while NSO Group was good at covering its tracks, it wasn’t perfect. On Android devices, the relative openness of the platform seems to have allowed the company to erase its traces entirely, meaning we have very little idea which of the Android users targeted by Pegasus were successfully infected.
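Much of that forensic work is, at its core, pattern-matching: investigators – notably Amnesty International’s Security Lab, whose Mobile Verification Toolkit (MVT) underpinned this investigation – pull records out of an iPhone backup and compare them against published indicators of compromise, such as domains linked to Pegasus infrastructure. Here is a minimal Swift sketch of that idea; the file path and indicator domains are hypothetical stand-ins, not real indicators:

```swift
import Foundation

// Hypothetical indicator list: real investigations load Amnesty's published
// STIX2 indicators of compromise rather than hard-coding domains like these.
let indicators: Set<String> = ["malicious-c2.example.net", "fake-redirect.example.com"]

// Hypothetical path to records (visited URLs, process names) already
// extracted from a decrypted iPhone backup, one entry per line.
let recordsPath = "/tmp/backup-extract/records.txt"

guard let contents = try? String(contentsOfFile: recordsPath, encoding: .utf8) else {
    fatalError("Could not read extracted records at \(recordsPath)")
}

// Flag any record mentioning a known indicator – a crude version of what
// tools like MVT do across SMS, Safari and network-usage databases.
for (index, line) in contents.split(separator: "\n").enumerated() {
    for indicator in indicators where line.contains(indicator) {
        print("Possible match on line \(index + 1): \(line)")
    }
}
```

The crucial point stands, though: on Android, NSO was apparently able to delete the very records a check like this depends on.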

So there you go: the same opacity that makes Apple devices generally safe makes it harder to protect them when that safety is broken. But it also makes it hard for the attackers to clean up after themselves. Perhaps two wrongs do make a right?
