Breathtaking iPhone hack

mostlysignssomeportents:


AWDL is Apple’s mesh networking protocol, a low-level, device-to-device wireless system that underpins tools like AirDrop. It is implemented in the iOS kernel, a high-privilege, high-risk zone in iPhone and iPad internals.
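For a sense of how broad that device-to-device surface is, here is a minimal Swift sketch using Apple’s high-level MultipeerConnectivity framework, which uses AWDL-based peer-to-peer Wi-Fi (among other transports) for nearby discovery. The “demo-svc” service name is made up for illustration, and this shows only the benign, app-facing side of the stack - the frame parsing that Beer attacked happens far below it, in the kernel:

import MultipeerConnectivity

// Identity for this device on the local peer-to-peer network.
let peerID = MCPeerID(displayName: "demo-device")

// An encrypted session that nearby peers could join.
let session = MCSession(peer: peerID,
                        securityIdentity: nil,
                        encryptionPreference: .required)

// Advertise a made-up service so any nearby device can discover this one.
let advertiser = MCNearbyServiceAdvertiser(peer: peerID,
                                           discoveryInfo: nil,
                                           serviceType: "demo-svc")
advertiser.startAdvertisingPeer()

// Simultaneously browse for other devices advertising the same service.
let browser = MCNearbyServiceBrowser(peer: peerID, serviceType: "demo-svc")
browser.startBrowsingForPeers()

The point of the sketch: a couple of calls from any app light up this radio interface, which is why a bug in the kernel code that parses those frames is reachable by anyone in radio proximity.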

A researcher at Google’s Project Zero, Ian Beer, found a vulnerability in AWDL that allowed him to wirelessly infect iOS devices, then have them go on to spread the infection wirelessly to any iOS devices they came into contact with.

https://googleprojectzero.blogspot.com/2020/12/an-ios-zero-click-radio-proximity.html

The proof-of-concept attack undetectably grants “full access to the user’s personal data, including emails, photos, messages, and passwords and crypto keys stored in the keychain.”

https://arstechnica.com/gadgets/2020/12/iphone-zero-click-wi-fi-exploit-is-one-of-the-most-breathtaking-hacks-ever/

Beer developed the exploit virtually single-handedly over six months and confidentially disclosed its details to Apple, which issued patches for it earlier this year. Now that the patch has had time to propagate, Beer has released a detailed, formal account of his work.

The 30,000-word technical paper is heavy reading, but if you want inspiration to delve into it, try the accompanying 14-second video, which is one of the most remarkable (and alarming) infosec clips I’ve ever seen.

https://www.youtube.com/watch?v=ikZTNSmbh00

As far as is known, this was never exploited in the wild. In his Ars Technica coverage of the exploit, Dan Goodin drops the other shoe: “If a single person could do all of this in six months, just think what a better-resourced hacking team is capable of.”

It’s a theme that Beer himself explores in a Twitter thread, in which he describes the tradeoffs in protocols like AWDL, whose ease of use was critical to private messaging by Hong Kong protesters last year…

https://twitter.com/i41nbeer/status/1333884906515161089

But whose “large and complex attack surface [exposed] to everyone in radio proximity” creates a security nightmare if there are any bugs at all in the code… and unfortunately, “the quality of the AWDL code was at times fairly poor and seemingly untested.”

It’s a sobering reminder that companies can’t fully audit their own products. Even companies with sterling security track records like Apple slip up and miss really, really, REALLY important stuff.

This is at the heart of why independent security research must be protected - at a moment when it is under assault, as outdated laws like the Computer Fraud and Abuse Act are used to punish researchers who go public with their work.

Dominant companies - including Google and Apple - have taken the position that security disclosures should be subject to a corporate veto (in other words, that companies should be able to decide when their critics can make truthful disclosures about their mistakes).

When the W3C introduced EME, it created the first-ever standardized browser component whose security defects could be suppressed under laws like the CFAA and Section 1201 of the DMCA.

W3C corporate members opposed measures to require participants to promise NOT to punish security researchers who warned browser users of ways they could be attacked through defects in EME.

https://www.eff.org/deeplinks/2017/09/open-letter-w3c-director-ceo-team-and-membership

And Google is presently using the DMCA to suppress code that reveals defects in its own EME implementation, Widevine, which has become the industry standard.

https://krebsonsecurity.com/2020/10/google-mending-another-crack-in-widevine/

In his thread, Beer rightfully praises both Apple and Google for their bug bounty programs, which serve as a carrot to entice security researchers into disclosing to the company first and giving it time to patch before going public.

(And he calls on Apple to award him a bounty that he can donate to charity, which, with corporate charitable matching, would come out to $500K. This is a no-brainer that Apple should totally do.)

But as laudable as the bug bounty carrot is, let us not forget that the companies still jealously guard the stick: the right to seek fines and even prison time for security researchers who decide that they don’t trust the companies to act on disclosures.

That may sound reasonable to you - after all, it’s reckless to just blurt out the truth about an exploitable bug before it’s been patched. But companies are really good at convincing themselves that serious bugs aren’t serious and just sitting on them.

When that happens, security researchers have to make a tough call: do they keep mum and hope that no one else replicates their findings and starts to attack users, or do they go public so that people can stop using dangerously defective products?

It’s a call that Google’s Project Zero has made repeatedly. In 2015, they went public with a serious, unpatched, widespread Windows bug when they got tired of waiting for Microsoft to fix it:

https://www.engadget.com/2015-01-02-google-posts-unpatched-microsoft-bug.html

And in October, Google disclosed another Windows 0-day that was being exploited in the wild, presumably reasoning that it was better to tell users they were at risk, even if it meant giving ammo to new waves of hackers.

https://arstechnica.com/information-technology/2020/10/googles-project-zero-discloses-windows-0day-thats-been-under-active-exploit/

Bug bounties are great - essential, even. But so long as companies get to decide who can tell the truth about the defects in their products, bug bounties won’t be enough. The best, most diligent security teams can make dumb mistakes that create real risk.

Your right to know whether you are at risk should not be subject to a corporate whim. The First Amendment - and free speech protections encoded in many other legal systems - provides a high degree of protection for truthful utterances.

The novel and dangerous idea that corporations should have a veto over the truth about their mistakes is completely irreconcilable with these free speech norms and laws.