16 September 2024

Apple Intelligence Promises Better AI Privacy. Here’s How It Actually Works

Lily Hay Newman

The generative AI boom has, in many ways, been a privacy bust thus far, as services slurp up web data to train their machine learning models and users’ personal information faces a new era of potential threats and exposures. With the release of Apple’s iOS 18 and macOS Sequoia this month, the company is joining the fray, debuting Apple Intelligence, which it says will ultimately be a foundational service across its ecosystem. But Apple has a reputation for prioritizing privacy and security to uphold, so the company took a big swing: it developed extensive custom infrastructure and transparency features, known as Private Cloud Compute (PCC), for the cloud services Apple Intelligence uses when the system can't fulfill a query locally on a user's device.

The beauty of on-device data processing, or “local” processing, is that it limits the paths an attacker can take to steal a user's data. The data never leaves the computer or phone, so that's what the attacker has to target. It doesn't mean an attack will never be successful, but the battleground is defined and constrained. Giving data to a company to process in the cloud isn't inherently a security issue; an unfathomable amount of data moves through global cloud infrastructure safely every day. But it expands that battleground immensely and also creates more opportunities for mistakes that inadvertently expose data. The latter has been a particular issue with generative AI, given the unintended ways that a system tasked with generating content may access and share information.
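To make the trade-off concrete, here is a minimal sketch in Swift of the routing pattern described above: answer a query on the device when a local model can handle it, and only otherwise hand it to a cloud backend, where the expanded attack surface lives. The types and names used here (Query, LocalModel, CloudEndpoint, handle) are illustrative assumptions for this article, not Apple's actual Apple Intelligence or Private Cloud Compute APIs.

```swift
import Foundation

// Illustrative sketch only: these types are hypothetical and do not
// represent Apple's real Apple Intelligence or PCC interfaces.

/// A query the assistant needs to answer.
struct Query {
    let text: String
    /// Rough proxy for how much model capacity the query needs.
    var estimatedComplexity: Int { text.count }
}

/// A small model that runs entirely on the device.
struct LocalModel {
    let maxComplexity: Int

    /// Returns an answer if the query fits within on-device capacity,
    /// otherwise nil to signal that cloud help is needed.
    func answer(_ query: Query) -> String? {
        guard query.estimatedComplexity <= maxComplexity else { return nil }
        return "local answer for: \(query.text)"
    }
}

/// Stand-in for a request to a hardened cloud backend. In a real system
/// this is the step where data leaves the device, so transport encryption,
/// server attestation, and retention guarantees all come into play.
struct CloudEndpoint {
    func answer(_ query: Query) -> String {
        // Placeholder; a real client would send an encrypted request here.
        return "cloud answer for: \(query.text)"
    }
}

/// Route a query: prefer on-device processing, fall back to the cloud.
func handle(_ query: Query, local: LocalModel, cloud: CloudEndpoint) -> String {
    if let onDevice = local.answer(query) {
        return onDevice          // data never left the device
    }
    return cloud.answer(query)   // data crosses the network boundary
}

// Example usage
let local = LocalModel(maxComplexity: 40)
let cloud = CloudEndpoint()
print(handle(Query(text: "short question"), local: local, cloud: cloud))
print(handle(Query(text: String(repeating: "a long, involved request ", count: 5)),
             local: local, cloud: cloud))
```

The point of the sketch is simply that the fallback branch is where the battleground widens; everything Apple describes PCC as doing is aimed at constraining what can go wrong on that path.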
