- cross-posted to:
- hackernews@lemmy.bestiver.se
blog.cryptographyengineering.com
The author only mentions homomorphic encryption in a footnote:
Notes:
(A quick note: some will suggest that Apple should use fully-homomorphic encryption [FHE] for this calculation, so the private data can remain encrypted. This is theoretically possible, but unlikely to be practical. The best FHE schemes we have today really only work for evaluating very tiny ML models, of the sort that would be practical to run on a weak client device. While schemes will get better and hardware will too, I suspect this barrier will exist for a long time to come.)
And yet Apple claims to be using homomorphic encryption to provide their “private server” AI compute:
Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem
Presumably the author doubts Apple’s implementation, but for some reason he has written a whole blog post about AI and encryption without mentioning why Apple’s homomorphic encryption system doesn’t work.
I’d be quite interested to know what exactly is the weakness in their implementation. I imagine Apple and everyone who uses their services would be interested to know too. So why not mention it at all?
Might be the difference between FHE and regular HE. I don’t know a lot about this subject, but if HE were more practical, I’d expect to see it used a lot more, outside of ML too.
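For the curious, here’s a toy sketch (my own, not from the article) of what “regular” partially homomorphic encryption means, using textbook unpadded RSA, which is insecure and purely illustrative. A partially homomorphic scheme supports one operation on ciphertexts (here, multiplication); FHE supports arbitrary computation (both addition and multiplication, chained indefinitely) on encrypted data, which is what makes it so much more expensive.

```python
# Toy demo of *partially* homomorphic encryption with unpadded RSA.
# WARNING: unpadded RSA with tiny primes is completely insecure;
# this only illustrates the homomorphic property.
#
# RSA ciphertexts are multiplicatively homomorphic:
#   Enc(m1) * Enc(m2) mod n == Enc(m1 * m2 mod n)
# but there is no corresponding homomorphic addition, which is the
# key limitation compared with FHE.

p, q = 61, 53                       # tiny primes, demo only
n = p * q                           # RSA modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = enc(m1), enc(m2)

# Multiply the ciphertexts without ever decrypting them:
c_prod = (c1 * c2) % n

assert dec(c_prod) == (m1 * m2) % n
print(dec(c_prod))  # -> 42
```

A scheme like this is cheap, but you can only ever multiply. The moment your computation also needs additions (as almost any ML model does), you need FHE, and that’s where the performance cliff the article’s footnote mentions comes in.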
Despite the bad title, the article itself is worth a read. The topics covered have been discussed for a long time, but it serves as a good reminder.
A point the author raises is about data security in end-to-end encrypted communications when AI is involved. Remember that end-to-end encryption specifically protects data in transit? It doesn’t do anything after the data is delivered to the end device. Even before the age of “AI”, the other end could do whatever they wanted with that piece of data. They could share the communication with another person next to them, which the sender might or might not know about, upload it to social media, or hand it to law enforcement. And the “AI” the tech industry is pushing is just another participant in the communication, built right into the device. It can do exactly what any recipient can: try to (badly) summarize the communication for you, submit that communication to a third party, or even report you for CSAM because it decides you’re engaging in “grooming behavior.”
The author also asks the question, “Who does your AI agent actually work for?” However, this question has already been answered by Windows Recall, the prime example of an AI agent. It collects data in an attempt to “help” us recall things from the past, but it will answer questions from anyone with access to it, be it you, your family and friends, or even law enforcement. The answer is: anyone.