For anyone wondering the paper is here https://arxiv.org/abs/1907.13022
and the source code here https://github.com/rharper2/EfficientLearningDataSet
the article itself just links to a paywall
Always kind of bizarre seeing people almost scared of quantum computing because it could, in theory, break modern encryption (Shor's algorithm is a proven, efficient quantum algorithm for prime factorization), while on the other end, quantum computing research is only now figuring out how to, you know, not have random data changes in its computations. Imagine writing code, assigning a boolean to a variable, and not being sure that the variable will still hold the same value when you read it back.
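For context, that "proven, efficient algorithm" is Shor's, and only its period-finding step needs a quantum computer; everything else is ordinary number theory. A minimal sketch of the classical part, with the quantum subroutine replaced by a brute-force stand-in (so this only works for toy-sized numbers):

```python
from math import gcd

def classical_period(a, n):
    # Brute-force stand-in for the quantum period-finding subroutine:
    # find the smallest r > 0 with a^r ≡ 1 (mod n).
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(n, a):
    # Classical part of Shor's algorithm: turn a period r of a^x mod n
    # into a nontrivial factor of n. Works when r is even and
    # a^(r/2) is not ≡ -1 (mod n); otherwise you retry with another a.
    g = gcd(a, n)
    if g != 1:
        return g                  # lucky guess already shares a factor
    r = classical_period(a, n)
    if r % 2 != 0:
        return None               # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry
    return gcd(y - 1, n)

print(shor_postprocess(15, 7))    # → 3
```

The point is that the expensive step (period finding) is the only quantum part, and a failed attempt is detected immediately, so you just pick another base `a` and try again.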
Also, we apparently already have cryptographic schemes designed to resist quantum attacks on prime factorization, so I don't see the sky falling anytime soon. Not for that reason, at least.
Yeah, though it's not quite that simple: elliptic-curve cryptography relies on the discrete logarithm problem, which Shor's algorithm also breaks. The actual post-quantum candidates are schemes built on different problems, for example lattices, hash functions, or error-correcting codes.
Elliptic curves are in a bit of a weird spot anyway: only certain curves are considered secure, and some widely deployed curves were standardized with parameters chosen by US intelligence agencies, so people worry that curves were declared "secure" that those agencies have working attacks against.
It would also be nice if we could switch over something like a decade before quantum decryption becomes viable, so that adversaries can't record communications now, decrypt them later, and extract passwords that are still valid, and so on.
But yeah, with all of that in mind, I don't see quantum computing being a problem yet. It won't be here for a while, and when it does arrive, it won't magically decrypt all communications overnight either.
As I understand it though, quantum computers will have an extremely high probability of getting the right answer with the right algorithms. You wouldn't say your account is insecure just because someone could randomly guess your password.
Yeah, I've heard that too: the result usually only narrows things down, and you then use a classical computer to check the remaining possibilities.
But these inaccuracies are only acceptable in the output. If the input is already read wrong, i.e. the wrong number to factorize, or the state gets completely jumbled during the calculation, then the output won't be an approximation, it'll be completely wrong.
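The saving grace for factoring specifically is that checking an answer classically is trivial, so even a machine that's wrong most of the time just gets re-run until a verified answer comes out. A toy sketch (the `noisy_factor_oracle` here is entirely made up, just simulating a device whose runs usually fail):

```python
import random

def noisy_factor_oracle(n, error_rate=0.6):
    # Hypothetical stand-in for one noisy quantum factoring run:
    # with probability error_rate it returns garbage.
    if random.random() < error_rate:
        return random.randrange(2, n)   # junk answer
    # A "successful" run: return the smallest prime factor
    # (computed classically here, since this is just a simulation).
    d = 2
    while n % d:
        d += 1
    return d

def factor_with_verification(n, oracle, max_tries=1000):
    # Run the noisy oracle repeatedly; a cheap classical divisibility
    # check filters out wrong answers, so only a verified factor of n
    # is ever returned.
    for _ in range(max_tries):
        candidate = oracle(n)
        if 1 < candidate < n and n % candidate == 0:
            return candidate
    return None

print(factor_with_verification(91, noisy_factor_oracle))
```

Since each run is independent, the chance that all k runs fail shrinks as (1 − p)^k, so a per-run success probability of even 40% is plenty.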
But well, I'm also not scared, because I've also heard that for prime factorization to be viably fast you'd still need millions of qubits, and current machines are still in the double digits.
Also, these qubits currently have to be cooled to within a fraction of a degree of absolute zero, and adding more qubits tends to destabilize the others, so it's not like we're on the verge of massively increasing qubit counts.