Andy Yen, the CEO of Proton (Mail, Drive, VPN, Pass…), answered a lot of the questions you, the community, asked in an interview that covers basically everything!

He discusses security, privacy, the origins of Proton, how they operate, Linux support, future projects, products and features, quantum computing, passkeys, and more!

Proton Mail: https://proton.me/mail/TheLinuxEXP
Proton VPN: https://protonvpn.com/TheLinuxEXP

👏 SUPPORT THE CHANNEL: Get access to a weekly podcast, vote on the next topics I cover, and get your name in the credits:

YouTube: https://www.youtube.com/@thelinuxexp/join
Patreon: https://www.patreon.com/thelinuxexperiment
Liberapay: https://liberapay.com/TheLinuxExperiment/

Or, you can donate whatever you want: https://paypal.me/thelinuxexp

👕 GET TLE MERCH
Support the channel AND get cool new gear: https://the-linux-experiment.creator-spring.com/

🎙️ LINUX AND OPEN SOURCE NEWS PODCAST: Listen to the latest Linux and open source news, with more in depth coverage, and ad-free! https://podcast.thelinuxexp.com

🏆 FOLLOW ME ELSEWHERE:
Website: https://thelinuxexp.com
Mastodon: https://mastodon.social/web/@thelinuxEXP
Pixelfed: https://pixelfed.social/TLENick
PeerTube: https://tilvids.com/c/thelinuxexperiment_channel/videos
Discord: https://discord.gg/mdnHftjkja

#vpn #privacy #proton #onlinesecurity #protonmail

Timecodes:

00:00 Intro
01:16 How did Proton start?
03:24 Why start with email?
06:03 What is Proton’s business model?
08:34 Why set up in Switzerland?
11:33 What data do you have on customers?
14:39 How is encryption important?
18:20 Do you always need to use a VPN?
20:47 Why focus on building an ecosystem?
24:55 Is an Office Suite planned?
26:29 What differentiates Proton from competitors?
30:26 Is Proton a viable alternative to big tech services?
33:31 Why expand to more products instead of finishing existing ones?
37:19 Does the general public care about privacy?
38:45 What’s next for Proton services?
40:08 What are the plans for native Linux clients?
46:03 Will ProtonVPN offer dedicated IPs to everyone?
47:46 What’s the environmental impact of Proton?
49:27 Proton on F-Droid, without Google Play notifications?
52:03 Why are code repos all separated and hard to find?
53:12 Why are addresses ending in “.me”?
54:57 When will all apps reach feature parity?
56:24 Will SMTP relay be supported?
57:47 Will Proton focus more on businesses in the future?
59:50 Why put all your eggs in one basket with just Proton services?
01:01:00 Will Proton support passkeys?
01:03:21 Does E2E matter if the recipient isn’t using it?
01:04:49 Will Proton disable port forwarding in VPN?
01:06:41 Is encryption enough to make email private?
01:09:06 What protects users from a change in Proton’s code licensing?
01:11:14 How does Proton protect its infrastructure?
01:13:14 Impacts of Quantum Computing on privacy and security?
01:14:24 What’s the future of Proton Bridge?
01:16:25 When will Proton Photos be a thing?
01:17:17 Plans for Proton Notes?
01:18:20 Will VPN support the Apple TV?
01:21:12 Support the channel

  • sudneo@lemmy.world · 9 months ago

    Yeah, i think it is a feature, and a very beneficial one for the people this system was designed for - those who want a lot of privacy-desiring users to settle on using an encryption solution which isn’t too difficult to circumvent.

    This you need to prove somehow. Has there been any attack that happened like this? Has there been any content leaked this way, or provided to law enforcement? In other words, did they use this “feature” in any way? Because if this is just a design limitation, then it’s not a feature, it’s a risk, exactly like using someone else’s code exposes you to supply chain risk. Would you say that anybody who uses any external library is actually a snake-oil seller with respect to the properties of their product, because if a supplier (library, dependency, etc.) gets compromised their product could be compromised? I wouldn’t say so. I think that intentions matter here.

    Note that, throughout this discussion, I’m not really just talking about Proton but rather them and Tuta and Hushmail and anything else that shares this architecture.

    Yes, I understand.

    Well, they could be honest and inform their users: “to have the convenience of using webmail you must sacrifice the benefit of end-to-end encryption (not needing to trust the server and its operators to refrain from reading your messages).”

    But that’s not true. End-to-end encryption simply means that the encrypt/decrypt operations happen on the client side. It doesn’t mean that it’s an unbreakable design. Following this logic, every piece of software that does PGP encryption should say “to have the convenience of not having to rewrite all the code ourselves we use suppliers which might allow third parties to read your messages”. Proton content is still end-to-end encrypted, with the code hosted publicly. The fact that vectors exist to invalidate that is not a reason to invalidate the whole thing, exactly like the existence of supply chain attacks is not a reason to dismiss the validity of e2ee for CLI tools and the like.

    Also, I mentioned the potential to use the bridge. That is a fully client-side tool which does not run in the browser; does that satisfy your risk appetite?

    Yep. But no matter how tight their processes are, there are still single points of failure that can be coerced to gain access to anyone’s email.

    They are a point of failure, not necessarily a single point of failure (as in a single person).

    but I don’t have the energy to explain to you why selling something as e2ee while it reduces to (among other things) specifically the security of TLS is dishonest.

    But this was not your claim; your claim was that compromising them and serving backdoored JS was not the only way, and that an attacker in an appropriate network position could achieve the same. I am saying that particular vector does not apply, because your browser will actually refuse to load Proton without a valid certificate due to HSTS. So an attacker can tamper with the code only at either of the “ends” (either compromising them or compromising your endpoint).
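
    To be concrete about the HSTS point: the policy is just a response header the site sends, and anyone can inspect it themselves. Here is a minimal Python sketch; the domain is only an example and the value in the comment only shows the typical shape:

        # Print the HSTS policy a site advertises. Requires network access.
        import urllib.request

        def hsts_policy(domain: str) -> str | None:
            with urllib.request.urlopen(f"https://{domain}/") as resp:
                return resp.headers.get("Strict-Transport-Security")

        if __name__ == "__main__":
            # Typically something like "max-age=31536000; includeSubDomains; preload"
            print(hsts_policy("proton.me"))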

    I just checked their site and they still say it’s “for journalists”, and “we can never access your messages”, etc etc.

    Just for reference, what I meant is that the people referred to by the statement “and the incorrect perception that ProtonMail’s end-to-end encryption provides meaningful security is undoubtedly preventing some of their customers from using better tools instead.” are not those who have that risk model. Journalists and other at-risk people have technical consultants and are (hopefully?) aware of the risks, and can apply additional controls (for example, using Proton to send encrypted content). They are not those who won’t use other - more secure - channels than email because they read Proton pages.

    If what you want is not privacy from adversaries who can compromise your mailserver, but rather just protection from GMail reading your mail, then you don’t need e2ee: you need a provider with a privacy policy you believe they will honor.

    e2ee is just a very nice and clear-cut way to enforce the privacy policy. Law enforcement can still get the data from a provider. If the data is not collected, the data cannot be given. Sure, it’s possible that a 3-letter agency will coerce Proton into compromising a user, but a) this has not happened yet (as far as we know?) and b) again, if that’s part of your risks, don’t use email, or just use email to send encrypted content…

    Why would you assume they are when they’re lying about their ability to read your emails?

    You seem to be really fixated on this statement, but it’s not true. They don’t have the “ability” to read emails. They have a setup that - provided a violation of controls that neither of us knows about - could possibly grant them that ability. I really don’t understand why you think it’s different from any other software. If the NSA goes to https://www.gnupg.org and says “you know what, the next time you serve your software to IP x.x.x.x, you serve this package”, you will never know and your encryption is toast. Would you say that the folks behind GnuPG “have the ability to read your emails”? I wouldn’t, because they are not backdooring the software, although the possibility for them, contributors and national actors to do that exists.

    rather you are just saying that you think it is very unlikely that they would ever abuse that capability and that you assume their procedures make it so that one rogue employee couldn’t do it alone. You do seem to understand that, contrary to what they’ve written in the screenshot above, ProtonMail as a company technically could decide to.

    Yeah, you are correct. This is exactly the same as me saying that technically a lot of people in my organization could tamper with payments and violate the integrity of most UK bank transfers. In practice, there are a bazillion controls in place to ensure that this does not happen, and before touching production there are tons of safeguards, but theoretically my company could decide to break compliance, remove procedures and allow a free-for-all on banking transactions before being fined/shut down/sent to the abyss.

    I do believe that they have no interest whatsoever in abusing this architectural feature, but I agree that they could be coerced to. However, as I said before, I believe the same to be true for any other software, which is why I don’t agree that the risk model is significantly different from that of many other tools. If anything, the fact that they are under Swiss jurisdiction might help, compared to a lot of (F)OSS entities which are in the US.

    But, do you think most of their customers understand that?

    No, I think most people don’t.

    which of these statements do you think is the most likely to be accurate:

    I have no idea. I would say 1 or 3 are the most likely. It seems a very unnecessary way (if I were a certain 3 letter agency) to gain access to a small set of data, when I could compromise the whole device and maintain persistence much more conveniently (for example, coercing the ISP to give me access to the router and going from there, or asking Apple and Microsoft directly, etc.).

    If it were revealed that #4 were in fact the case, would you agree that it is snakeoil? If you agree with me that #3 is the most likely scenario, approximately how many times per hour/week/year would they need to be complying with these requests before you would agree that they are, in fact, snakeoil?

    I would say that they should disclose that for sure, at least with a warrant canary, since they might actually not even be allowed to fully disclose it. I am fairly conflicted between the fact that government surveillance sometimes has a legitimate reason to be exercised, provided a judge has vetted it and proper guarantees are in place (not the US way, to be clear), and the fact that it is routinely abused. I also believe that perfect security does not exist, and it’s enough for me to send an encrypted attachment via Proton to mitigate this whole risk.

    To answer your question, I would say that if this is a forced action that happened a handful of times, for extremely high profile cases and severe reasons, then I might still consider their claim legitimate. If it’s a routine procedure to satisfy pretty much any request, then I would agree that this becomes more of a feature than an attack.

    That said, I have a couple of final questions for you too:

    • Proton Bridge runs on the client and does not use the browser. The code is open source. Since they provide this too, would you consider this on par with using your favorite CLI/plugin for PGP? Would this solve the problem you raise?
    • Do you think that it’s possible that any of the 3-letter agencies could coerce a software author (or some collaborator) and produce a malicious release for the code that is served only to you (for example, by IP, fingerprint or other identifier) or that activates only for you (device ID, etc.)? For example, go to Kevin McCarthy and force him to produce a version of Mutt (http://www.mutt.org/download.html) which is backdoored to leak your keys.
    • Do you think that, alternatively, GitHub/Bitbucket for example could be coerced by said agencies to backdoor the version (and signature) you get for a given piece of code, say https://bitbucket.org/mutt/mutt/downloads/mutt-2.2.12.tar.gz (maybe after graciously “asking” Kevin for his key to sign the software)?
    • If you think the above is possible, do you think there is any software distributor that could not be coerced? And how is this vector actually different from Proton being forced to break their own encryption?
    • If you agree that the above is possible, would you say that any claim about Mutt using PGP to e2e encrypt/decrypt your emails is snakeoil?
    • Arthur Besse · 9 months ago

      Yeah, i think it is a feature, and a very beneficial one for the people this system was designed for - those who want a lot of privacy-desiring users to settle on using an encryption solution which isn’t too difficult to circumvent.

      This you need to prove somehow.

      I said “i think” because, unlike many of the other things I’m saying here which are statements of fact, my suggestion that ProtonMail specifically is designed for this attack to be possible is merely well-informed speculation.

      Has there been any attack that happened like this?

      See the links in my earlier comments for evidence of this kind of attack happening against all three of the other largest email providers with architectures similar to ProtonMail’s (Tuta, Hushmail, and Lavabit).

      Also, I mentioned the potential to use the bridge. That is a fully client-side tool which does not run in the browser; does that satisfy your risk appetite?

      If both users are using the bridge (assuming it is designed how I think it is), they would certainly be better off than if one or both of them is using the webmail e2ee. However, I would never use or recommend using protonmail, even with the bridge, because it is very likely that the people I’m writing to would often not be using the bridge. Also, because ProtonMail e2ee doesn’t interoperate with anything else, and by using it I’d be endorsing it and encouraging others to use it (“it” being ProtonMail, which for most users is this webmail snakeoil).

      Also, I don’t know in detail how the bridge actually works, and, like most of the people I know who sometimes audit things like this, I find the open source bits from Proton like their bridge aren’t interesting enough to be worth auditing for free (except perhaps by a security company, for its own marketing purposes) because, even if the bridge turns out to be soundly implemented itself, it is a component of a non-interoperable proprietary snakeoil platform.

      Yep. But no matter how tight their processes are, there are still single points of failure that can be coerced to gain access to anyone’s email.

      They are a point of failure, not necessarily a single point of failure (as in a single person).

      From your earlier comments I think you’re working from a mental model where an individual employee performing the attack would need to check something into git or something like that, but don’t you think anyone with root on, say, one of the caching frontend webservers could do this? I suggest that you try to think about how you would design their system to prevent a single person from unilaterally doing it, and then figure out how you can break your design.

      I am saying that particular vector does not apply, because your browser will actually refuse to load Proton without a valid certificate due to HSTS.

      Yes, I get that you are saying that, but it’s because you have not been hearing me say that HTTPS has been circumvented in numerous ways over the years and will continue to be. Do you think we’ve seen the last rogue certificate authority? Or the last HSM where (oops!) the key can actually be extracted?

      Don’t you think there is a reason why most modern software update mechanisms don’t rely solely on HTTPS for authenticity of their updates?

      🤔 I actually wonder why ProtonMail lists Digicert and Comodo alongside LetsEncrypt in their CAA DNS records. (Fwiw, they currently have a cert from LetsEncrypt, from my network perspective at least). Doesn’t that mean that, against a browser supporting DNSSEC and CAA records, a rogue employee at any of those 3 companies can issue a cert that would allow this attack to be performed? (Of course, against a browser that isn’t validating CAA with DNSSEC, anybody at any one of thousands of sub-CAs can also do it…)
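
      (For anyone who wants to check for themselves, CAA records are plain DNS data. A small Python sketch using the third-party dnspython package; the domain is just an example, and the line in the comment only shows the general shape of the output:)

          # List which certificate authorities a domain authorizes via CAA records.
          # Needs dnspython (pip install dnspython).
          import dns.resolver

          def caa_records(domain: str) -> list[str]:
              try:
                  answer = dns.resolver.resolve(domain, "CAA")
              except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                  return []
              return [record.to_text() for record in answer]

          if __name__ == "__main__":
              for line in caa_records("protonmail.com"):
                  print(line)  # e.g. 0 issue "letsencrypt.org"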

      at-risk people have technical consultants and are (hopefully?) aware of the risks, and can apply additional controls

      As someone who has been one of those technical consultants, let me tell you, arguing with at-risk people about the veracity of posts on privacy forums singing the praises of things like protonmail is part of the job. 😭

      If the NSA goes to https://www.gnupg.org and says “you know what, the next time you serve your software to IP x.x.x.x, you serve this package”, you will never know and your encryption is toast. Would you say that the folks behind GnuPG “have the ability to read your emails”? I wouldn’t, because they are not backdooring the software, although the possibility for them, contributors and national actors to do that exists.

      This is a false equivalence in several ways:

      • Targeting an IP address is much less useful than targeting a user by their username and password
      • Careful users have the ability to (and many do) verify hashes and signatures of a downloaded program before they run it, unlike javascript on a web page (a minimal sketch of the hash check follows this list)
      • Users retain a copy of the program after downloading it and so often have evidence if an attack took place
      • Many users obtain their GPG binaries from some distribution rather than the GnuPG website (read on about that…)
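
      Regarding the hash-verification point above, the client-side check is genuinely simple. A minimal Python sketch; the file path and expected digest are placeholders you would take from the project’s published checksums:

          # Verify a downloaded artifact against a published SHA-256 digest before running it.
          # Usage: python3 check_sha256.py <downloaded-file> <expected-hex-digest>
          import hashlib
          import sys

          def sha256_of(path: str) -> str:
              digest = hashlib.sha256()
              with open(path, "rb") as f:
                  for chunk in iter(lambda: f.read(1 << 20), b""):
                      digest.update(chunk)
              return digest.hexdigest()

          if __name__ == "__main__":
              path, expected = sys.argv[1], sys.argv[2].lower()
              actual = sha256_of(path)
              print("OK" if actual == expected else f"MISMATCH: got {actual}")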

      Again, these software distribution channels (e.g., Linux distros) have many of their own problems, but they are in a different league than javascript in a browser. Ways they’re better include:

      • These days, in many/most cases, at least two keys/people are required to compromise them. This isn’t nearly enough but it is better than one.
      • Other than by IP, users aren’t identifying themselves before downloading things
      • Users can access them from many different mirrors; there isn’t a single server from which to target all users of a given distribution

      rather you are just saying that you think it is very unlikely that they would ever abuse that capability and that you assume their procedures make it so that one rogue employee couldn’t do it alone. You do seem to understand that, contrary to what they’ve written in the screenshot above, ProtonMail as a company technically could decide to.

      I do believe that they have no interest whatsoever in abusing this architectural feature, but I agree that they could be coerced to.

      But, do you think most of their customers understand that?

      No, I think most people don’t.

      Isn’t that because their web page says something to the contrary?

      I have no idea. I would say 1 or 3 are the most likely.

      Really? Scenario 1 is possible? You think a privacy-touting email service with 100M users might have never had a request to circumvent their encryption, despite being able to?

      It seems a very unnecessary way (if I were a certain 3 letter agency) to gain access to a small set of data, when I could compromise the whole device and maintain persistence much more conveniently (for example, coercing the ISP to give me access to the router and going from there, or asking Apple and Microsoft directly, etc.).

      Again, I’m not just talking about 3 letter agencies, but anyone who wants to read someone’s mail. And often there is a point where the email address is all that is known about the target.

      Do you think that it’s possible that any of the 3-letter agencies could coerce a software author (or some collaborator) and produce a malicious release for the code that is served only to you (for example, by IP, fingerprint or other identifier)

      I use some mitigations I won’t go into, but, yeah, on the system I’m typing this on I do sadly use a distribution which relies on a single archive signing key, so, if you compromise that key (or the people with access to it), and obtain a valid HTTPS certificate for the particular mirror I use, and you know the IP address I’m using at the moment I’m doing an OS update, you can serve me a targeted (by IP) malicious software update. 😢

      that activates only for you (device ID, etc.)? For example, go to Kevin McCarthy and force him to produce a version of Mutt (http://www.mutt.org/download.html) which is backdoored to leak your keys.

      I think the vast majority of Mutt users don’t get their Mutt binaries from Kevin McCarthy, and having him put a targeted backdoor in the source code would be foolish as it would be likely to be noticed by one of the mutt distributors who builds it before it gets distributed. Since reproducible builds still aren’t ubiquitous, the best place to insert a widely-distributed-but-targeted-in-code backdoor would be at the victim’s distributor’s buildserver.

      Do you think that, alternatively, GitHub/Bitbucket for example could be coerced by said agencies to backdoor the version (and signature) you get for a given piece of code, say https://bitbucket.org/mutt/mutt/downloads/mutt-2.2.12.tar.gz (maybe after graciously “asking” Kevin for his key to sign the software)?

      Yes, but unlike the ProtonMail case there is a chance of being caught so it is a much higher risk for the attacker.

      If you think the above is possible, do you think there is any software distributor that could not be coerced? And how is this vector actually different from Proton being forced to break their own encryption?

      There are a wide variety of software distribution paradigms, on a spectrum of difficulty to attack. At one end of the spectrum you have things like Bitcoin Core, where binaries are deterministically built and signed by multiple people, and many users actually verify the signatures to confirm that multiple builders (with strong reputations) have independently built an identical binary artifact. At the other end of the spectrum you have things like ProtonMail with zero auditability, users identifying themselves and re-downloading the software at each use, and numerous single points of failure that can be exploited to attack a specific user. Things like mainstream free software operating system distributions, macOS, Windows Update, etc sit somewhere in the middle of that spectrum.
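
      To make the “multiple independent builders” end of that spectrum concrete, here is a toy sketch of the check a verifier performs. The file layout is hypothetical and stands in for real signed attestations (whose signatures would of course also need to be verified):

          # Toy check: trust a binary only if several independently published digests
          # agree with each other and with the locally downloaded artifact.
          import hashlib

          def sha256_of(path: str) -> str:
              with open(path, "rb") as f:
                  return hashlib.sha256(f.read()).hexdigest()

          def read_digest(path: str) -> str:
              with open(path, encoding="utf-8") as f:
                  return f.read().strip().lower()

          def independently_reproduced(binary: str, attestations: list[str], quorum: int) -> bool:
              local = sha256_of(binary)
              agreeing = sum(1 for a in attestations if read_digest(a) == local)
              return agreeing >= quorum

          # Hypothetical example: require three builders to have produced identical bits.
          # independently_reproduced("app.tar.gz",
          #                          ["builder-a.sha256", "builder-b.sha256", "builder-c.sha256"], 3)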

      If you agree that the above is possible, would you say that any claim about Mutt using PGP to e2e encrypt/decrypt your emails is snakeoil?

      No. See previous answers for the massive differences.

      • sudneo@lemmy.world · 9 months ago

        I will skip some parts because I think they’re not worth repeating.

        I think the vast majority of Mutt users don’t get their Mutt binaries from Kevin McCarthy, and having him put a targeted backdoor in the source code would be foolish as it would be likely to be noticed by one of the mutt distributors who builds it before it gets distributed. Since reproducible builds still aren’t ubiquitous, the best place to insert a widely-distributed-but-targeted-in-code backdoor would be at the victim’s distributor’s buildserver.

        This was clearly just an example. Any distributor is a single point of failure: coerce or compromise it, and it will serve compromised software.

        Yes, but unlike the ProtonMail case there is a chance of being caught so it is a much higher risk for the attacker.

        No, there isn’t. There is nothing that prevents GitHub from serving you a different file than what regular users get when you query the same URL (for example, by IP). It’s trivial to do this with any reverse proxy. And the same applies to a signature file, which means you can only notice if you manage to get the file and the signature from someone else and compare the signatures/hashes for the same release. Which is basically the same as saying “I will compare my de-minified JS blob with the one in the OSS repo”; nobody does this either, I agree. This can totally happen every time you download something from any website, technically, provided that the server is coerced or compromised.
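
        To show how trivial that targeting is, here is a toy Python server that hands one specific source IP a different artifact for the same URL; any real reverse proxy can express the same rule in a line or two of config. The IP and file names are made up for illustration:

            # Toy server: the same URL returns different bytes depending on who asks.
            from http.server import BaseHTTPRequestHandler, HTTPServer

            TARGET_IP = "203.0.113.7"  # the one client that gets the tampered file

            class SelectiveHandler(BaseHTTPRequestHandler):
                def do_GET(self):
                    name = ("release-backdoored.tar.gz"
                            if self.client_address[0] == TARGET_IP
                            else "release-clean.tar.gz")
                    with open(name, "rb") as f:
                        body = f.read()
                    self.send_response(200)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

            if __name__ == "__main__":
                HTTPServer(("0.0.0.0", 8080), SelectiveHandler).serve_forever()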

        on a spectrum of difficulty to attack

        Not really. The spectrum is much narrower than how you present it. I bet 99% of users install software in one of these ways:

        • Package manager (linux/Mac).
        • Download an installer or the code from the software website (Windows, AppImage, etc.).
        • Install through a platform (say, Steam)

        Almost all the package managers AFAIK work under the same model (package, signed with the distributor’s key, served via web), which is susceptible to coercion and compromise. All the webservers and platforms can be coerced/compromised to serve different files (installers) to different clients.

        Am I missing something? Is there another way to serve software that I am missing?

        numerous single points of failure that can be exploited to attack a specific user

        There is one, so far: the provider being compromised. The rest is your speculation, such as:

        but it’s because you have not been hearing me say that HTTPS has been circumvented in numerous ways over the years and will continue to be

        Which is like saying there are vulnerabilities. Yes, there will be vulnerabilities, but this applies to any software too. And if HTTPS is broken to allow MITM, then this is a risk for any software you download via the web, starting from the Linux ISO, so it’s far from a webmail-specific problem.

        No. See previous answers for the massive differences.

        You list:

        • These days, in many/most cases, at least two keys/people are required to compromise them. This isn’t nearly enough but it is better than one.

        Nothing, absolutely nothing, tells you that it’s enough to compromise one Proton employee to gain access to production and replace the code. Also, you have absolutely no idea of the security practices of the couple of people who handle those keys; they are not accountable in any way, they don’t need to be compliant with any standard (for what it’s worth), etc. I would say it’s much more likely for any of the mirrors/repositories to get compromised compared to Proton.

        In fact, you say:

        From your earlier comments I think you’re working from a mental model where an individual employee performing the attack would need to check something into git or something like that, but don’t you think anyone with root on, say, one of the caching frontend webservers could do this? I suggest that you try to think about how you would design their system to prevent a single person from unilaterally doing it, and then figure out how you can break your design.

        I do this for a living. One way to do it is to close off production environments and assign temporary permissions that require multiple people to sign in at the same time and spectate when production is accessed. Teleport allows this, for example; it’s nothing I am conjuring out of thin air. Similarly, the CI can implement a million checks to verify the provenance of the software and require multiple sign-offs before things are actually deployed. Break-glass procedures exist (usually for a handful of individuals), but they generate alerts and are audited after the fact, so such an attack would be detected.
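
        To sketch the logic of such a control (this is illustrative only, not Teleport’s or anyone else’s actual API): access is granted only when someone other than the requester approves, and the emergency path always raises an alert for post-hoc audit:

            # Toy model of a two-person production-access gate with an audited break-glass path.
            from dataclasses import dataclass, field

            @dataclass
            class AccessRequest:
                requester: str
                approvals: set[str] = field(default_factory=set)

                def approve(self, reviewer: str) -> None:
                    if reviewer == self.requester:
                        raise PermissionError("requesters cannot approve their own requests")
                    self.approvals.add(reviewer)

                def granted(self, required_approvals: int = 1) -> bool:
                    return len(self.approvals) >= required_approvals

            def break_glass(requester: str, reason: str) -> None:
                # Emergency access: allowed, but always alerted and audited after the fact.
                print(f"ALERT: break-glass access by {requester}: {reason}")

            # Example: production access needs at least one approver who is not the requester.
            request = AccessRequest("alice")
            request.approve("bob")
            assert request.granted(required_approvals=1)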

        • Other than by IP, users aren’t identifying themselves before downloading things

        True, but for me, as the one being attacked, this changes very little. Attackers can just establish a C2, check whether the target is the right one, and do nothing else on other devices. I grant you, this is a difference, but the control here is the fact that more people will possibly spot the issue and I will get to know about it before being compromised. It’s possible, but it’s a very weak control.

        Users can access them from many different mirrors; there isn’t a single server from which to target all users of a given distribution

        True, that’s a bigger attack surface, but each individual mirror can be compromised via the same vector (and of course the source can). Also, Proton does not have a single machine that serves everyone; it might have multiple regions, multiple clusters, separation by accounts, departments, etc. In addition, you are talking about a highly targeted attack. Relying on the obscurity of which mirror someone uses is really not something I would consider applicable here.

        The biggest difference is the automation with which JS code is “updated”. This is what makes the attack potentially slower via the regular supply chain. Nothing I would consider massive for sophisticated attackers like the ones able to exploit this vector. So the massive differences in your opinion are that:

        • Attackers are not able to target individuals with the same precision.
        • Attackers might need to know more about you to target the distributor of your software.

        On the other hand:

        • A company with a security department has a smaller chance of being compromised compared to a random individual.
        • A company like Proton at least has to adhere to some standards and security hygiene, which individuals handling package repos/mirrors don’t.

        If for you these are massive differences, OK. For me they are not.

        Finally:

        If both users are using the bridge (assuming it is designed how I think it is), they would certainly be better off than if one or both of them is using the webmail e2ee. However, I would never use or recommend using protonmail, even with the bridge, because it is very likely that the people I’m writing to would often not be using the bridge. Also, because ProtonMail e2ee doesn’t interoperate with anything else, and by using it I’d be endorsing it and encouraging others to use it (“it” being ProtonMail, which for most users is this webmail snakeoil).

        How is it relevant whether both users are using the bridge? The bridge is literally doing the same thing that - say - mutt does. This has nothing to do with the bridge; what you are saying (I think) is that you wouldn’t send an email to someone if you don’t trust the software they use, but this is independent of you using the bridge. You can add other people’s (non-Proton users’) keys to be used, so Bridge -> Mutt is exactly the same as Bridge -> Bridge or Mutt -> Mutt.

        because it is very likely that the people I’m writing to would often not be using the bridge

        In this case there is no tool that you can use that will “protect” you, if you don’t trust the other side.

        Also, because ProtonMail e2ee doesn’t interoperate with anything else, and by using it I’d be endorsing it and encouraging others to use it (“it” being ProtonMail, which for most users is this webmail snakeoil).

        Which is not a security consideration.

        The security model of the bridge is the same as the security model of mutt, or other CLI tools, or anything else you might use for PGP. It seems you have absolutely no security-related reason why this would be worse.

        So, in short:

        • Using protonmail, you address the security risk you highlighted in the same way it is addressed by using any other client tool that doesn’t run in the browser.
        • The fact that you won’t use it because of your personal crusade against webmail is irrelevant in terms of security for a non-webmail tool.