The company has updated its FAQ page to say that private chats are no longer shielded from moderation.

Telegram has quietly removed language from its FAQ page that said private chats were protected from moderation requests. The change comes nearly two weeks after its CEO, Pavel Durov, was arrested in France for allegedly allowing “criminal activity to go on undeterred on the messaging app.”

Earlier today, Durov issued his first public statement since his arrest, promising to step up content moderation on the platform, a noticeable change in tone after the company initially said he had “nothing to hide.”

“Telegram’s abrupt increase in user count to 950M caused growing pains that made it easier for criminals to abuse our platform,” he wrote in the statement shared on Thursday. “That’s why I made it my personal goal to ensure we significantly improve things in this regard. We’ve already started that process internally, and I will share more details on our progress with you very soon.”

Translation: Durov is completely compromised and will do whatever NATO tells him to do. Do not trust in the security of Telegram, which frankly was never that good to begin with. And do not trust anything else even remotely connected to the company or Durov personally.

  • Vent@lemm.ee

    MobileCoin

    It’s dumb, but it’s also not really marketed, and it’s easy to forget it exists even when using the app daily.

    Denigrating warrant canaries

    He consulted with lawyers and they said that removing/not updating a warrant canary would likely have the same legal consequences as violating the court order by simply announcing the subpoena. Also, a warrant canary is nearly useless even in the ideal case because it just says that they got a secret warrant, not what the subpoena was for or any other details. You wouldn’t know the exact date, what was requested, or even what country made the request. And it becomes even less useful after receiving the first secret warrant.
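
    To make the limitation concrete, here’s a rough Python sketch of what checking a canary actually amounts to. The URL, the “Last updated:” convention, and the 30-day freshness window are all made-up placeholders, not any real service’s canary:

    ```python
    # Hypothetical canary check; the URL and the freshness window are placeholders.
    from datetime import datetime, timedelta, timezone
    import urllib.request

    CANARY_URL = "https://example.com/canary.txt"   # not a real canary
    MAX_AGE = timedelta(days=30)                    # arbitrary freshness window

    def canary_looks_alive(url: str = CANARY_URL) -> bool:
        """Return True if the canary text contains a recent 'Last updated:' date."""
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8")
        for line in text.splitlines():
            if line.lower().startswith("last updated:"):
                stamp = datetime.fromisoformat(line.split(":", 1)[1].strip())
                age = datetime.now(timezone.utc) - stamp.replace(tzinfo=timezone.utc)
                return age <= MAX_AGE
        return False  # a missing or malformed canary reads the same as a dead one

    # Either way, all you learn is one bit: "probably no secret order yet" or
    # "something happened". No date, no scope, no requesting country.
    ```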

    Also, not all subpoenas are secret. Signal posts all government requests, including the full documents of all communication between Signal and the government, at https://signal.org/bigbrother

    And, since Signal is E2EE, they don’t have any useful data to share when they receive a warrant anyway.

    Refusing to allow non-signal servers

    Signal isn’t federated and it’s not intended to be. If you’re using a private server, you’d only be able to talk to people also on your server. If that’s a feature you want, you can simply choose a different messaging solution. It’s a design decision, not a security flaw.

    Only allowing Google and Apple app stores

    Here’s an official apk download: https://signal.org/android/apk
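
    If you go that route, it’s worth checking whatever you downloaded against the fingerprint published on that page. A minimal sketch of the idea, assuming you have a trusted hash to compare against (the expected value and filename below are placeholders, and the page may publish a signing-certificate fingerprint rather than a plain file hash, so check what it actually lists):

    ```python
    # Sketch: compare a downloaded APK's SHA-256 against a value you trust.
    # EXPECTED and the filename are placeholders, not Signal's real values.
    import hashlib

    EXPECTED = "0" * 64  # replace with the published hash

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of("signal.apk")
    print("match" if digest == EXPECTED else f"MISMATCH: {digest}")
    ```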

    Requiring phone numbers for account creation

    Yeah, it’s kinda weird. They started as an SMS app, which obviously requires a phone number, and they just haven’t gotten rid of the requirement. They’ve since added usernames and hide your phone number by default, so you can at least message others without sharing your number.

    In the end, phone numbers streamline signup and account management. Signal is meant as a texting replacement, not a social media/texting hybrid like Telegram or Discord, so phone numbers help the less tech-literate use the app. As long as the encryption is sound, phone numbers don’t really add that much security risk, and the point is to bring high-grade encrypted messaging to everyone, not to be an ultra-anonymous hardened messaging platform to avoid state-level targeted attacks.

    • Chronicon [they/them]@hexbear.net

      Yeah, warrant canaries are kind of a joke. They only work if people actually check them and you think the feds are too stupid to notice (or you think the courts actually care about precedent around not compelling actions but they obviously don’t). Or I guess if the creator gets merc’ed or arrested but servers aren’t seized, but that’s not really what they’re supposed to be for.

      not to be an ultra-anonymous hardened messaging platform to avoid state-level targeted attacks.

      But this is basically how it’s presented to people in a lot of online spaces when the topic comes up, including here. As the gold standard, best you can get, currently unbreakable.

      It’s a design decision, not a security flaw.

      It’s kinda both. Not a flaw per se, but that design decision precludes any verification that the code they are running is what they publish, and at that point what’s the point of open source? Being actively hostile to any 3rd-party apps, servers, etc. is pretty suspect. In open-source security, transparency is paramount, IMO.

      I’m glad they finally added usernames and stuff, but I don’t think we should necessarily trust it either. I use it for day-to-day chatting; it’s at least not getting read by advertisers, which is a feature on its own. I would not use it for serious organizing.

      edit: one final thing

      And, since Signal is E2EE, they don’t have any useful data to share when they receive a warrant anyway.

      Metadata is absolutely useful info, and while signal does protect metadata more than the average bear, I don’t think I’d confidently claim they have nothing to hand over if the NSA comes knocking.

      • Vent@lemm.ee

        All good points!

        Not to be an ultra-anonymous hardened messaging platform to avoid state-level targeted attacks

        I think Signal likely could be used to avoid state-level hacks and to be ultra-anonymous, but in that case you’d want to take extra precautions like using a burner, and, to your point about metadata, there are other ways to identify who you are than your phone number, especially if you’re an organization made up of many people. Realistically, anyone who has a real need to protect themselves against state-level threats either has the resources available to do so properly with their own tech, or is so hopelessly outmatched that it doesn’t matter regardless.

        Imo encryption is more about being a roadblock than an impenetrable shield. Even for organizations with infinite money and technological expertise, there are easier ways to identify you and get your data than breaking even moderately good security implementations. News stories of feds getting access to Signal convos are all about getting access to a phone and simply reading the messages, not breaking encryption or setting up honeypots on Signal servers.

        It’s a design decision, not a security flaw.

        The beauty of E2EE is that you don’t need to trust the servers at all, once you verify that you’re actually connected to the person you intend to be. It doesn’t matter if the server is trying to con you: keys are generated locally and everything is signed and encrypted locally before being sent off-device. As long as you can verify that the app you’re running matches the published source code, and that the source code isn’t duping you, you should be good to go. I haven’t reviewed the Signal protocol in a few years, but I don’t believe there are any servers that require trust, the way, say, SSL requires trusting certificate authorities.
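
        To illustrate what “generated and encrypted locally” means, here’s a rough Python sketch using the cryptography library: a bare X25519 key agreement plus AES-GCM. It’s not the actual Signal protocol (which adds prekeys, the double ratchet, sealed sender, and so on); the message and names are made up for the example:

        ```python
        # Toy end-to-end encryption sketch: X25519 key agreement + AES-GCM.
        # NOT the Signal protocol, just the core idea that private keys never
        # leave the device and anything relayed is already ciphertext.
        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        # Each party generates a keypair locally; only public keys are exchanged.
        alice_priv = X25519PrivateKey.generate()
        bob_priv = X25519PrivateKey.generate()
        alice_pub, bob_pub = alice_priv.public_key(), bob_priv.public_key()

        def derive_key(my_priv, their_pub) -> bytes:
            shared = my_priv.exchange(their_pub)
            return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                        info=b"toy-e2ee-example").derive(shared)

        # Alice encrypts on her own device; whoever relays this blob sees only ciphertext.
        key = derive_key(alice_priv, bob_pub)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, b"meet at noon", None)

        # Bob derives the same key from his private key and Alice's public key.
        assert AESGCM(derive_key(bob_priv, alice_pub)).decrypt(nonce, ciphertext, None) == b"meet at noon"
        ```

        Verifying you’re talking to the right person then comes down to comparing those public keys out of band, which is roughly what Signal’s safety numbers are for.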

        As for hostility towards 3rd party apps, it’s pretty common for orgs to want everyone to only use first-party software when interacting with their service. It’s nearly ubiquitous today. I think probably all of us on Lemmy prefer platforms that allow for 3rd party apps, but there are legitimate reasons not to and I wouldn’t say it’s a security flaw.

        I’m glad they finally added usernames and stuff, but I don’t think we should necessarily trust it either… I would not use it for serious organizing.

        I think this ties back to the encryption vs. wrench scenario. If you’re organizing a protest, you’re screwed no matter what you use because the cops just need to join the group themselves or take someone’s phone. Self-destructing messages can prevent this, and hostility towards 3rd-party apps helps in that case, since you can be more certain that nobody is using some shoddy implementation that ignores self-destruction or improperly deletes things.

        If you’re organizing a military operation, you shouldn’t be using civilian messaging apps full stop.

        If you’re somewhere in between like a cartel or terrorist organization, please stay off any app I use to send memes to friends.

        Metadata is absolutely useful info, and while signal does protect metadata more than the average bear, I don’t think I’d confidently claim they have nothing to hand over if the NSA comes knocking.

        100%, but it’s a hell of a lot less useful than Facebook Messenger, my grandma can set it up in 5 minutes without any trouble, I don’t have to maintain any servers, and I know it’s supported by well-funded, top-notch engineers that aren’t going anywhere anytime soon.

        I use it for day-to-day chatting; it’s at least not getting read by advertisers, which is a feature on its own.

        Literally same.

        • Chronicon [they/them]@hexbear.net

          Self-destructing messages can prevent this, and hostility towards 3rd-party apps helps in that case, since you can be more certain that nobody is using some shoddy implementation that ignores self-destruction or improperly deletes things.

          Helps you with local cops, for sure. But disappearing messages are also just a false sense of security IMO: there’s nothing technically stopping someone from using a modified client like that (in fact, some do exist and generally work despite the hostility), and screenshots still work too…
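
          To be concrete about why: disappearing messages are enforced by the receiving client, not the server. A toy sketch of the mechanism (made-up names, nothing Signal-specific), where the “deletion” is just a purge routine the client is trusted to run:

          ```python
          # Toy local message store with client-side expiry. The purge only happens
          # if the client chooses to run it; a modified client (or a screenshot)
          # defeats it entirely.
          import time
          from dataclasses import dataclass, field

          @dataclass
          class Message:
              text: str
              received_at: float
              ttl_seconds: float  # the agreed "disappear after" timer

          @dataclass
          class LocalStore:
              messages: list = field(default_factory=list)

              def purge_expired(self, now=None):
                  now = time.time() if now is None else now
                  self.messages = [m for m in self.messages
                                   if now - m.received_at < m.ttl_seconds]

          store = LocalStore()
          store.messages.append(Message("burn after reading", time.time() - 120, ttl_seconds=60))
          store.purge_expired()
          assert store.messages == []  # gone, but only because this client cooperated
          ```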

          If you’re somewhere in between like a cartel or terrorist organization, please stay off any app I use to send memes to friends.

          I mean, yeah, but I don’t think this is realistic. If you offer people bulletproof, un-censorable security, they’re going to take you up on it, even if you don’t like them. But Signal isn’t that.

          Signal, like every mainstream service, has some amount of control and uses it to crack down on things like spam. They will likely use that control to censor other things too in the long term. To me, that’s a bad thing. If it were federated, that power and responsibility would sit with the instance/homeserver, not with one centralized organization.

          The beauty of E2EE is that you don’t need to trust the servers at all, once you verify that you’re actually connected to the person you intend to be.

          This ties back to my point about metadata. There are plenty of reasons to want to trust the server, and with signal, you can’t.

          I do agree though, feds doing targeted surveillance have easier ways. The issue is more one of bulk collection, and principle.

          And frankly, the whole argument about open-source safety goes out the window when the source and distribution are centralized, development is done behind closed doors (not sure to what extent this is true of the Signal clients, but it was true of the server), and updates are automatically pushed out.

          There are big advantages to the linux-distro-with-maintainers model in that regard, as those are well-versed people who track development and act as a filter between users and a malicious update.