Hope this isn’t a repeated submission. Funny how they’re trying to deflect blame after they tried to change the EULA post-breach.

  • Zoolander@lemmy.world · 1 year ago

    They did. They had MFA available and these users chose not to enable it. Every 23andMe account is prompted to set up MFA at sign-up. If people chose not to enable it and someone then gets access to their username and password, that is not 23andMe’s fault.

    Also, how do you go about “preventing compromised credentials” if you don’t know that the credentials are compromised ahead of time? The dataset in question was never publicly shared. It was being sold privately.

    • sudneo@lemmy.world · 1 year ago

      The fact that they did not enforce 2FA for everyone (mandatory, not just available as a feature) is their responsibility. They are handling super sensitive data, and credential stuffing is an attack with very low complexity and high likelihood.

      Similarly, they probably did not enforce complexity requirements on passwords (making an educated guess here), or at least not sufficiently, which is also their fault.

      Regarding the last bit, it might not have helped against this specific breach, but we don’t know that. There are companies that offer threat intelligence services and buy breached data specifically to offer this service.

      Anyway, the point I want to make is simple: if the only defense you have against a known attack like this is a user who chooses a strong and unique password, you don’t have sufficient controls.
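
      To make “complexity requirements” concrete, here is a toy check (thresholds are made up, not 23andMe’s actual policy; modern guidance like NIST SP 800-63B actually favors length and breached-password screening over character-class rules):

      ```python
      import re

      def meets_policy(password: str, min_length: int = 12) -> bool:
          """Toy password policy check with made-up thresholds."""
          checks = [
              len(password) >= min_length,                # minimum length
              re.search(r"[a-z]", password) is not None,  # lowercase letter
              re.search(r"[A-Z]", password) is not None,  # uppercase letter
              re.search(r"[0-9]", password) is not None,  # digit
          ]
          return all(checks)
      ```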

      • Zoolander@lemmy.world · 1 year ago

        I guess we just have different ideas of responsibility. It was 23andMe’s responsibility to offer MFA, and they did. It was the users’ responsibility to choose secure passwords and enable MFA, and they didn’t. I would even play devil’s advocate and say that sharing your info with strangers was also the user’s responsibility, but that 23andMe could have forced MFA on accounts that shared data with other accounts.

        Many people hate MFA systems. It’s up to each user to determine how securely they want to protect their data. The users in question clearly didn’t care much if they reused passwords and didn’t enable MFA when prompted.

        • sudneo@lemmy.world · 1 year ago

          My idea is definitely biased by the fact that I am a security engineer by trade. I believe a company is ultimately responsible for the security of its users, even when the threat is the users’ own behavior. The company is the one that can afford a security department, one that understands the attacks its users are exposed to and can mitigate them (to a certain extent), and that’s why you enforce things.

          Very often companies use “ease” or “users don’t like it” to justify the absence of security measures such as enforced 2FA. However, that is their choice: they prioritize not pissing off a (potentially) small % of users over more security for all users, especially the less proficient ones. It is a business choice that they need to be accountable for.

          I also want to stress that various compliance standards, despite being mostly useless, require measures that protect users who use simple or reused passwords. That’s why complexity requirements are sometimes demanded, as is trivial brute-force protection with a lockout period (most gambling licenses, for example, require both, and companies that don’t enforce them cannot operate in that market). Preventing credential stuffing is no different, and if we look at OWASP’s recommendations, it’s clear that enforcing MFA is the way to go, even if it’s implemented so that it doesn’t trigger on every login, which would have worked in this case.
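
          To make the lockout idea concrete, a minimal in-memory sketch (thresholds and names are made up; a real deployment would back this with shared storage):

          ```python
          import time
          from collections import defaultdict

          MAX_FAILURES = 5           # made-up threshold
          WINDOW_SECONDS = 15 * 60   # made-up lockout window

          failed_logins = defaultdict(list)  # username -> timestamps of recent failures

          def is_locked_out(username: str) -> bool:
              """Locked while MAX_FAILURES or more failures fall within the window."""
              now = time.time()
              recent = [t for t in failed_logins[username] if now - t < WINDOW_SECONDS]
              failed_logins[username] = recent
              return len(recent) >= MAX_FAILURES

          def record_failure(username: str) -> None:
              failed_logins[username].append(time.time())
          ```

          Note that this protects a single account against brute force; against stuffing, where each account sees only one or two attempts, it does little on its own, which is exactly why MFA matters more.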

          It’s up to each user to determine how securely they want to protect their data.

          Hard disagree. The company, i.e. the data processor, is the only one with a full understanding of the data (sensitivity, amount, etc.) and a security department. That’s the entity that needs to understand what threat actors exist for its users and implement controls appropriately. Would you trust a bank that let you log in and make bank transfers with just a login/password, with no requirements whatsoever on the password and no brute-force prevention?

          • Zoolander@lemmy.world · 1 year ago

            This wasn’t a brute-force attack, though. Even if they had brute-force detection (and I’m not sure whether they do or not), it would have done nothing to help this situation, because nothing was brute-forced in a way that would have been detected. The attempts were spread out over months using bots that were local to the last good login location. That’s the primary issue here: the logins looked legitimate. It wasn’t until after the exposure that they knew they weren’t, and that was because of other signals 23andMe obviously had in place (I’m guessing usage patterns or automation detection).

            • sudneo@lemmy.world · 1 year ago

              Of course this is not a brute-force attack; credential stuffing is different from brute-forcing, and I am well aware of that. What I am saying is that a lockout period and rate limiting on logins (useful against brute-force attacks) are both security measures that are sometimes demanded of companies. But even in the case of brute-forcing, it’s the user who picks a brute-forceable password: a 100-character password with numbers, letters, symbols and capital letters is essentially impossible to brute-force. The industry nevertheless recognized that it’s the responsibility of organizations to implement protections against brute-forcing, even though users can already “protect themselves”.

              So why would it be different for credential stuffing? Of course, users can “protect themselves” by using unique passwords, but I still think it’s the company’s responsibility to implement appropriate controls against this attack, in exactly the same way it’s their responsibility to implement rate limiting on logins or a lockout after N failed attempts. Against stuffing attacks, MFA is the main control: it should simply be enforced, or at the very least required (e.g., via email, which is weak but better than nothing) when a new pattern emerges in a login (a new device, for example). 23andMe failed to implement this, and blaming users here is like blaming users for having their passwords brute-forced when no rate limiting, lockout period, complexity requirements, etc. are implemented.
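
              A sketch of the “require MFA on a new pattern” idea (the fingerprint signals and names are hypothetical, just to show the shape of the check):

              ```python
              import hashlib
              from dataclasses import dataclass, field

              @dataclass
              class User:
                  known_fingerprints: set = field(default_factory=set)

              def device_fingerprint(user_agent: str, language: str, ip_prefix: str) -> str:
                  """Coarse device fingerprint from request attributes (illustrative signals)."""
                  raw = f"{user_agent}|{language}|{ip_prefix}"
                  return hashlib.sha256(raw.encode()).hexdigest()

              def login(user: User, password_ok: bool, fingerprint: str) -> str:
                  """Step-up flow: a correct password alone is not enough on a new device."""
                  if not password_ok:
                      return "denied"
                  if fingerprint not in user.known_fingerprints:
                      return "mfa_required"  # stuffed credentials stall here
                  return "allowed"
              ```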

              • Zoolander@lemmy.world · 1 year ago

                So forced MFA is the only way to prevent what happened? That’s basically what you’re saying, right?

                Their other mechanisms (e.g., rate limits, comparing login locations) should have prevented credential stuffing, so how was this still successful?

                • sudneo@lemmy.world · 1 year ago

                  Yes, forced MFA (where “forced” means every user is required to configure it) is the most effective way. Other countermeasures can be effective depending on how they are implemented and how the attackers carry out the attack. Rate limiting, for example, depends on arbitrary thresholds that attackers can bypass by slowing down and spreading the logins over multiple IPs. Another thing you can do is prevent bots from accessing the system (captchas and similar, usually a service offered by CDNs), which can also be bypassed by farms and, in some cases, clever scripting. Login location detection is only useful if you can ask for MFA afterwards and if it is combined with solid device fingerprinting.

                  My guess about what went wrong in this case is that the attackers spread the attack out very nicely (making rate limiting ineffective) and that the mechanism for detecting suspicious logins (country, etc.) was too basic, taking into account too few, too generic signals. Again, all these measures are only effective against dumb attackers. MFA (at most paired with strong device fingerprinting) is the only effective way there is; that’s why it’s on them to enforce, not just offer, 2FA. They need to prevent the attack, not just leave the decision to users.
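
                  To illustrate: a suspicious-login detector is typically just a weighted score over signals, and bots placed near the victim’s last good location zero out the geo signals. A toy version (weights and signal names are invented):

                  ```python
                  def login_risk(signals: dict) -> int:
                      """Toy risk score; weights are invented for illustration."""
                      score = 0
                      if signals.get("new_country"): score += 30
                      if signals.get("new_device"): score += 40
                      if signals.get("datacenter_ip"): score += 20
                      if signals.get("impossible_travel"): score += 50
                      return score

                  def decide(score: int) -> str:
                      if score >= 70:
                          return "block"
                      if score >= 30:
                          return "mfa_required"
                      return "allow"

                  # A country-only detector scores a local bot at zero ("allow"),
                  # while a device signal alone is enough to demand MFA.
                  print(decide(login_risk({"new_country": False})))  # allow
                  print(decide(login_risk({"new_device": True})))    # mfa_required
                  ```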

    • lightnsfw@reddthat.com · 1 year ago

      There are services that check provided credentials against a dictionary of compromised ones and reject them. Off the top of my head, Microsoft Azure does this, and so does Nextcloud.
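
      For instance, Have I Been Pwned exposes a k-anonymity range API that a signup or login flow can query without ever sending the full password hash; a minimal sketch:

      ```python
      import hashlib
      import requests

      def password_is_breached(password: str) -> bool:
          """Check a password against HIBP's Pwned Passwords range API."""
          sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
          prefix, suffix = sha1[:5], sha1[5:]
          # only the first 5 hex characters of the hash leave the machine
          resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
          resp.raise_for_status()
          # response lines look like "SUFFIX:COUNT"
          return any(line.split(":")[0] == suffix for line in resp.text.splitlines())
      ```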

      • Zoolander@lemmy.world · 1 year ago

        This assumes that the compromised credentials were made public prior to the exfiltration. In this case, they weren’t, as the data was being sold privately on the dark web. HIBP, Azure, and Nextcloud would have done nothing to prevent this.