• Puls3 · 2 years ago

    I mean, ethically it’s a debatable topic. If I don’t help fix someone’s car and then he crashes it, it’s not my fault; he shouldn’t have driven it while it was broken.

    Same with user-generated or AI data: it works 99.9% of the time, but that 0.1% is too dangerous to deploy in a life-endangering situation.

    • someRandomRedneck@beehaw.org · 2 years ago (edited)

      You’ve got a bit of a point there, I’ll give you that, but it’s an apples-to-oranges comparison unless you’re intentionally trying to cause them to crash by not helping them fix their car. The person I originally replied to is advocating intentionally trying to cause a crash.

      • Puls3 · 2 years ago

        I think it was more of a tongue-in-cheek reference to the incompetence of the companies and how they will use that data in practice, but I might have read too much into it. Regardless, intentionally clicking the wrong items on captchas shouldn’t cause a crash unless the companies force it to by cutting corners.

        • someRandomRedneck@beehaw.org · 2 years ago

          It doesn’t matter if it was tongue in cheek; if my dumbass took it seriously, then you know other dumbass people will take it seriously. And I guess my main issue is the vocal intent to cause harm, which is demonstrated by their mention of making sure to stay safe on the sidewalk.