A judge has dismissed a complaint from the parent and guardian of a girl, now 15, who was sexually assaulted at age 12 after Snapchat recommended that she connect with convicted sex offenders.

According to the court filing, the abuse that the girl, C.O., experienced on Snapchat happened soon after she signed up for the app in 2019. Through its “Quick Add” feature, Snapchat “directed her” to connect with “a registered sex offender using the profile name JASONMORGAN5660.” After a little more than a week on the app, C.O. was bombarded with inappropriate images and subjected to sextortion and threats before the adult user pressured her to meet up, then raped her. Cops arrested the adult user the next day, resulting in his incarceration, but his Snapchat account remained active for three years despite reports of harassment, the complaint alleged.

  • makeasnek · 9 months ago

    On the other hand, Snapchat absolutely should be liable for its recommendation algorithms’ actions.

    Should they, though? The algorithm can be as simple as “show me the user with the most-liked posts”. Even the best-designed algorithm is going to suggest that users connect with sex offenders, because the algorithm has no idea who is a sex offender. Unless Snapchat has received an abuse report of some kind or actively monitors all accounts all the time, they have no way to know a user is dangerous. Even if they did monitor the accounts, they wouldn’t know a user was dangerous until that user did something dangerous, and even then it may not be obvious from their messages and photos. An online predator asking a 12-year-old to meet somewhere looks an awful lot like a family member asking the same thing, assuming there’s nothing sexually suggestive in the message.

    And requiring that level of monitoring is extremely expensive and invasive. It means only big companies with teams of lawyers can run online social media services. You can say goodbye to the fediverse in that case, along with any expectation of privacy you or anybody else can have online. And then, well, hello turnkey fascism for the next politician who gets in power and wants to stifle dissent.
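
    To make that concrete, here’s a minimal sketch (hypothetical Python; not Snapchat’s actual code, and every name in it is made up) of the kind of “Quick Add”-style ranking being described. It scores candidates by mutual connections and popularity, and nothing in its inputs encodes whether a candidate is dangerous:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class User:
        user_id: str
        friends: set[str] = field(default_factory=set)
        like_count: int = 0  # popularity signal; says nothing about who the person is

    def suggest_friends(viewer: User, candidates: list[User], k: int = 5) -> list[str]:
        """Rank candidates by mutual-friend count, then by popularity.

        Note what is absent from the inputs: age, criminal history, any
        safety signal at all. The ranking can only use what it is given.
        """
        def score(c: User) -> tuple[int, int]:
            return (len(viewer.friends & c.friends), c.like_count)

        eligible = [c for c in candidates
                    if c.user_id != viewer.user_id and c.user_id not in viewer.friends]
        eligible.sort(key=score, reverse=True)
        return [c.user_id for c in eligible[:k]]
    ```

    A predator with many mutual contacts and liked posts ranks exactly like anyone else here; the function can’t flag what isn’t in its feature set.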

    Kids being hurt is bad. We should work to build a society where it happens less often. We shouldn’t sacrifice free, private speech in exchange, or relegate speech to only the biggest, most corporate, most surveilled platforms. Because kids will still get hurt, and we’ll just be here with that many fewer liberties. Let’s not forget that the US federal government has a list of known child sex offenders in the form of Epstein’s client list, and yet none of them are in prison. I don’t believe that giving the government more control and surveillance over online speech is going to somehow solve this problem. In fact, it will make it harder to hold those rich, well-connected, child-rapist fucks accountable, because it will make dissent more dangerous to engage in.

    • nyan@lemmy.cafe · 9 months ago

      Yes, they should. They chose to deploy the algorithm rather than using a different algorithm, a human-curated suggestion set, or nothing at all. It’s like a store offering one-per-purchase free bonus items while knowing that a few of them are soaked in a contact poison that will make anyone who touches them sick.

      If your business uses a black box to serve clients, you are liable for the output of that black box, and if you can’t find a black box that doesn’t produce noxious output, then either don’t use one or put a human in the loop (a rough sketch of what that might look like follows below). Yes, that human will cost you money; that’s why my suggestion at the end was to use a single common feed, to reduce the labour. If they can’t get enough engagement from a single common feed to support the business, maybe the business should be allowed to die.
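
      As a rough sketch of the “human in the loop” option (hypothetical Python; none of these names describe Snapchat’s real systems), suggestions aimed at minors could be staged in a review queue instead of being published automatically:

      ```python
      from queue import Queue

      # Hypothetical review gate: algorithmic suggestions for minors are staged
      # for a human moderator instead of being shown immediately.
      review_queue: Queue = Queue()

      def publish_suggestion(viewer_id: str, candidate_id: str) -> None:
          # Stand-in for whatever would actually surface the suggestion in-app.
          print(f"Quick Add suggestion for {viewer_id}: {candidate_id}")

      def propose_suggestion(viewer_id: str, candidate_id: str, viewer_is_minor: bool) -> None:
          if viewer_is_minor:
              review_queue.put((viewer_id, candidate_id))  # held for human review
          else:
              publish_suggestion(viewer_id, candidate_id)

      def moderator_review(approve) -> None:
          # A human works through the queue, approving or rejecting each pair.
          while not review_queue.empty():
              viewer_id, candidate_id = review_queue.get()
              if approve(viewer_id, candidate_id):
                  publish_suggestion(viewer_id, candidate_id)
      ```

      The human is exactly where the cost lands, which is the point of the single common feed: one reviewed suggestion set shared by everyone, instead of a bespoke queue per user.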

      The only leg Snapchat has to stand on here is the fact that “C.O.” was violating their TOS by opening an account when she was under the age of 13, and may well have claimed she was over 18 when she was setting up the account.