https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed using them could create more work down the line because of the amount of fact-checking they would require. If that’s the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

  • keepcarrot [she/her]@hexbear.net
    3 months ago

    Because people call it an AI instead of a bunch of related trained predictive algorithms? If the other things were happening (labour discipline, art theft, using a gallon of water to run a bad google search) but people were using whatever term you wanted, what would actually change?

    Like, I’m not saying it’s wrong to be annoyed by these companies’ ad copy, and there are absolutely people out there who think “AI” is more human than their employees; it’s just a huge amount of time and energy wasted over a relatively minor part of the whole relationship. Even this three-reply exchange here is probably too much.

    • UlyssesT [he/him]@hexbear.net
      3 months ago

      I said it before and I’ll say it one more time: YES, it does matter in this case, because the “AI” label and the consequent bullshit artistry tied to it grant the tech companies involved more venture capital from credulous investors, persuaded by the bullshit labeling to put more money into it, which means the bad stuff you brought up happens even more as a consequence.

      Like, I’m not saying it’s wrong to be annoyed by these companies’ ad copy

      I’m not saying you can’t be annoyed at my being annoyed, but even here on this leftist shitposting site some people buy into the “AI” label meaning a lot more than it actually does, and that has a ripple effect of consequences.