• storcholus@lemmy.world · 7 months ago

    Have you read AI stories? They're shit. Current AI doesn't understand the arc that makes a story.

      • storcholus@lemmy.world · 7 months ago

        That’s what I meant. AI stories are not passable, and I think if we give them to people who don’t know how stories work (children), we are in for a bad time.

    • guyrocket@kbin.social · 7 months ago

      Or tragically wrong.

      I would not want a machine with no moral compass whatsoever telling “stories” to a toddler.

      Hi, Susie. Have you ever heard of the Texas Chainsaw Massacre? Columbine? BTK?

    • erwan · 7 months ago

      More likely running on servers.

  • TexasDrunk@lemmy.world · 7 months ago

    I have what is probably a stupid and misplaced question. The second picture in the article has the phrase “with hope in his heart”. That phrase repeatedly pops up in the hilariously bad ChatGPT stories I’ve seen people generate.

    Is there a reason that cheesy phrases that don’t get used in real life keep popping into stories like that?

    • piyuv@lemmy.world · 7 months ago

      Those phrases aren’t common anymore, but they were once very common in the corpus the LLM is trained on (mid-20th-century books).
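
      A deliberately simplified sketch of that effect: if a stock phrase dominates the training text, a frequency-based model keeps reproducing it. This toy n-gram counter is a stand-in for illustration only (the tiny corpus is made up, and real LLMs use learned token probabilities, not raw counts):

      ```python
      from collections import Counter

      # Hypothetical toy "training corpus" standing in for old books,
      # where the stock phrase appears over and over.
      corpus = (
          "with hope in his heart he set out. "
          "with hope in his heart she waited. "
          "with hope in his hand he held the letter. "
      ).split()

      # Count which word follows the 4-gram "with hope in his".
      context = ("with", "hope", "in", "his")
      counts = Counter()
      for i in range(len(corpus) - 4):
          if tuple(corpus[i : i + 4]) == context:
              counts[corpus[i + 4]] += 1

      # The most frequent continuation in the corpus wins under
      # greedy decoding, so "heart" keeps coming back.
      print(counts.most_common(1)[0][0])  # -> heart
      ```

      A real model samples from a probability distribution rather than always taking the top count, but when one continuation is far more frequent in the training data, it still shows up disproportionately often.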

        • TexasDrunk@lemmy.world · 7 months ago

        I want to preface this by saying I’m not doubting you, I just don’t know how it works.

        Ok, but wouldn’t the training be weighted against older phrases that are no longer used? Or is all training data given equal weight?

        Additionally, if the goal is to create bedtime stories or similar, couldn’t the person generating it ask for a more contemporary style? Would that affect the use of that phrase and similar cheesy lines that keep appearing?

        I would never use an LLM for creative or factual work, but I use them all the time for code scaffolding, summarization, and rubber ducking. I’m super interested and just don’t understand why they do the things they do.