• droans@lemmy.world
    11 months ago

    Everyone who’s actually worked a real job knows it’s better for someone to not do a job at all than to do it 75% right.

    Because once you know the LLM gets basic information wrong, you can’t trust that anything it produces is correct. You have to spend extra time fact-checking it.

    LLMs like Bard and ChatGPT/GPT-3/3.5/4 are great at parsing questions and producing answers that sound plausible, but they are awful at giving correct answers.