TL;DR: LLMs are just mimicking natural language and conversation. Fact-checking and healthy skepticism are not part of their model. For example, they can easily be tricked into advocating conspiracy theories, such as a faked moon landing. Google Bard has even asserted arithmetic falsehoods, such as claiming that 5*6 != 30.
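For the record, the arithmetic itself is trivial to verify outside the model; a minimal Python sanity check (the assertion message is just illustrative, not an actual Bard transcript):

```python
# 5 * 6 is, of course, 30; any model output claiming otherwise is the falsehood.
assert 5 * 6 == 30, "the model's claim, not the math, is wrong"
print(5 * 6)  # prints 30
```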

  • smegforbrainsOP · 1 year ago

    > not sure what you’re saying here. are you claiming it can’t do any sort of reasoning or open-ended problem solving?

    It’s right there in the title mate.