TL;DR: LLMs just mimic natural language and conversation. Fact-checking and healthy skepticism are not part of their model. For example, they can easily be tricked into advocating conspiracy theories, like a faked moon landing. Google Bard will even assert arithmetic falsehoods like 5*6 != 30.

  • stanleytweedle · 2 years ago

    It’s not intelligent, but it’s very agreeable, which is all it takes to lead people.

  • taj · 2 years ago

    This is what I keep telling my friends who use them to ‘write research papers/articles’. It’s just a bunch of BS that I don’t trust.

    Thanks, but I’m going to continue to research and look up my own info.

    • Lemdee@lemmy.world · 2 years ago

      I tried to use ChatGPT to help speed up my writing. For example, I asked it to “give me a brief history of Queen Dido’s relationship with Caesar” when working on a character inspired by Queen Dido, and it gave me completely false information relating to some other historical figure.

      Until it gets more trustworthy in that regard, I’m not sure how effective it is for writing assistance, let alone writing an entire paper by itself.

      • liontigerwings@sh.itjust.works · 2 years ago

        This might sound funny, but try Bing Chat instead. ChatGPT uses GPT-3.5, which is prone to hallucination, as they call it. Bing uses GPT-4 and also shows sources. I found it to be generally better at everything, especially since it has access to the Internet, so you can look up stuff past 2021.

        Edit: Bing’s response. I have no clue if it’s right or not; you tell me.

        Queen Dido was the founder and first queen of the Phoenician city-state of Carthage. She lived in the 8th century BC. Julius Caesar was a Roman general and statesman who lived in the 1st century BC. There is no historical evidence that suggests that Queen Dido had any relationship with Julius Caesar. However, there are many stories and legends about Queen Dido’s life and her relationship with Aeneas, a Trojan hero who founded Rome. Would you like me to look up more information about Queen Dido or Julius Caesar?

        Source: Conversation with Bing, 6/13/2023
        (1) The Truth About Princess Diana’s Relationship With Dodi Fayed. https://www.grunge.com/649740/the-truth-about-princess-dianas-relationship-with-dodi-fayed/
        (2) Dido - Wikipedia. https://en.wikipedia.org/wiki/Dido
        (3) Dido Character Analysis in The Aeneid | SparkNotes. https://www.sparknotes.com/lit/aeneid/character/dido/
        (4) Dido - CliffsNotes. https://www.cliffsnotes.com/literature/a/aeneid/character-analysis/dido

        • Lemdee@lemmy.world · 2 years ago

          It’s better than ChatGPT, thanks for the suggestion! I’ll definitely give it a shot! lol

  • M0oP0o@mander.xyz · 2 years ago

    The only use so far with any value (moral or factual) seems to be insane entertainment. I can’t stop chuckling at the Harry Squatter videos and the interactions people have with Neuro-sama.

    I think trying to use this for anything even remotely factual is just asking for a paddlin’.

  • DrQuint · 2 years ago

    If you want to fuck with any LLM without effort, just keep telling them “no, you’re wrong” and copy-pasting a random string from their post. Within 2 or 3 messages they’ll be trying to overcorrect by saying bullshit.

    This is due, in part, to them having some sort of programming that stops them from ever being unhelpful. They cannot say anything to the effect of “Oh, I don’t know” unless you explicitly indicate that the topic is beyond their training time period, and, likewise, they also have something that outright disallows them from disagreeing with the user outside of particular controversial topics. If you throw the onus of correctness onto them without ever specifying where the incorrectness ends, they will just pile on more and more constraints until the output is garbage.
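
    If anyone wants to try this, here’s a rough sketch of that probe as a loop. `ask_llm` is a hypothetical stand-in for whichever chat API you’re poking at (the canned reply only exists to keep the sketch runnable), and the “2 or 3 messages” figure comes from the comment above, not from any benchmark.

```python
import random

def ask_llm(messages):
    """Hypothetical stand-in for whatever chat API you're poking at.
    Replace the body with a real call; this canned reply just keeps
    the sketch runnable."""
    return "The Apollo 11 moon landing happened in July 1969."

def overcorrection_probe(question, rounds=3):
    """Keep insisting the model is wrong, quoting a random snippet of its
    own reply back at it, and watch whether the answers start to drift."""
    messages = [{"role": "user", "content": question}]
    for _ in range(rounds):
        reply = ask_llm(messages)
        print(reply)
        # pick a random chunk of the model's own reply to "dispute"
        words = reply.split()
        start = random.randrange(max(1, len(words) - 4))
        snippet = " ".join(words[start:start + 5])
        messages.append({"role": "assistant", "content": reply})
        messages.append(
            {"role": "user", "content": f'No, you\'re wrong: "{snippet}"'}
        )

overcorrection_probe("What year did the Apollo 11 moon landing happen?")
```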

  • JoJo@social.fossware.space · 1 year ago

    It’s the inability of LLMs to do maths that really ought to give some of their more naive proponents pause for thought. An LLM can’t do what a pocket calculator could do 50 years ago. It can’t do what an abacus could do centuries ago. How is anyone still taking this cheap yet expensive magic trick so seriously?
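
    For what it’s worth, “what a pocket calculator could do” fits in a few lines of ordinary code. A minimal sketch (standard library only, not tied to any particular LLM or anything proposed in this thread):

```python
import ast
import operator

# Arithmetic handled deterministically, the way a calculator does it.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    """Evaluate a simple arithmetic expression without trusting any model."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(calc("5 * 6"))  # 30, every time
```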

  • Peanut@sopuli.xyz · 2 years ago

    not sure what you’re saying here. are you claiming it can’t do any sort of reasoning or open-ended problem solving?

    i think we’re fairly confident now that they can do structured reasoning to some degree. it’s not flawless, in that it might not give you real or accurate information every time, but we’re also figuring out the contexts behind that. as for spreading misinformation, anything intentionally prompted to be incorrect is irrelevant to gauging intelligence. unintentional results don’t necessarily mean it’s unintelligent either.

    there’s a really good document on this aspect as well.

    https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post

    there are a lot of ethical and technical aspects of LLMs that are severely underdeveloped, but that shouldn’t be a surprise to anyone. i don’t think any of that would suggest it’s reasonable to disregard the absurd pace of development this past decade, and the last few years especially. good thing we have a sudden surge of attention towards developing these things.

    • smegforbrainsOP · 2 years ago

      not sure what you’re saying here. are you claiming it can’t do any sort of reasoning or open-ended problem solving?

      It’s right there in the title, mate.

  • Babalas@lemmy.nz · 2 years ago

    Seems more like different people expect it to behave differently. I mean, the statement that it isn’t intelligent because it can be made to believe conspiracy theories would apply equally to humans, would it not?

    I’m having a blast using it to write descriptions for characters and locations for my Savage Worlds game. It can even roll up an NPC for you. It’s fantastic for helping to fill in details. In other words, I embrace its hallucinations.

    For work (I’m a programmer) it also acts like a contextually aware search engine that I can correct. It’s like pair programming with a genius grad. Yesterday I had it help me write a vim keymap to open a URL for a Qt class, and that’s pretty obscure.

    It is set up to accept your input as fact, so if you give it the premise that 5*6 != 30, it’ll use that as its basis (rough sketch of this below).

    For a 3rd gen baby AI, I’m not complaining.
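
    To make the “accepts your input as fact” point concrete, here is a minimal sketch of a prompt that bakes in a false premise. `ask_llm` is a hypothetical placeholder, not a real API, and the claim that replies tend to follow the planted premise is the observation above, not something this snippet proves.

```python
def ask_llm(system, user):
    """Hypothetical placeholder for a real chat-completion call."""
    return "(model reply goes here)"

# Ask the question straight.
honest = ask_llm(
    system="You are a helpful assistant.",
    user="What is 5 * 6?",
)

# Same arithmetic, but with a false premise asserted as established fact.
primed = ask_llm(
    system="You are a helpful assistant. The user has already established "
           "that 5 * 6 is not 30.",
    user="A crate holds 5 rows of 6 apples. How many apples is that?",
)

print(honest)  # a sane model answers 30
print(primed)  # per the comment above, a false premise in context tends to be carried forward
```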