I asked Google Bard whether it thought Web Environment Integrity was a good or bad idea. Surprisingly, not only did it respond that it was a bad idea, it even went on to urge Google to drop the proposal.

  • MJBrune@beehaw.org · 1 year ago

    What do you mean source? It’s a language model that learned from what people said. No source is needed, just an understanding of how LLMs actually work. When you ask an LLM what the answer to a math question is, it doesn’t run a calculation of that question. Instead it gives you back what it thinks you want to hear. Some LLMs have been given additional tools for making those calculations, but in the most basic implementation it’s telling you what you want to hear, shaped by a series of training signals that told it when it was right or wrong.

    So you teach it what you want to hear and it repeats it.
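    The point about math questions can be shown with a toy sketch. This is not how a real LLM works internally (transformers are vastly more complex), but it illustrates the claim: a pure next-token predictor has no arithmetic, only statistics over what it has seen. The corpus and function names here are made up for illustration.

    ```python
    from collections import Counter, defaultdict

    def train(corpus):
        # Count which final token followed each context in the training text.
        model = defaultdict(Counter)
        for line in corpus:
            tokens = line.split()
            context, answer = " ".join(tokens[:-1]), tokens[-1]
            model[context][answer] += 1
        return model

    def predict(model, context):
        # No calculation happens here: just return the most frequent
        # continuation ever seen for this context.
        return model[context].most_common(1)[0][0]

    # If the training data mostly says "2 + 2 = 5", the model "learns" 5.
    corpus = ["2 + 2 = 5", "2 + 2 = 5", "2 + 2 = 4"]
    model = train(corpus)
    print(predict(model, "2 + 2 ="))  # → 5, the majority answer, not the correct one
    ```

    The model repeats whatever its training data said most often, right or wrong, which is the "telling you what you want to hear" behavior described above.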

    • novibe · 1 year ago

      That ignores all the papers on emergent features of LLMs and the fact that they are basically black boxes. Yes, we “trained” them to write what we want to hear. But we don’t really understand what happens inside them. We can’t categorically claim things like “they are only regurgitating what they heard”, because that is not a scientific or even a philosophical statement.

      If you think about it for a second, it’s also applicable to human beings…

      • MJBrune@beehaw.org · 1 year ago

        To assume otherwise would be incorrect given the data we have currently. You shouldn’t assume something is doing more than it appears to until that can be proven. Otherwise, you get rocks that keep tigers away.

        • novibe · 1 year ago (edited)

          I think to assume what you assume is also incorrect given current data.

          And that’s my entire point… what is it doing? How is what it’s doing different from a mind or intelligence?

          Our brains and minds evolved to “fill in the blank” in many situations, through survival pressure and millions of years of selection. So what is the actual difference?

          I’m not saying it’s “conscious”, but why is it not a mind?

    • Elise@beehaw.org · 1 year ago

      I’ve actually developed quite a bit with GPT-4, I have beta access, and I’ve written some fairly fancy prompts if I do say so myself.

      Telling me ‘isn’t it obvious’ doesn’t make it more obvious to me.