You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • masquenox@lemmy.world · 6 months ago

    Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

    Misinformation is literally the first line of defense for them.

    • EatATaco@lemm.ee · 6 months ago

      “put glue in your tomato sauce.”

      “Omg you are a capitalist parasite spreading misinformation intentionally!”

      When the only tool you have is a hammer, everything looks like a nail.

      • masquenox@lemmy.world · 6 months ago

        “put glue in your tomato sauce.”

        Doesn’t sound all that different from the stuff emanating from the right’s Great Orange Hope a while back, which worked pretty well to keep his base appropriately frothing at the mouth. You’re free to write it off as pure coincidence… but I won’t just yet.

        • EatATaco@lemm.ee · 6 months ago

          Can you come up with any rational explanation as to why they would do that?

            • EatATaco@lemm.ee · 6 months ago

              The part where it’s not “pure coincidence” but instead a deliberate part of some conspiracy.

              • masquenox@lemmy.world · 6 months ago

                but instead a deliberate part of some conspiracy.

                You mean… apart from the bog-standard propaganda regime the capitalist class has been enforcing on us since long before either of us was born?

                  • masquenox@lemmy.world · 6 months ago

                    I’d say it’s a continuation of the exact same thing - they just don’t know how to properly use their latest toy yet.

    • Aceticon@lemmy.world · 6 months ago

      LLMs trained on shitposting produce output that’s too obviously wrong to be quality misinformation.

      For quality disinformation they should train them solely on MBA course-work and documents produced by people with MBAs.

      Sure, the rate of false information would be even worse, but it would be formatted in slick ways meant to obfuscate meaning. That would avoid the kind of hilarity that ensued when Google deployed an LLM trained on Reddit data, and it would be much better for Google’s stock price.

    • RubberDuck@lemmy.world · 6 months ago

      But this is not misinformation, it is uncontrolled nonsense. It directly devalues their core offering: being able to provide you with an accurate answer to whatever you look for. And if their overall offering becomes less valuable, so does their ability to steer you using their results.

      So while the incorrectness is not a problem for them in itself (as you can see from his answer), the degradation of their ability to influence results is.

      • UnderpantsWeevil@lemmy.world · 6 months ago

        But this is not misinformation, it is uncontrolled nonsense.

        The strategy is to get you to keep feeding Google new prompts in order to feed you more ads.

        The AI response is just a gimmick. It gives Google something to tell their investors, when they get asked “What are you doing with AI right now? We hear that’s big.”

        But the real money is in getting unique user interactions for the purpose of serving up more ad content. In that model, bad answers are actually better than no answers, because they force the end user to keep refining the query and searching through the site backlog.

        • RubberDuck@lemmy.world · 6 months ago

          I don’t believe they will retain user interactions if the reason for those interactions disappears. The value of Google is that they provide accurate search results.

          I can understand some users just want to be spoonfed an answer. But that’s not what most people expect from a search engine.

          I want Google to use actual AI to filter out all the nonsense sites that use an LLM to pad a Reddit post into a 500-word article without any actual value. That should be Google’s proposition.

          • UnderpantsWeevil@lemmy.world · 6 months ago

            The value of Google is they provide accurate search results.

            They offer the most accurate results of the search engines you’re familiar with. But in a shrinking field with degrading quality, that’s a low bar, and it’s sinking fast.

            I want google to use actual AI to filter out all the nonsense sites

            So did the last head of Google search, until the new CEO fired him.

        • fishos@lemmy.world · 6 months ago

          If you don’t know the answer is bad (and confident idiots spouting off on Reddit and getting upvoted into infinity have proven that’s common), then you won’t refine your search. You’ll just accept the bad answer and move on.

          Your logic doesn’t follow. If someone doesn’t know the answer and is searching for it, they likely won’t be able to tell whether the answer is correct. We literally already have that problem with misinformation. And what sounds more confident than an AI?