• JoeKrogan@lemmy.world

    No, you shouldn’t. Google has enough data already. If it’s not self-hosted, it can’t be trusted.

    • Carighan Maconar@lemmy.world

      The idea that you should rely exclusively on self-hosted approaches is just as absurd as the idea that you should blindly trust everyone.

      Plus, if they have, as you say, “enough” data already, then surely giving them more doesn’t actually hurt you in any way, shape or form?

      • notenoughbutter

        yeah, self-hosting may be a bit much for everyone, but they should at least make the training dataset open, as AI is biased toward whatever data it is trained on

        eg. like how some smart taps won’t work for black people because the company just didn’t train the sensors to work with dark skin

        imo, Nextcloud took the best approach here, allowing users to utilize ChatGPT 4 if needed, while still making a totally in-house FLOSS option available

      • umbrella

        why not?

        what is so absurd about code running on a user’s own device?

        • Carighan Maconar@lemmy.world

          Because it’s just unnecessary. Due to their nature, you want a few services reachable from anywhere anyway. There’s no reason for the average consumer to acquire hardware for this purpose. Just rent the service or the hardware elsewhere, which also reduces upfront cost and is ideal when you can’t know whether you’ll stick with the service.

          Again, it’s either extreme that’s absurd. You don’t need your own video streaming platform, for example. In rare cases, sure. For the vast majority of people, though, Netflix is a much better service.

          • umbrella

            hard disagree on that one, the opposite is true. we end up with companies centralizing it in huge datacenters and not even being able to profit from it (services like youtube are unprofitable). the best solution would be a federated service. I digress though, because video platforms are a completely different beast.

            something as personal as AI assistants should utilize the processing power i already have available; it’s wasteful not to.

            also it’s a BAD idea to hand over data for something so personal to google yet again. let’s not keep repeating that mistake if we can avoid it.

    • soulfirethewolf@lemdro.id

      I would love to self-host something like that, but I don’t have a good enough GPU for it.

      • 👁️👄👁️@lemm.ee

        Newer Pixels have hardware dedicated to AI, which could run these models locally. Apple is planning on doing local LLMs too. There’s been a lot of development on “small LLMs”, which have a ton of benefits: they’re easier to study, they run on lower-spec hardware, and they use less power.
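
        For a sense of scale, “small” here means models you can already run on commodity hardware. A rough sketch of what that looks like, assuming the Hugging Face transformers library and an illustrative ~1B-parameter open model (the model name is just an example, not anything Google or Apple actually ship):

        ```python
        # Minimal sketch: running a "small" LLM locally with Hugging Face transformers.
        # The model ID below is illustrative; swap in whatever fits your hardware.
        from transformers import pipeline

        generator = pipeline(
            "text-generation",
            model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # ~1.1B parameters, runs on CPU
        )

        prompt = "Set a reminder to water the plants tomorrow at 8am."
        result = generator(prompt, max_new_tokens=64, do_sample=False)
        print(result[0]["generated_text"])
        ```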

        • httpjames@sh.itjust.works

          Smaller LLMs have huge performance tradeoffs, most notably in their ability to follow prompts. Bard has billions of parameters, so mobile chips wouldn’t be able to run it.

          • 👁️👄👁️@lemm.ee

            That’s right now; small LLMs have only become a focus of development very recently. And judging by how fast LLMs have been improving, I can see that changing very soon.

    • gelberhut@lemdro.id

      Yes, and that self-hosted code is written by someone else, so it cannot be trusted.

      OK, this self-hosted, self-written code is fine. But wait, the hardware wasn’t designed by you, so it cannot be trusted!

      • Serdan@lemm.ee

        Ridiculous take.

        There’s a vast difference between using a cloud service that definitely spies on you, and a self-hosted solution that you can ensure doesn’t.

        • Chozo@kbin.social

          a self-hosted solution that you can ensure doesn’t.

          Being self-hosted in no way, shape, or form ensures that it doesn’t spy on you. You’re still putting trust in a third-party to keep their promises. The average user lacks the know-how to audit code. Hell, the average user wouldn’t be able to figure out self-hosting in the first place.

          • ijeff@lemdro.idOPM

            It’s actually quite easy to see if an app is phoning home. Also easy to prevent.
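
            For what it’s worth, one common way to check is to route the device’s traffic through an interception proxy and watch which hosts it contacts. A rough sketch, assuming mitmproxy and its Python addon API (the filename and hostname are just examples):

            ```python
            # log_hosts.py: minimal mitmproxy addon that logs every host the device contacts.
            # Run with: mitmdump -s log_hosts.py  (and point the phone's Wi-Fi proxy at this machine)
            from mitmproxy import http

            def request(flow: http.HTTPFlow) -> None:
                # Print the destination host of each outgoing request, e.g. "app-measurement.com"
                print(f"outbound request to: {flow.request.pretty_host}")
            ```

            Preventing it is then a matter of a firewall rule or a DNS filter on the same network.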

            • danhakimi@kbin.social

              Lol, how do you prevent a Google app from phoning home without preventing all Google apps, including GPS, from accessing the internet at all?

          • Serdan@lemm.ee

            You don’t have to audit code to ensure it doesn’t call home.

              • danhakimi@kbin.social

                disable your internet connection.

                that’s really it. Lots of apps find lots of ways to call home, and Google, especially, is constantly calling home from Android, so unless you’re going to, like… uninstall all but one Google app to test it in a vacuum, and then add other apps one at a time, it’s not going to work. Also, that experiment won’t work, because we already know that Google Play Services handles most of these shenanigans.

                • Chozo@kbin.social

                  We’re talking about a service that intrinsically requires an internet connection, though.

        • gelberhut@lemdro.id

          Ridiculous conspiracy about it “definitely spying”, especially in the Android community.

          I remember someone tried to sue Google for reading his emails, even though anti-spam has to “read” mail to detect spam.

          Anyway, for people who are afraid of cloud spying nothing changes, and for people who are interested in Google Assistant, boosting it with Bard is a promising improvement.

          • ijeff@lemdro.idOPM

            In this case it’s less about “spying” and more about data being used for training.

            • danhakimi@kbin.social

              and advertising.

              and it’s also about the way they pretend that, because they’re processing data on device, it’s somehow safe from them. No, they’re processing data on device to do federated learning (or to otherwise use the processed data in ways you’d still prefer they just didn’t).

      • 👁️👄👁️@lemm.ee

        Did you write the driver for the keyboard you wrote that on? Silly and completely unrealistic take. The world relies on trust to operate. It’s not black and white.

        • gelberhut@lemdro.id

          That was a joke. I have a problem with neither the keyboard driver nor cloud services as such. Either can be OK to use or not; one needs to apply common sense.

  • angelsomething@lemmy.one

    Yeah, hard pass for me, dog. Aren’t they currently in trouble for manipulating search results? What makes you think Bard will be any different? Just look at Bing Chat.

    • ijeff@lemdro.idOPM

      Amazon Alexa does significantly better when it comes to recognizing these basic commands and handling smart home controls. Extremely quick and consistent. It’s useless for web search questions, though.

      • MakeItCount@lemmy.world

        And then you try it in French and suddenly 80% of commands somehow result in a kitchen recipe

        • MrQuallzin@lemmy.world

          Oh dang, yeah, it’s probably the language. Wouldn’t surprise me if other languages aren’t being prioritized by Google, which is a shame. German’s an awesome language.

          Edit ’cause I’m gonna look stupid for saying German without actually checking. I’m high, blame the grass. Whatever language it is looks awesome

  • Rentlar@lemmy.ca

    And as Rick Osterloh, Google SVP of Devices & Services, stated in a recent interview with Michael Fisher (MrMobile), “You probably use YouTube, you probably use Google search, you probably use Gmail. You already use Google; if you want the best place to use all of your Google products, it’s going to be on a Pixel.”

    Just like Apple, Google would ideally want full capture of their ecosystem.

    The AI features seem useful, but Google will likely do one of the following within 3 years of release:

    A. Kill the feature

    B. Nerf the feature to an unusable level

    C. Shove advertising into the feature at every possible opportunity

    • Virkkunen@kbin.social

      C and B are definitely happening in the next few months; A will start by the end of next year, as support for Bard dwindles and Google moves on to the next AI assistant with half the features and polish of the previous one.

    • rgb3x3@beehaw.org

      Based on how all-in they’re going with Pixel and AI, it’ll most likely be a combination of B and C.

      They’ll abandon the current version for some other incompatible version, leaving everyone using the current version SOL.

      Google can’t be trusted for long-term product and feature support.

    • JasSmith@kbin.social

      It’s partly why I haven’t bought an Android phone, ever, and have stuck with iPhones. I know Apple is going to keep supporting the phone and apps within for many years. It’s encouraging that Google will support the newest Pixel with software, but they really need to work on their hardware quality and support now. This has been a consistent sore spot since inception.

      • Cylusthevirus@kbin.social

        Not how Android works, but ok. Android is just an OS. Each specific combo of OS and phone is unique. You can modify Android to the point where it’s a completely different user experience between two implementations nominally using Android.

        Maybe you just like iOS, that’s fine, but it’s good to understand why you like it. Personally I’ve been saddled with an iPhone for work and I hate the GUI, but I can appreciate the materials and some design choices.

  • danhakimi@kbin.social

    “I’ve barely tried these AI assistants everybody seems so hyped about, but you can look up flight and hotel info, and Google has vaguely implied that it’ll be able to do more in the future, so I’m super excited!”

    I mean, couldn’t Google Assistant already look up flight and hotel info before? Doesn’t the introduction of generative AI just transform that look up from simply scanning email and regurgitating info into a weird black box AI task that’s right about 30% of the time but confident all the time?

    He pretty much admits he has no idea what generative AI chatbots are, and it really shows throughout the article.

    • ThrowawayOnLemmy@lemmy.world

      Yeah a lot of the features promised were already available at one point and taken away. I miss having the assistant show me a list of important info, like expected packages, priority emails, and appointments in one place. Google got rid of all that stuff and replaced the home feed with a buncha click bait news articles instead.

      • danhakimi@kbin.social

        I remember back when they introduced Google Now, how it wasn’t that great and slowly started to suck more and more. I have since disabled the Google app on my phone, and don’t miss it in the slightest. I have a browser to handle my searches.

    • jasondj@ttrpg.network

      This really doesn’t seem like much.

      My utopia would be a home assistant with a chatbot who is, quite literally, an “imaginary friend”. Somebody to just… talk to. No judgement, no drama. Remembers what you talked about before. Knows your emails and IMs and texts, and essentially every digital memory. Maybe not quite an actual therapist, but more like a trusted confidant.

      Of course this would have to be self-hosted or Google would have to absolutely guarantee that all of its data is sandboxed and only used to train your friend.

  • colonial@lemmy.world

    Wow, I can talk to the hallucination machine! What an innovation!

    … God, imagine if all this effort went towards fusion power or space infrastructure. What a waste.

      • danhakimi@kbin.social

        generative AI chatbots are not so much an “advance” in technology as a “popular gimmick”.

        • kromem@lemmy.world

          Mhmm. Things that computer scientists a decade ago literally considered impossible within our lifetimes are now happening, but social media is convinced it’s a ‘gimmick.’

          Laypeople have really drunk up the anti-AI Kool aid these days…

          • danhakimi@kbin.social

            I was using chatbots this convincing back in my AOL Instant Messenger days, tbh. The things they point to as being considered impossible are like, “it can generate a whole story that doesn’t make any sense!” So could the old chatbots, there just wasn’t any hype around it back then. “They can answer questions in a conversational tone!” So could Google a decade ago, but it was much more accurate back then.

            • kromem@lemmy.world

              There was no AOL chat bot that could explain why a joke it had never seen before was funny or could solve an original variation of a logic puzzle.

              The fact that you can’t tell the difference reflects more on where you fall within the Dunning-Kruger curve of NLP model assessment than it does on the capabilities of the LLMs.

              • danhakimi@kbin.social

                There was no AOL chat bot that could explain why a joke it had never seen before was funny

                Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

                could solve an original variation of a logic puzzle.

                This is very mildly interesting, if I had any reason to believe it could do so successfully with any regularity. It would be a fun party trick at a dinner party full of mathematicians.

                The fact that you can’t tell the difference reflects

                Reflects what, that I never asked it to explain a joke or solve an arbitrary logic puzzle? Why would I have done that? Those are gimmicks. Those are made-up problems, designed only to show off a product that can’t solve the problems people actually try to use it for. The tool is completely useless for most users because most users go in expecting it to be useful, it’s only “useful” for people who go in looking to invent problems and watch them get solved.

                People are using it to write blog posts. The blog posts don’t read any better than shitty bot-generated blog posts from a decade ago.

                People are using it to write bedtime stories. But we already have bedtime stories, and the LLM stories don’t make any sense—hence, why the whole idea is built around “write a story for a child too little to understand what you’re saying!” Yeah, perfect. Made-up nonsense can’t hurt them.

                This whole damn thread is full of examples. People want the Bard integration to do X, and either it can’t, or it can, but it’s something Assistant already did perfectly well, and maybe the Bard-integrated solution is just strictly less accurate.

                Natural Language Processing is not new. There are new techniques within natural language processing, and some of them are cool and good. Generative LLMs are just not in that category.

                The real-life applications of generative AI are pretty much just making bad AI art for NFTs and instagram bot accounts. Maybe in another decade, with a few more large-scale advancements, it’ll be able to write a script for a shitty but watchable anime. I’ve heard that we’ve gone about as far as we can with LLMs, but I suppose we’ll see.

                • kromem@lemmy.world

                  Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

                  This was literally part of the 2022 PaLM paper, and allegedly the thing that had Hinton quit to go ring alarm bells, and by this year we have multimodal GPT-4 writing out explanations for visual jokes.

                  Just because an ostrich sticks its head in the sand doesn’t mean the world outside the hole doesn’t exist.

                  And in case you don’t know what I mean by that, here’s GPT-4 via Bing’s explanation for the phrase immediately above:

                  This statement is a metaphor that means ignoring a problem or a reality does not make it go away. It is based on the common myth that ostriches bury their heads in the sand when they are scared or threatened, as if they can’t see the danger. However, this is not true. Ostriches only stick their heads in the ground to dig holes for their nests or to check on their eggs. They can also run very fast or kick hard to defend themselves from predators. Therefore, the statement implies that one should face the challenges or difficulties in life, rather than avoiding them or pretending they don’t exist.

                  Go ahead and ask Eliza what the sentence means and compare.

  • Jack.

    Bard is somehow worse than ChatGPT. I hate it.

    • kib48@lemm.ee

      really? they’re on the same level imo (they both suck equally)

  • The Barto@sh.itjust.works

    All I want to know is: will it argue back with me? The Assistant is already a massive smartass; if it can have a proper argument, then I’m all in, baby!

  • macallik@kbin.social

    To me, it feels like the final frontier for phones before a pivot to virtual/augmented reality becomes more tangible.

  • LiveLM@lemmy.zip

    I’ll believe it when I see it. What will likely happen is that it’ll start out good, then Google will slowly neuter it and make it dumber and dumber until it’s essentially useless, like Assistant.

    • comrade_pibb [comrade/them]@hexbear.net

      The only use case I’ve ever found for Assistant is setting an alarm when I’m too hungover to find my glasses. That “set an alarm for 845” works a treat sometimes