The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.

Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.

  • affiliate@lemmy.world · 3 hours ago

    i think we give silicon valley too much linguistic power. there should really be more pushback on them rebranding LLMs as AI. it’s just a bunch of marketing nonsense that we’re letting them get away with.

    (i know that LLMs are studied in the field of computer science that’s known as artificial intelligence, but i really don’t think that subtlety is properly communicated to the general public.)

    • btaf45@lemmy.world · 2 hours ago

      “there should really be more pushback on them rebranding LLMs as AI.”

      Those would be AI though wouldn’t they?

      The pushback I would like to see is the rush of companies to rebrand ordinary computer programs as “AI”.

    • Feathercrown@lemmy.world · 2 hours ago

      I actually think in this case it’s the opposite: your expectations of the term “AI” aren’t accurate to the actual research and industry usage. Now, if we want to talk about what people have been trying to pass off as “AGI”…

  • daniskarma@lemmy.dbzer0.com · 10 hours ago

    I’m tech savvy and I use AI daily.

    Probably not the AI you’re thinking of, as it’s not an LLM or image generation.

    But I have a self-hosted security system using Frigate, which uses AI models for image recognition.
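    For a flavor of that setup, here’s a minimal sketch of pulling recent detections out of Frigate over HTTP (the address, endpoint, and field names are assumptions from memory; check the Frigate API docs for your version):

    ```python
    # Query a self-hosted Frigate instance for recent "person" detections.
    # FRIGATE_URL and the /api/events endpoint/params are assumptions.
    import requests

    FRIGATE_URL = "http://frigate.local:5000"  # hypothetical address

    resp = requests.get(
        f"{FRIGATE_URL}/api/events",
        params={"camera": "front_door", "label": "person", "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()

    for event in resp.json():
        # Each event is one detection; field names assumed from memory.
        print(event.get("camera"), event.get("label"), event.get("start_time"))
    ```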

    • Jax@sh.itjust.works · 2 hours ago

      So you’re tech savvy and you use AI as it should be used: as a tool, not a magic genie that will spit out code for you.

    • Naia@lemmy.blahaj.zone · 2 hours ago

      Even using LLMs isn’t an issue; they’re just another tool. I’ve been messing around with local stuff, and while you certainly have to use them knowing their limitations, they can help with certain things, even if just parsing data or rephrasing things.

      The issue with neural nets is that while they can theoretically do “anything”, they can’t actually do everything.

      And it’s the same with a lot of tools like this: people not understanding the limitations or flaws, and corporations wanting to use it to replace workers.

      There are also the tech bros who think creative works can be generated completely by AI because, like AI, they don’t understand art or storytelling.

      But we also have others who don’t understand what AI is and how broad it is, thinking it’s only LLMs and other neural nets that are just used to produce garbage.

    • thisbenzingring@lemmy.sdf.org · 7 hours ago

      I am a system admin, and one of our appliances is an HPE Alletra. The AI in it is awesome and it never tries to interact with me. This is what I want. Just do your fucking job, AI; I don’t want you to pretend to be a person.

  • jaybone@lemmy.world · 11 hours ago

    “Surprisingly”? This should be a surprise to no one who is paying any kind of attention to any online communities where techy people post.

  • CosmoNova@lemmy.world · 11 hours ago

    How exactly is this a surprise to anyone when the same already applied to crypto and NFTs? AI and blockchain technologies are useful to experts in tiny niches so far, but that’s not the usual tech-savvy user. For the end user it’s just a toy with few use cases.

    • Feathercrown@lemmy.world · 2 hours ago

      AI is much more broadly applicable than Blockchain could ever be, although somehow it’s still being pushed more than it should be.

  • Petter1@lemm.ee · 9 hours ago

    In the current state of AI, it helps noobs get to an average level, but it doesn’t help average users become pros.

    • wondrous_strange@lemmy.world · 7 hours ago

      The real question, in my opinion, is how a pro truly benefits from it, other than as a different type of search engine.

      • Petter1@lemm.ee · 4 hours ago

        Yeah, if you are a pro at something, most of the time it only tells you what you already know. (I sometimes use it as a sort of sanity check, by writing prompts where I think I already know what the output will be.)

        • wondrous_strange@lemmy.world · 3 hours ago

          I only found it useful for trivial chores such as converting between data structures, maybe creating a test for a function, parsing, and some regex. Anything deeper than that was full of errors, or what it offered was suboptimal at best. It also fails a lot of the time at fetching the relevant docs/sources for the discussion. I gave up after the many times it basically told me “go search for yourself”.
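          For example, the kind of chore I mean, as a made-up sketch (names and formats invented):

          ```python
          # Reshape a list of records into a dict keyed by id, then pull
          # key=value pairs out of a log-style line with a regex.
          import re

          rows = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]
          by_id = {row["id"]: row for row in rows}

          line = "user=alice action=login ok=true"
          pairs = dict(re.findall(r"(\w+)=(\S+)", line))

          print(by_id[2]["name"])  # beta
          print(pairs["action"])   # login
          ```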

          • Petter1@lemm.ee · 2 hours ago

            I often use it as my Python slave because I am lazy.

            Like, I write quickly in bad human language what my script needs to do, and then iterate from there, giving it errors/bug reports back (and fixing some stuff myself that I am not too lazy for).

            The scripts I needed were of a complexity like API calls, serial communication, or converting PO to CSV and back (pls don’t ask 😅 it is for work and I cannot tell more).
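            The PO → CSV one was roughly this shape, as a rough sketch (assuming the polib library, pip install polib; the file names and columns are made up):

            ```python
            # Flatten a gettext PO file into a two-column CSV.
            # "messages.po" / "messages.csv" are placeholder file names.
            import csv

            import polib  # third-party: pip install polib

            po = polib.pofile("messages.po")

            with open("messages.csv", "w", newline="", encoding="utf-8") as f:
                writer = csv.writer(f)
                writer.writerow(["msgid", "msgstr"])
                for entry in po:
                    writer.writerow([entry.msgid, entry.msgstr])
            ```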

            But I guess that is because my skill is not too high; I’m sure that if I were more skilled, I might be faster just writing the code directly 💁🏻

            But for code that needs to be built (like C), I mostly use it to explain to me what existing code does, if I am not 100% sure after a short read. I have tried some generated code there as well, but then I get nothing but build errors 😆 At least it can, most of the time, tell me what the build error is trying to say.

            Ah, and currently I use my free ChatGPT to teach me how to make music using only open-source tools 😄

            • wondrous_strange@lemmy.world · 2 hours ago

              I very much agree with your conclusions and general approach.

              LLMs are great at certain programming-related tasks and do them very well. I, too, often find myself needing scripts where, as long as they do what they are supposed to, I really don’t care how.

              Another thing I’ve noticed (which is probably related to the amount of training data) is that it helps better with simple Python tasks than with simple Rust tasks.

              But you mentioned one of my main issues with it. I’ve been programming for 15 years or so, and I am still learning. All the available LLMs have made crucial errors on fundamental and complex topics, getting the answer very wrong while sounding very convincing. Couple that with the lack of proper linking to the sources of the response, and you can see why having it explain code might cause you to learn wrongly. Although it is also possible to say this about random internet tutorials. I always try to remind myself that it is a tool that produces output that always needs to be verified.

              • Petter1@lemm.ee · 1 hour ago

                I often start a new chat with a prompt that includes assumptions based on the output of the previous chat. Most of the time it then does a good job fact-checking itself, and, for example, flags many things that don’t match what it said in previous chats. Then you know it doesn’t have enough training data in that regard and failed to get relevant info from its web search.
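                A rough sketch of what that pattern looks like scripted, using the openai Python package (the model name and the example claims are placeholders):

                ```python
                # Re-ask each claim from a previous chat in a brand-new
                # conversation, so no earlier context can bias the check.
                from openai import OpenAI

                client = OpenAI()  # assumes OPENAI_API_KEY is set

                # Claims pulled (by hand) from an earlier chat's answer.
                claims = [
                    "Placeholder claim one from the previous chat.",
                    "Placeholder claim two from the previous chat.",
                ]

                for claim in claims:
                    content = (
                        "Fact-check this claim and answer true, false, or "
                        f"uncertain, with a short reason: {claim}"
                    )
                    resp = client.chat.completions.create(
                        model="gpt-4o-mini",  # placeholder model name
                        messages=[{"role": "user", "content": content}],
                    )
                    print(claim, "->", resp.choices[0].message.content)
                ```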

                More than once the above happened to me on Copilot (from enterprise MS365), and then ChatGPT’s limited free prompts saved me 😂

  • MonkderVierte · 13 hours ago

    What form of AI are we talking about? Because most of the AI exposed to the public consists of glorified toys with shady business models, while tools like AlphaFold are pretty useful.