• magnetosphere@kbin.social · +46 / −1 · 1 year ago

    Well, of course “demand” is shrinking. AI was the hot new thing, everybody played with it, and its flaws and limitations were quickly discovered. People learned that its uses are much more limited than the hype suggests.

    Plus, if you were expecting science-fiction level AI (as in, a computer that could actually think and reason like a person) you were in for some major disappointment.

    • Anticorp · +13 / −1 · 1 year ago

      Every time I read this sort of sentiment on Lemmy I’m just totally confused. Have you actually worked with ChatGPT yet? Have you asked it to do things for you and given it very clear instructions, like you would a new employee? I’ve been completely amazed by it. It has improved my productivity at work by probably 600%. It also helps me edit my emails for tone and clarity, and can format shit that would take me hours in like half a second.

      • robbieIRL@sh.itjust.works · +2 · 1 year ago

        Every time I read this sort of sentiment in a comment I’m just totally confused. Do you think Microsoft only looked at your usage for their reports?

        Joking aside, no one is denying it works for you, but the article is suggesting that’s not the case for everyone.

      • Corkyskog@sh.itjust.works · +1 · edited · 1 year ago

        Haha, I thought that they were going to say “Of course demand is shrinking because OpenAI put a rate limit in for unpaid users”

        I haven’t used OpenAI directly yet. Earlier there were some programs on mobile that would access the API, and I used those. When you start an account now you get a certain amount of tokens to use. I have no idea how quickly I would go through the prompt tokens, so I haven’t used it at all. I am waiting until I need it for something important.

        Edit: Apparently the tokens are only for the API as another user has informed me.

        • XTornado · +2 · 1 year ago

          Just in case it’s not clear for other people: the token thing is just for the API. Using their website is unlimited, as long as it’s GPT-3 of course.

          • Corkyskog@sh.itjust.works · +2 · edited · 1 year ago

            Oh, neat! I am really glad you made this comment, I was terrified I would burn through all my tokens and then be locked out without a subscription.

            They must be trying to clamp down on those mobile apps that are essentially making money off them as a third-party portal.

    • PM_ME_YOUR_ZOD_RUNES@sh.itjust.works · +15 / −5 · edited · 1 year ago

      I disagree that it has limited uses, and I do believe it is a big step towards science-fiction-level AI. I use it almost every day. It’s great for so many things: cooking, spelling/grammar, coding, brainstorming, and finding information, to name a few.

      I’m pretty tech savvy but know nothing about coding. Using ChatGPT I was able to create VBA code for work that will save me and my team hundreds of hours per year. It took a lot of time, patience, and troubleshooting, but I managed to get something that suits our needs exactly and functions as I want it to. I would never have done this otherwise. ChatGPT made it possible.

      I will admit that it has limitations and can be quite stupid. It won’t do everything and you have to help it along sometimes. But at the end of the day, it is a powerful tool once you learn how to use it.

      • ImFresh3x@sh.itjust.works · +8 · edited · 1 year ago

        How do you use it for cooking? I can’t imagine it’s better than having an actual recipe written by someone you trust.

        And for grammar I find Grammarly to be way better.

        • PM_ME_YOUR_ZOD_RUNES@sh.itjust.works · +3 · 1 year ago

          Because you can ask it questions about the recipe it gives. It also gets straight to the point, unlike pretty much every online recipe.

          But for the most part I don’t really follow recipes, so I rarely use it for that. It’s mostly questions about cooking techniques, timings and advice.

      • Brocken40@sh.itjust.works · +12 / −6 · 1 year ago

        It’s not really a step towards sci-fi-level AI; it’s just a slightly more advanced version of clicking on the first autopredicted word when you type a sentence on your cell phone. The tools you needed already existed and were stolen, and are now spit out by a very fancy text prediction algorithm.

        • BitSound@lemmy.world · +7 / −2 · 1 year ago

          I’d disagree, and go so far as to say that it’s a baby AGI, and we need new terms to talk about the future of these approaches.

          To start, “fancy autocomplete” is correct but useless, in the same way that calling the human brain just a bunch of meat is correct but useless. Assume we built an autocomplete so good at its job that it knew every move you were about to make and every word you were about to speak. Yes, it’s “just a fancy autocomplete”, but one that must be backed by at least human-level intelligence. At some level of autocomplete ability, there must be a model backing it that can be called “intelligent”, even if that intelligence looks nothing like human intelligence.

          Similarly, the “fancy autocomplete” that is GPT-4 must have some amount of intelligence, and this intelligence is a baby AGI. When AGI is invoked, people tend to get really excited, but that’s what the “baby” qualifier is for. GPT-4 is good at a large variety of tasks without extra training, and this is undeniable. You can quibble about what good means in this context, but it is able to handle simple tasks from “write some code” to “what are the key points in this document?” to “tell me a bedtime story” without being specifically trained to handle those tasks. That was unthinkable a year ago, and is clearly a sign of a model that has been able to generalize across many different tasks. Hence, AGI. It’s not very good at a lot of those tasks (but surprisingly good at a lot of them), but it knows what the task is, and is trying its best. Hence, baby AGI.

          Yeah, it’s got a lot of limitations right now. But hardware is only getting cheaper, and we’re developing techniques like Chain of Thought prompting that give LLMs a kind of short-term working memory, which helps immensely. A linguist I know once said that the approaches we’re taking are like building a ladder to the moon. Well, we’ve started building a hell of a ladder, and I’m excited to see where it takes us.
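
          For anyone unfamiliar: Chain of Thought prompting just means asking the model to write out intermediate reasoning before committing to a final answer. A minimal sketch of how such a prompt might be built; the wording and chat-message format here are illustrative, not any official API:

```python
def chain_of_thought_messages(question: str) -> list[dict]:
    """Build a chat-style message list that nudges the model to
    reason step by step before giving its final answer."""
    return [
        {"role": "system",
         "content": ("Work through the problem step by step, "
                     "then state the final answer on its own line.")},
        {"role": "user",
         "content": f"{question}\nLet's think step by step."},
    ]

# Example: a question that plain one-shot prompting often fumbles.
messages = chain_of_thought_messages(
    "A bat and a ball cost $1.10 total; the bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
```

          The step-by-step text the model generates acts as the scratchpad, which is what gives it that short-term working memory.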

          • Brocken40@sh.itjust.works · +8 / −3 · 1 year ago

            I don’t care what y’all call it, AI, AGI, Stacy, it doesn’t change the fact that it was 100% trained on books tagged as “bedtime stories” to tell you a bedtime story; it couldn’t tell you one otherwise.

            Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

            Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.

            https://en.m.wikipedia.org/wiki/Chinese_room

            • BitSound@lemmy.world · +5 · 1 year ago

              Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

              But why? Also, “has free will” is exactly equivalent to “I cannot predict the behavior of this object”. This is a whole separate essay, but “free will” is relative to an observer. Nobody thinks a rock has free will. Some people think cats have free will. Lots of people think humans have free will. This is exactly in line with how hard it is to predict the behavior of each. You don’t have free will to an omniscient observer, but that observer must have above-human-level intelligence. If that observer happens to have been constructed out of silicon, it doesn’t really make a difference.

              Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.

              But it can. It uses its prior experience to produce novel output, much like humans do. Hell, I’d say most humans wouldn’t pass your test for intelligence, and in fact they’re just 3 LLMs in a trenchcoat.

              https://en.m.wikipedia.org/wiki/Chinese_room

              Yeah, the reality is that we’ve built a Chinese room. And saying “well, it doesn’t really understand” isn’t sufficient anymore. In a few years are you going to be saying “we’re not really being oppressed by our robot overlords!”?

              • Brocken40@sh.itjust.works · +2 / −1 · 1 year ago

                I’m saying that if there is anyone, including an omniscient observer, who can predict a human’s actions perfectly, that is proof that free will doesn’t exist at all.

      • nanoUFO@sh.itjust.works · +5 · 1 year ago

        It was good at answering questions that were already answered on the web, and then presenting whatever info it found nicely, even if it was wrong.

        • KnightontheSun@sh.itjust.works · +3 · 1 year ago

          Didn’t stop my mobile game support group from implementing it. I provide clear input on an issue and I receive a polite automated answer that is completely wrong. It only improves once a human gets involved.