Sorry if this is a dumb question, but does anyone else feel like technology - specifically consumer tech - kinda peaked over a decade ago? I’m 37, and I remember being awed between like 2011 and 2014 by phones, voice assistants, smart home devices, and what websites were capable of. Now it seems like much of this stuff either hasn’t improved all that much, or is straight up worse than it used to be. Am I crazy? Have I just been out of the market for this stuff for too long?

  • Grimy@lemmy.world · ↑7 ↓6 · 1 month ago

    We just had an AI boom, and now my computer can write text and code. It can generate images, voices and music almost as well as a human. All of this happened in the last two years, so I don’t understand the feeling. I was personally blown away the first times I used things like ChatGPT, Stable Diffusion, ElevenLabs and Udio.

    • Zangoose@lemmy.world · ↑13 ↓1 · 1 month ago

      My current phone has less utility than the phone I had in 2018, which had a headphone jack, an SD card slot, an IR emitter (I could use it as a TV remote!), a heart-rate sensor, and a decent camera.

      My current laptop is less upgradable than pretty much anything that came out in 2010. The storage uses a technically standard but uncommon drive size, and the Wi-Fi and RAM are both soldered on. It is faster and has a nicer screen, but DRM in web browsers makes it hard to take advantage of that screen, and bloated Electron apps make it not feel much faster.

      Oh, but here’s the catch! Now, thanks to a significant amount of stolen data being used to train some autocorrect, my computer can generate code that’s significantly worse than what I can write as a junior software dev with under a year of job experience, and that takes twice as long to debug. It can also generate uncanny-valley-level images that look about as good as you’d expect from a one-sentence prompt.

      • Grimy@lemmy.world · ↑3 ↓1 · 1 month ago

        Buying shit products doesn’t mean technology isn’t advancing. My phone still has all the things you mentioned, and it was one of the cheaper models. Speaking of SD cards, a 500 GB one is $20 now. They didn’t even exist in that size a decade ago, if I’m not mistaken.

        Is this about whether technology is advancing, or whether it’s doing so morally? Your personal opinion on technology doesn’t change its merit. And seriously, if you can’t see the leaps and bounds gen AI has made and how little of that uncanny feeling is left, you are just sitting there with your head in the sand.

        Fact is, if I had asked you three years ago how long it would take for consumer computers to be able to generate realistic images pixel by pixel, you would have straight up laughed at me. I bet you would have confidently told me it was impossible.

        • Zangoose@lemmy.world · ↑2 · edited · 1 month ago

          And if I had asked you two years ago, I bet you’d have thought LLMs would be a lot better by now :)

          • Grimy@lemmy.world · ↑1 · edited · 1 month ago

            They have? We went from context windows of 2k tokens to 1 million. The open-source scene came about in that time and has basically caught up with the paid alternatives. They can accept visual data now and have gotten very good at reading it. Some LLMs have been built to work audio-to-audio without any text involved. The new thing this week is having one control your mouse and computer for you.

            I hope you aren’t trying to imply that because we don’t have AGI within two years, LLMs are failing. Falling for a venture capitalist lie doesn’t mean the sector is stagnating.

            I know this was supposed to be a “gotcha” moment, but it makes it clear you don’t know much about this beyond what the media told you to think. AI hate gets clicks.

            • Zangoose@lemmy.world · ↑2 · 1 month ago

              I’m not trying to say LLMs haven’t gotten better on a technical level, nor am I trying to say there should have been AGI by now. I’m trying to say that from a user perspective, ChatGPT, Google Gemini, etc. are about as useful as they were when they came out (i.e. not very). Context size might have grown, but what does that actually mean for a user? ChatGPT writing is still obviously identifiable and immediately discredits my view of someone when I see it. Same with AI-generated images. From experience, ChatGPT, Gemini, and all the others still hallucinate facts, which makes them near-useless for learning new topics since you can’t be sure what is real and what’s made up.

              Another thing I take issue with is “open source” models that are driven by VCs anyway. The weights of an LLM might be released openly, but is the LLM actually open source? IMO this is one of those things where the definitions haven’t caught up with actual usage. A set of numerical weights obtained by training on millions of pieces of involuntarily taken data, under retroactively modified terms of service, doesn’t seem open source to me, even if the weights themselves are freely available. And AI companies have openly admitted that they would never have been able to make what they have if they had had to ask for permission. When you say that “open source” LLMs have caught up, is that true, or are these the LLM equivalent of uploading a compiled binary to GitHub and calling that open source?

              ChatGPT still loses OpenAI hundreds of thousands of dollars per day. The only way for a user to be profitable to them is to pay for the paid tier and never use it. The service was always venture capital hype to begin with. The same applies to Copilot and Gemini, and probably to companies like Perplexity as well.

              My issue with LLMs isn’t that they’re useless or immoral. It’s that they’re mostly useless and immoral, on top of causing higher emissions and making it harder to find actual results as AI-generated slop combines with SEO. They’re also normalizing the collection of any and all user data for training purposes, including private data such as health-tracking apps, personal emails, and direct messages. Half-baked AI features aren’t making computers better, they’re actively making computers worse.

    • AndyMFK@lemmy.dbzer0.com · ↑9 ↓1 · 1 month ago

      If you think an AI can do those things almost as well as a human, you should find more capable humans to hang out with.

      • Grimy@lemmy.world · ↑2 · edited · 1 month ago

        If it were shit, people wouldn’t be complaining about having to compete with it.

        It really depends on what we mean by “better than a human”. Can AI draw better than the average human? Yes. Can it draw better than the top 10% of artists? No.

        In any case, this is a tool. It helps me make up for the skills I don’t have, whether that’s to entertain me or to help me get something done. It just needs to be better than the human I can afford to hire, and I’m broke, so the bar is low tbh.

        And let’s be honest, the average human is kind of shit at most things. Half of America can’t even think for itself apparently, so the bar isn’t very high on what they can do either.