• mindbleach@sh.itjust.works · 1 year ago

    I’d say it isn’t, but I’d be saying it about the existing applications (and near-future stuff that currently sorta-kinda works), not the tech-bro delusions that wind up on billboards and cause aging suits to both sweat and drool.

    AI is the kind of bubble where the tech involved is halfway to magic. And it’ll run on your local hardware. No matter how hard the offices-and-capital side crashes and burns, that’s not going anywhere.

    Right now is the worst that image AI will ever be again.

    LLMs might stumble, because the big-iron approach seems to make a difference for them, but there are local versions and they do roughly the same things. That’s going into video games, for a start, and probably turning every single NPC into a verbal chatbot that can almost hold a conversation.
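
    (As a rough sketch of what that could look like: a minimal local NPC chatbot turn using the Hugging Face transformers chat pipeline. The model name, persona, and generation settings are illustrative assumptions, not anything from the article - the only point is that the whole loop runs on local hardware.)

    ```python
    # Illustrative sketch only: one "NPC chatbot" turn with a small open model.
    # Recent transformers versions accept chat-style message lists directly in
    # the text-generation pipeline; the model choice here is just an assumption.
    from transformers import pipeline

    chat = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # small enough for consumer GPUs
    )

    NPC_PERSONA = (
        "You are a gruff blacksmith in a small fantasy town. "
        "Answer the player in one or two short sentences."
    )

    def npc_reply(player_line: str) -> str:
        messages = [
            {"role": "system", "content": NPC_PERSONA},
            {"role": "user", "content": player_line},
        ]
        out = chat(messages, max_new_tokens=60, do_sample=True, temperature=0.8)
        # The pipeline returns the whole conversation; the last message is the reply.
        return out[0]["generated_text"][-1]["content"]

    print(npc_reply("Can you repair my sword before nightfall?"))
    ```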

    The universe of low-stakes, high-dollar applications for AI is so small that I can’t think of anything that belongs in it.

    … entertainment is a bajillion-dollar industry where abject failure is routine and hilarious. There will be a boom like there was for CGI, only with even worse treatment of the tiny companies performing miracles. Workers getting screwed while revenue floods in is not the same thing as a bubble bursting. Unfortunately.

    I’m disappointed in Doctorow for asserting this technology will remain big and complex and expensive. When has that ever stayed true? Saying it’ll always take forest-eating big iron sounds like predicting computers will only be affordable to the five richest kings of Europe. This whole neural-network revival kicked off because consumer hardware made training feasible. More training equals better models, and more computers equals more training, but Google’s still pouring gigawatts into glorified video-game tech.

    If all the creative and academic zeal gets left working with mundane single machines - guess where all the advancements will happen.

  • FunctionFusilli@discuss.tchncs.de · 1 year ago

    An interesting article, but it seems to miss the big applications of AI. It isn’t all about LLMs and other large models; where AI will definitely be used is in smaller-scale problems where specialized models can be pruned down. There is a bubble, that’s for sure, but it’s in the use of large, unpruned models for menial tasks.
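
    (To make “pruned” concrete, here’s a minimal sketch of magnitude pruning in PyTorch. The layer sizes and the 80% pruning amount are made-up numbers for illustration; a real pipeline would prune a trained specialized model and fine-tune it afterwards.)

    ```python
    # Illustrative sketch: shrink a network by zeroing its smallest-magnitude weights.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(
        nn.Linear(256, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    )

    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.8)  # drop 80% of weights
            prune.remove(module, "weight")  # bake the pruning mask into the weight tensor

    total = sum(p.numel() for p in model.parameters())
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    print(f"overall sparsity: {zeros / total:.1%}")
    ```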

    • mo_ztt ✅@lemmy.worldOP · 1 year ago

      Yeah. To me it seems transparently obvious that at least some of the applications of AI will continue to change the world - maybe in a big way - after the bust that will inevitably happen to the AI-adjacent business side after the current boom. I agree with Doctorow on everything he’s saying about the business side, but that’s not the only side and it’s a little weird that he’s focusing exclusively on that aspect. But what the hell, he’s smart and I hadn’t seen this particular business-side perspective before.

      • sylver_dragon@lemmy.world · 1 year ago

        I think he touches on this when he talks about the high-value, fault-intolerant applications. The problem with AI, as it stands now, is that it’s very good at producing high-quality bullshit. Maybe its analysis and output are spot on. Or maybe it’s over-fitted to something that has no real connection to what you’re trying to predict. And because much of it remains a black box, telling the two apart often takes so much time that workers don’t save any time at all. For applications where a better way to dig through a mountain of data would be beneficial, an AI that sends you down the wrong rabbit hole can be costly and make using that AI questionable.
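
        (A toy example of that failure mode, with entirely made-up data: a model that looks spot on in training while it’s actually fitted to noise with no connection to the target.)

        ```python
        # Toy illustration: 500 random features with no relationship to the labels.
        # Training accuracy looks great; held-out accuracy is a coin flip.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 500))    # pure noise features
        y = rng.integers(0, 2, size=200)   # labels unrelated to X

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        print("train accuracy:", clf.score(X_tr, y_tr))  # ~1.0, looks impressive
        print("test accuracy: ", clf.score(X_te, y_te))  # ~0.5, i.e. a coin flip
        ```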

        This has been my own experience with “AI-driven tools”. I work in cybersecurity, and you can’t swing a Cat-5 O’ Nine Tails without hitting some vendor trying to sell you on the next big AI-driven security tool. And they’re crap, one and all. What they do very, very well is churn out false positives that analysts then lose hours trying to understand: what the fuck did the AI see that made it alert? And since the AI and its algorithms are the “secret sauce” the vendor is selling, they do exactly fuck all to help analysts understand the “why” behind an alert. And it’s almost always a false positive. Of course, those vendors will swear up and down that it’s just a matter of better tuning of the AI model on your network. And they’ll sell you lots of time with their specialists to try and tune the model. It won’t help, but they’ll keep selling you that tuning all the same.
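
        (Back-of-the-envelope illustration of why “it’s almost always a false positive” - the rates below are invented, but the base-rate effect is real: when true incidents are rare, even a decent detector’s alert queue is overwhelmingly false positives.)

        ```python
        # Made-up rates, real arithmetic (Bayes' rule): with rare incidents,
        # false positives dominate the alerts.
        true_positive_rate = 0.95    # detector catches 95% of real incidents
        false_positive_rate = 0.01   # but flags 1% of benign events anyway
        incident_base_rate = 1e-5    # 1 real incident per 100,000 events

        p_alert = (true_positive_rate * incident_base_rate
                   + false_positive_rate * (1 - incident_base_rate))
        p_real_given_alert = true_positive_rate * incident_base_rate / p_alert

        print(f"chance an alert is a real incident: {p_real_given_alert:.3%}")  # ~0.1%
        ```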

        In the long term, I do think AI will have a place in many fields. Highly specialized AI that isn’t a black box will be a useful tool for people in lots of fields. What it won’t be is a labor-saving device; it will just make the people doing those jobs more accurate. But it’s not being sold that way, and we need the current model to collapse and demonstrate that AI is just not ready to take over most roles yet. Then maybe we can start treating AI as a tool that makes good people better, not as a way to replace them.

      • HakFoo@lemmy.sdf.org · 1 year ago

        I wonder if we’ll see a lot of special-purpose models -- we start with a vast model and gradually cut it down so it fits on smaller hardware but only does one thing well.

        Or if we’ll just end up collapsing them into more recognizable code.
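
        (One established way the “start vast, cut it down to one thing” idea gets done is knowledge distillation - the sketch below is my own illustration with placeholder models and data, not anything from this thread.)

        ```python
        # Schematic distillation loop: a small student learns to match a big teacher's
        # output distribution, so only the narrow capability needs to fit on device.
        # Both models and the data here are stand-ins for real ones.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        teacher = nn.Sequential(nn.Linear(64, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
        student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

        opt = torch.optim.Adam(student.parameters(), lr=1e-3)
        T = 2.0  # softening temperature

        for step in range(1000):
            x = torch.randn(128, 64)  # stand-in for real task inputs
            with torch.no_grad():
                teacher_logits = teacher(x)
            # KL divergence between softened teacher and student distributions.
            loss = F.kl_div(
                F.log_softmax(student(x) / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean",
            ) * T ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
        ```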

  • AnneBonny@lemmy.dbzer0.com · 1 year ago

    Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind. Sometimes, it can be hard to guess what kind of bubble you’re living through until it pops and you find out the hard way.

    Contrast that bubble with, say, cryptocurrency/NFTs, or the complex financial derivatives that led up to the 2008 financial crisis. These crises left behind very little reusable residue.

    The 2008 financial crisis was not the result of a “tech” bubble.

    • mindbleach@sh.itjust.works · 1 year ago

      Eh. Financial instruments are an invention, and they can shape advances as readily as any electronic innovations. Joint-stock companies contributed to the age of sail as much as ship’s clocks and other navigation tools. That gimmick is ultimately why stock exchanges have been high-tech digital affairs (with quite a lot of analog interfaces) since before microchips.

      Enron claimed to have some vague new means of lowering costs, acquiring revenue, and making the line go up. It would have been transformative for the power industry if it wasn’t straight-up fraud.

      The 2008 lending crisis was halfway between those. It was a new strategy that turned crappy lending into worthwhile lending… in theory. And it relied on a ton of data. Too bad it was cyclical bullshit.

      • AnneBonny@lemmy.dbzer0.com · 1 year ago

        The 2008 crisis was the result of predatory lending practices, mortgage-backed securities stuffed with subprime mortgages and other toxic assets, and a burst real estate bubble. I don’t think it was a tech bubble like the dotcom bust.

        Enron claimed to have some vague new means of lowering costs, acquiring revenue, and making the line go up. It would have been transformative for the power industry if it wasn’t straight-up fraud.

        Well, they were gaming the rules in the California energy market. They were doing shit like causing blackouts so they could price gouge.

        • mindbleach@sh.itjust.works · 1 year ago

          Right, and also just making shit up. Enron’s executives literally could not describe how they made so much money. The secret ingredient was crime.

          The 2008 crisis was at least somewhat rooted in reality. It was a risk management strategy that - in theory - let them lend out a looot more money, by being less likely to lose money, even if they gained less per-dollar. But of course every single asshole involved cranked up their flour-to-sawdust ratio and promised nothing could possibly go wrong.

          I think it’s worth including finance shenanigans alongside tech bubbles, because even if finance isn’t tech, tech bubbles are absofuckinglutely just another finance shenanigan. The dot-com bubble didn’t burst because people stopped using the web. It was all about obscene sums of money being thrown at stupid ideas. Some of those stupid ideas worked! If not for the investor-class expectation that you turn the crank and get ten dollars for one dollar, we’d simply describe the period as one of naive experimentation. Better environments would allow people to try their big stupid ideas using their own money, or mundane business loans, instead of trying to attract millions and promise billions.

          It’s only a bubble because greedy idiots pump it up. Without that, failures are still news, but they don’t become a historical event.

    • davelA · 1 year ago

      Thanks for your pedantic, useless gotcha analysis.