This is just a draft, best refrain from linking. (I hope we’ll get this up tomorrow or Monday. edit: probably this week? edit 2: it’s up!!) The [bracketed] stuff is links to cites.

Please critique!


A vision came to us in a dream — and certainly not from any nameable person — on the current state of the venture-capital-fueled AI and machine learning industry. We asked around and several in the field concurred.

AIs are famous for “hallucinating” made-up answers with wrong facts. The hallucinations are not decreasing; in fact, they’re getting worse.

If you know how large language models work, you will understand that all output from an LLM is a “hallucination” — it’s generated from the latent space and the training data. But if your input contains mostly facts, then the output has a better chance of not being nonsense.
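
If you want to see how un-mystical the process is, here is a minimal sketch of the sampling loop, using the small open GPT-2 model via the Hugging Face transformers library (the prompt is our own example; any model works the same way). The loop does one thing: pick the next token from a probability distribution. No step checks anything against reality:

    # A minimal sketch of LLM text generation: repeatedly sample the next
    # token from a probability distribution. Nothing here consults reality.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The first person to walk on the moon was", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[0, -1]      # a score for every token in the vocabulary
            probs = torch.softmax(logits, dim=-1)  # scores become a probability distribution
            next_id = torch.multinomial(probs, 1)  # sample one; truth never enters into it
            ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

    print(tok.decode(ids[0]))

Whether the completion happens to be true depends entirely on what the training data made probable. That is the whole trick.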

Unfortunately, the VC-funded AI industry runs on the promise of replacing humans with a very large shell script. If the output is just generated nonsense, that’s a problem. There is a slight panic among AI company leadership about this.

Even more unfortunately, the AI industry has run out of untainted training data. So they’re seriously considering doing the stupidest thing possible: training AIs on the output of other AIs. This is already known to make the models collapse into gibberish. [WSJ, archive]
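
You can watch a miniature version of this collapse without a single GPU. Here’s a toy sketch of our own (much simpler than the setups in the research): fit a simple statistical model to some data, sample fresh “data” from the fit, refit on the samples, and repeat. Each generation trims the tails and adds estimation noise, and the spread eventually decays toward nothing:

    # Toy model collapse: each generation "trains" on the previous
    # generation's output. Tail information erodes and the spread decays.
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(0.0, 1.0, size=100)    # generation 0: real "human" data

    for gen in range(1, 301):
        mu, sigma = data.mean(), data.std()  # fit a model to the current data
        data = rng.normal(mu, sigma, 100)    # the next generation sees only model output
        if gen % 50 == 0:
            print(f"gen {gen:3d}: mean {mu:+.3f}, std {sigma:.3f}")

LLMs are vastly more complicated than a two-parameter Gaussian, but the feedback loop has the same shape.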

There is enough money floating around in tech VC to fuel this nonsense for another couple of years — there are hundreds of billions of dollars (family offices, sovereign wealth funds) desperate to find an investment. If ever there was an argument for swingeing taxation followed by massive government spending programs, this would be it.

Ed Zitron gives it three more quarters (nine months), and the gossip concurs with him. There should be at least one more wave of massive overhiring. [Ed Zitron]

The current workaround is to hire fresh PhDs to fix the hallucinations and try to underpay them on the promise of future wealth. If you have a degree with machine learning in it, gouge them for every penny you can while the gouging is good.

AI is holding up the S&P 500. This means that when the AI VC bubble pops, tech will drop. Whenever the NASDAQ catches a cold, bitcoin catches COVID — so expect crypto to go through the floor in turn.

  • bitofhope@awful.systems · 9 months ago

    Yes.

    Pessimistically, the world can stay irrational (heh) longer than we can stay solvent (alive and well enough to work with this sneerious outlet)

  • froztbyte@awful.systems · 9 months ago

    The one major comment from me is I think you have a framing weakness centered on “if you know how LLMs work”, since that is a very load-bearing point for much of what follows but holds no anchor point for those who do not know to also follow along (unless going on faith)

    I realize you’re not writing this as an explainer blog, but a short sentence + link to elsewhere with an explanation (“for those who don’t know, <link> has a decent explanation without being overly technical”) might be a good patch for that?

    Is it worth also casting a light on the various vendor deals with e.g. Reddit (et al), in their search for structured and scoped training data?

    • David Gerard@awful.systemsOP · 9 months ago

      yeah, it’s the balance between “as you know” (when a given post will always be someone’s first) and explaining the universe from first principles

      • froztbyte@awful.systems · 9 months ago

        Yep, I know you know that. To be clear, my suggestion was more to have the extra outside ref link, not to suggest more work. Apologies, flubrain the last few days kicking my ass

  • self@awful.systemsM · 9 months ago

    it’s a good section! you can tell it’s effective when an AI fan spontaneously appears to show us his entire ass.

    my one suggestion is to expand upon this paragraph:

    > The current workaround is to hire fresh PhDs to fix the hallucinations and try to underpay them on the promise of future wealth. If you have a degree with machine learning in it, gouge them for every penny you can while the gouging is good.

    I feel this could be followed up by a paragraph describing the extremely cultlike environment we know exists in damn near every serious AI company. for me, that’s the missing piece of why so many PhDs are excited to be underpaid at OpenAI, a company with extremely questionable motives and practices in the academic space, and other AI companies with similar motives. through SneerClub we know the origins of that environment, but I haven’t seen a thorough analysis yet of the financial motives behind insisting your engineers are all members of ComStar, other than what we saw earlier this year after the schism at OpenAI. it’s very likely you have better sources for this stuff than I do though.

  • titotal@awful.systems · 9 months ago

    I’m not a stock person, man, but didn’t the hype from bitcoin last like a decade, despite not having a single widespread use case? Why wouldn’t LLM hype last the same amount of time, when people actually use it for things?

    • David Gerard@awful.systemsOP · 9 months ago

      the market can stay fucking stupid well beyond reason and crypto has thoroughly disabused me of the efficient market hypothesis, but i don’t think AI has the Ponzi-like nature of bitcoin, there’s no dream of getting rich for free for the common degen

    • Mii@awful.systems · 9 months ago

      From my uneducated perspective, LLM hype seems to me more like any other tech bubble than Bitcoin. It is actually built on the promise of return on investment. But somehow the whole industry seems to burn way more money than it can rake in, and this has to, at some point, raise some eyebrows with the investors. Normally, they prop up dozens of startups, calculating with a high failure rate because one successful venture would cover the losses plus turn a profit. AI companies, however, burn so much money and still have no way to make that back, so this concept doesn’t work.

      I don’t think you can keep this alive just by convincing the next idiot to pump in more money than you did, like you can with Bitcoin.

        • mountainriver@awful.systems · 9 months ago

          I don’t have a dog in the race and I always think the bubbles will burst before they do. But with that caveat, shouldn’t the interest rates be a factor?

          My reasoning is that part of a bubble is that as long as line goes up there are assets which can be used as collateral for loans for new money to push the line up. With a low interest rate the new money is cheaper, with high interest it’s more expensive. So all else equal, the bubble should burst quicker with higher interest rates.

  • Steve@awful.systemsM · 7 months ago

    Sorry I missed this, especially as a mod of this sub. It was a great post, so I’m assuming all the good feedback below was helpful

  • Jake [he/him] · 9 months ago

    It is a more nuanced issue than this, and a bit naive as well, if I can be blunt and still say: friend, I cared enough to read and comment.

    The next most probable token is not related to hallucinations as described. This is like saying statistics is worthless because it doesn’t give absolute answers.

    AI can mean a great many different architectures and all have different strengths, weaknesses, and issues. The best way I can relate current LLM’s is the early days of the microprocessor. There were a lot of issues and limitations to work through. Eventually companies like Sun made some really capable and powerful machines after the first few generations of microprocessors. These devices became much more complex with time, integrating many peripheral devices. Many systems used several microprocessors in a single machine when it wasn’t cost prohibitive. If you have a computer in the last few generations, you have around twenty very similar microprocessors all working together on the same chip.

    AI is presently at that early stage. It is a useful fundamental tool, but by itself it is not very remarkable. The innovations in the peripheral space are where things get interesting. The way these innovations get integrated and the way the complexity multiplies over time will follow a similar curve as the microprocessor.

    There is and will be lots of misuse and failed companies over time, but the technology will continue. This is an inevitable future and it will never go away.

    When someone talks about hallucinations, it means the model is outside of alignment. The complexity of the model and its bias is the largest factor. In many ways this is why the proprietary AI models will fail eventually. Open Source, offline models are the future of AI for text. An 8×7B unfiltered research model hallucinates far less than others and does not involve the massive amount of data that can be collected and inferred by proprietary AI.

    The majority of hallucinations are due to user input errors that are not accounted for in the model tokenizer and loader code. This is just standard code errors. Processing every possible spelling, punctuation, and grammar error is a difficult task. The next probable token is not simply a matter of the probable vector in the dataset. If the input contains a rare error the output will be in the style of a foolish error. This is not a hallucination. It is responding in the style it was addressed. If the user does not have control of the entire text inside the present context, aka previous questions and answers, the style of “stupid error” will likely remain persistent. It is still not an error, it is just a mirror.

    Indeed this is the greatest analogy. AI is like a mirror of yourself upon the dataset. It can only reflect what is present in the dataset and only in a simulacrum of yourself through the prompts you generate. It will show you what you want to see. It is unrivaled access to information if you have the character to find yourself and what you are looking for in that reflection.

    • blakestacey@awful.systems · 9 months ago

      > The best way I can relate current LLM’s is the early days of the microprocessor.

      Early microprocessors could do arithmetic correctly.

      • Jake [he/him] · 9 months ago

        No they could not. They could not do floating point at all.

        • Sailor Sega Saturn@awful.systems · 9 months ago

          > What is the result of subtracting the floating point number with hexadecimal representation 488299c6 from the one with hexadecimal representation 4cbe9a58?

          Sooo… it turns out Bing Chat and Gemini can’t do floating point math right now :D

          And don’t tell me the question is vague or misleading until the chatbots can actually recognize that or return an error code, this isn’t exactly the sort of thing people are feeding into integration tests since they’re indeterministic as heck and generally computed by some company losing tons of money somewhere off-site.

          (Not sharing the result, because I still stubbornly refuse to spread AI output whatsoever)
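
          For reference, the correct answer takes a few lines of ordinary Python and no chatbot at all: decode the two bit patterns as IEEE-754 singles and subtract:

              # Decode both hex bit patterns as IEEE-754 single-precision floats.
              import struct

              a = struct.unpack(">f", bytes.fromhex("4cbe9a58"))[0]  # 99930816.0
              b = struct.unpack(">f", bytes.fromhex("488299c6"))[0]  # 267470.1875
              print(a - b)  # 99663345.8125 (a strict float32 subtract rounds to 99663344.0)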

          • froztbyte@awful.systems · 9 months ago

            The really fantastic part about this is that it’s long been possible to get reliable performance out of irc chatbots for this sort of thing, with people pulling all kinds of nasty extractor shit

            “It’s just people badly providing input” was the most big-brain take I’ve seen in a while. Honestly reads like someone in the industry who hates their users, of the “my code is great, it’s these damn idiots that don’t know how to use the system!” variety

            • mlen@awful.systems · 9 months ago

              “you’re holding it wrong” worked for iphones, so maybe it’ll work for llms too…

              • froztbyte@awful.systems · 9 months ago

                the iphone did already have other utility alongside it though, so people were making use of it regardless. not excusing how apple handled that, mind, that was bullshit, but I meant that people were still motivated users

                openai’s particular flavours of this shit are still failing to find viable footholds and there’s nothing that is a so-called “killer app”, which is the other thing that really weakens its case

                but that dynamic, of programmer disregard for how people use their products… oof. less pls. return to sender. unsubscribe.

    • self@awful.systemsM · 9 months ago

      I’ve never said this before, but please tell me you used an LLM to generate this horseshit. no part of what you said is correct and it doesn’t take much knowledge of the tech to realize you’re either bullshitting or regurgitating marketing materials

      • Jake [he/him] · 9 months ago

        I use the tech every day. Good luck with your echo chamber. You are a statistical inevitability. Time will teach you far more than I care to.

        • Amoeba_Girl@awful.systems · 9 months ago

          wait… i’m sure it sounded cool and all but what does it mean for a person existing here and now to be inevitable in a statistical sense…

          • self@awful.systemsM · 9 months ago

            that’s why I wish they’d given us more before they went I said good day sir and fucked off. I wanted more fractally wrong shit from the mind that gave us “the only issue with LLMs is user input, you poor naive soul” and “early computers couldn’t do arithmetic, ever heard of floating point? you fools” and that last one keeps being wrong in exciting new ways every time I think about it

    • ebu@awful.systems · 9 months ago

      > The best way I can relate current LLM’s is the early days of the microprocessor.

      i promise we did it! we made iphone 2! this is just like iphone 2! of course it doesn’t work yet but it will work eventually! we made iphone 2 please believe us!!

      he’s already banned but i love how every time this argument comes up there’s absolutely no substance to the metaphor. “ai is like the internet/microprocessors/the industrial revolution/the Renaissance”, but there’s no connective tissue or actual relation between the things being compared, just some hand-waving around the general idea of progress and pointing to other popular/revolutionary things and going “see! it’s just like that!”

      > The majority of hallucinations are due to user input errors that are not accounted for in the model tokenizer and loader code. This is just standard code errors. Processing every possible spelling, punctuation, and grammar error is a difficult task.

      “i’m sorry, but you used the wrong form of ‘their’ in your prompt, that’s why it inexplicably included half a review of Click in your meeting summary.”

      > AI is like a mirror of yourself upon the dataset. It can only reflect what is present in the dataset and only in a simulacrum of yourself through the prompts you generate. It will show you what you want to see. It is unrivaled access to information if you have the character to find yourself and what you are looking for in that reflection.

      s-tier. no notes. does lemmy have user flairs? because if so i’m calling dibs

      • froztbyte@awful.systems · 9 months ago

        It doesn’t have flairs yet (I also thought of that a while back), but I guess now we can do it in philthy!

        • self@awful.systemsM · 9 months ago

          user flairs, even in a non-federated form, are a feature I’ve wanted since the beginning. they’d be a great early feature for philthy!

    • froztbyte@awful.systems · 9 months ago

      I know your idiot ass has already been banned and I’ll likely never get an answer, so I ask this in full recognition of saying hi to the void, but

      You were the kind of person to excitedly evangelise about bitcoin (and other shitcoins) with people at bars in the 2016~2020 years, weren’t you?