• Antiwork@hexbear.net · 93 points · 4 months ago

    Now that capital has integrated them into their system they will not be allowed to fail. At least for now.

  • PaX [comrade/them, they/them]@hexbear.net · 86 points · 4 months ago (edited)

    Good, please take the entire fake industry with you

    No offense to the AI researchers here (actually maybe only one person lol), but the people who lead/make profit off of/fundraise off of your efforts now are demons

    • ☆ Yσɠƚԋσʂ ☆ (OP) · 64 points · 4 months ago

      I do think that if OpenAI goes bust that’s gonna trigger a market panic that’s gonna end the hype cycle.

      • hexaflexagonbear [he/him]@hexbear.net · 37 points · 4 months ago (edited)

        My guess for the dynamics: OpenAI investors panic and force the company to cut costs and raise prices; other AI companies' investors panic, with the same result; AI becomes prohibitively expensive for a lot of use cases, ending the hype cycle.

        • LanyrdSkynrd [comrade/them, any]@hexbear.net · 23 points · 4 months ago

          I think that’s the best argument for why the tech industry won’t let that happen. All of the big tech stocks are getting a boost from this massive grift.

          Worst case scenario, one of the tech giants buys them. Then they pare back the expenses, hide it in their balance sheet, and keep everyone thinking AGI is just around the corner.

          • hexaflexagonbear [he/him]@hexbear.net · 17 points · 4 months ago

            It’s certainly possible, but I don’t think any of the tech giants are in a position to do that today. Google, Microsoft, and Amazon are in a cost-cutting cycle, and Meta’s C-suite is probably on a short leash after the metaverse boondoggle. Apple is the most likely one because they’re generally behind everyone else across all ML products, but especially LLMs; afaik, though, they’re bracing for their first drop in sales in 15 years, so buying OpenAI might be a tough pitch.

  • makotech222 [he/him]@hexbear.net · 63 points (3 down) · 4 months ago

    I hate when people say ‘LLMs have legitimate uses, but…’. NO! THEY DON’T! It’s entirely a platform for building scams! It should be burnt to the ground entirely.

    • charly4994 [she/her, comrade/them]@hexbear.net · 46 points · 4 months ago

      But then how will people write 20 cover letters a day to keep up with the increasing rate of instant rejections?

      Saw a really depressing ad at work the other day where Google was advertising their LLM: some person asking it to write a letter from their daughter to an athlete, bragging about how she’ll break the athlete’s record one day. They couch it in “here’s a draft,” but it’s just so bleak. The idea that a child so excited about doing a sport, dreaming of going to the Olympics and getting a world record, can’t just write a bit of a clumsy letter expressing herself to her hero is beyond depressing. Writing swill for automated systems that are going to reject you anyway is one thing, but the idea that they think this is a legitimate use of these models just highlights how obnoxiously out of touch they are.

      How do we learn and grow as people and find our own writing voices if we don’t write some of the most cringe shit imaginable when we’re young? I wrote a weird letter to Emma Watson in middle school; nobody ever read it, but it was a learning experience and made me actually have to think about my own feelings. These techbros have to have been grown in vats.

      • LocalOaf [they/them, ze/hir]@hexbear.net · 44 points · 4 months ago

        I’ve hesitated to ever write anything about it, thinking it’d come across as too yells-at-cloud or Luddite, but this comment kind of inspired me to flesh out something that’s been simmering in the back of my head ever since LLMs became the latest fad after the NFT boom.

        One of the most unnerving things to me about “AI” in the common understanding is that its entire hype cycle and main use cases are tacit admissions that the pre-“AI” professional and academic standards were perfunctory hoop-jumping bullshit for joining the professional managerial class, and that its “artistic” uses are almost entirely taken up by people with zero artistic sensibilities or by weirdo porno sickos.

        All of it betrays a deep cynicism about the status quo, where what could have been heartfelt but clumsy writing by young students, or by the athlete in your example, is unknowingly robbed of its agency and of the humanizing future of looking back on clunky immature writing as a personal marker of growth. They’re just hoops to jump through to get whatever degree or accolade you’re seeking, with whatever personal growth those achievements originally meant stripped down to “achieving them is good because it advances your career and earning potential.” Techbros’ most fawning and optimistic pitches of “AI” and “The Singularity” instead read to me as the grimmest and most alienating version of neoliberal “end of history” horseshit, where even art and language themselves are reduced to SEO-marketized, min-maxed rat races.

        I hope this doesn’t sound too a-guy but I had to get that rant out

        Maybe I’ll expand that into something

          • LocalOaf [they/them, ze/hir]@hexbear.net · 2 points · 4 months ago

            I’m barely better than meemaw when it comes to tech literacy. What’s the best platform for stuff like that? Is Medium bad? I’ve installed a Linux distro before, but basically I just want to rant and take pictures of my cats clueless

            • Medium is also “bad,” but it does put your posts out into an algorithm. Kiiiind of. Everyone ends up deleting them. I just use long Mastodon/fork posts (I make a lot of accounts on every ActivityPub server, tbqh, just for gimmicks and things), but 5,000 is not a lot of characters, so I also link to Firefish Pages. It’s easy to run a server with a 50,000-character limit for personal use at least, which is a bit better for longform.

              I think the best option on ActivityPub is https://writefreely.org/. You can also generally find a way to seamlessly retweet things like Lemmy linkposts or Writefreely blogs on Mastodon/Firefish or whatever fork.

              Not very technically savvy at all over here it’s just pretty online

                • They’re all going to make you repost them somewhere else anyways. Once you break into ActivityPub posting it’s very good, just pretty hard to dodge the whole Ukraine net, so it’s not bad itself. But that’s not even what I’m saying: you can always post from Writefreely or Lemmy back to Reddit or Twitter or whatever, and it ensures your real post won’t be deleted. You keep control of part of your data on something self-hosted, or friend-hosted on a cloud service.

                  I found Substack’s editor impossible to paste into, BTW, and had other technical issues. But Medium and Substack do kiiind of offer some social media opportunities themselves, though they mostly promote dumb crap and Taibbi, respectively.

    • autismdragon [he/him, they/them]@hexbear.net · 22 points · 4 months ago

      So the emotional resonance I felt when I asked ChatGPT to write me a song about my experiences still loving the parent that abused me was what to you?

      Like the results were objectively artless glurge of course but I needed that in that moment.

      • RyanGosling [none/use name]@hexbear.net · 16 points · 4 months ago (edited)

        I mean, this is exactly part of the reason they’re going bankrupt, which is good, so you should keep doing it. Companies have been using other forms of AI with some success, whereas LLMs just regurgitate too much random fake information for anyone serious to use them professionally.

        If it goes under, use open-source LLMs, which have been steadily improving and are nearly surpassing the proprietary ones.

    • bumpusoot [any]@hexbear.net · 17 points · 4 months ago (edited)

      I promise this isn’t true. AI is absolutely a scam in the sense that it’s overhyped as fuck, but LLMs are frequently of practical use to me when doing basically anything technical. They have helped me solve real-life problems that materially help others.

      • TrashGoblin [he/him, they/them]@hexbear.net · 6 points · 4 months ago

        As a software developer with close to 30 years of experience, I find it continually astonishing when people say LLMs are useful to them for technical stuff. I already spend too much of my life debugging code I didn’t write. I don’t need to automatically churn out more technical debt to be responsible for!

        • bumpusoot [any]@hexbear.net · 4 points · 4 months ago (edited)

          I don’t work in actual software development, though I do a little of it amongst other work.

          When I need to slop out a one-time snippet or short script to do something, which I have to do like 10 times a day, it takes me like 3-20 minutes. ChatGPT 4 does it near-perfectly, takes one minute, and usually teaches me something on the way.

          Plus when I need to work out how the fuck GDB works to debug shit, it’s an absolute lifesaver. The manual is very long and remembering all the memory examination commands is hard.

          If you’re ever working on code over ~100 lines long, then I basically agree, as the output takes massive debugging and is poorly factored to the point of being worthless. But for arcane, well-documented commands (i.e. obscure programming languages and Linux tools) and short blasts of code, it’s genuinely, incredibly useful on a daily basis.
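          A hypothetical example of the kind of throwaway snippet meant here (a sketch of the genre, not actual LLM output): plan a bulk rename of log files, written as a pure function so the plan can be eyeballed before anything touches the disk.

```python
import re

def plan_renames(names: list[str]) -> list[tuple[str, str]]:
    """Return (old, new) pairs renaming 'log_DDMMYYYY.txt' to 'YYYY-MM-DD.log'.

    Non-matching filenames are left out of the plan entirely.
    """
    renames = []
    for name in names:
        m = re.fullmatch(r"log_(\d{2})(\d{2})(\d{4})\.txt", name)
        if m:
            dd, mm, yyyy = m.groups()
            renames.append((name, f"{yyyy}-{mm}-{dd}.log"))
    return renames
```

          Separating the planning from the `os.rename` calls is exactly the "don't blindly run it" habit the comment above describes.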

        • Tabitha ☢️ [she/her]@hexbear.net · 3 points · 4 months ago

          The only software-developer thing ChatGPT does exceptionally well is giving 101-level answers to general questions/requests and reading back paraphrased StackOverflow results with nearly Google levels of reliability.

          Where ChatGPT really 100x’s a person’s output is when you’re trying to generate shitloads of spam text: automated posting of unique comments that use the post/thread/blog/video’s context and existing comments to appear relevant while still pushing a narrative or shilling a product, or building a proxy so that for every page someone visits on your website, you automatically reword (plagiarize) another specific website’s article and then add your own ads.

        • Rexios@lemm.ee · 2 points · 4 months ago (edited)

          Idk, you probably sound like people did when search engines first started getting popular. If you can’t learn how to get good output from an LLM, you might get left behind. I never use LLMs for large chunks of code, just snippets, and it’s great for that. It’s just like StackOverflow: don’t blindly copy shit without understanding what’s actually going on. You have fun writing boilerplate code; I’m never going back to hand-writing that shit.

  • Postletarian [none/use name]@hexbear.net · 40 points (1 down) · 4 months ago

    As far as “AI” goes, it’s here to stay. As for OpenAI, they will probably be bought up by one of the big players, as is usually the case with these companies.

    • ☆ Yσɠƚԋσʂ ☆ (OP) · 35 points · 4 months ago

      I agree that this tech has lots of legitimate uses, and it’s actually good for the hype cycle to end early so people can get back to figuring out how to apply this stuff where it makes sense. LLMs also managed to suck up all the air in the room, but I expect the real value is going to come from using them as a component in larger systems utilizing different techniques.

      • QuillcrestFalconer [he/him]@hexbear.net · 15 points (1 down) · 4 months ago

        Yeah but integrating LLMs with other systems is already happening.

        The most recent case is out of DeepMind, where they managed to get a silver-medalist score at the International Mathematical Olympiad (IMO) by combining an LLM with a formal verification language (Lean), plus synthetic data and reinforcement learning. Although I think they had to manually formalize the problems before feeding them to the algorithm, and it also took several days to solve the problems (except for one that took minutes), so there’s still a lot of room for improvement.

        • ☆ Yσɠƚԋσʂ ☆ (OP) · 9 points · 4 months ago

          Sure, but you can do a lot more than that. You could combine LLMs as part of a bigger system of different kinds of agents, each specializing in different things, similarly to the way different parts of the brain focus on solving different types of problems. Sort of along the lines of what this article is describing: https://archive.ph/odeBU
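          A toy sketch of that kind of composition (the agent names here are hypothetical stand-ins, not anything from the linked article): a coordinator routes each sub-task to a specialist component and falls back to the LLM only for free-form language work.

```python
# Each "agent" is a stand-in for a real component: a symbolic solver,
# a retrieval system, a language model, etc.
def math_agent(task: str) -> str:
    return f"solved: {task}"       # e.g. a CAS or theorem prover

def retrieval_agent(task: str) -> str:
    return f"retrieved: {task}"    # e.g. a search index

def llm_agent(task: str) -> str:
    return f"drafted: {task}"      # e.g. a call to a language model

# The coordinator owns the routing table; the LLM is just one entry point.
AGENTS = {"math": math_agent, "search": retrieval_agent}

def dispatch(task_type: str, task: str) -> str:
    """Route to a specialist when one exists; fall back to the LLM."""
    return AGENTS.get(task_type, llm_agent)(task)
```

          The point of the design is that correctness-critical work goes to components that can actually guarantee it, while the LLM handles the parts where fuzziness is acceptable.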

          • quarrk [he/him]@hexbear.net · 2 points · 4 months ago

            It’s kind of like how graphics cards are used to optimize specific repeated computations but not used for general computation

            • ☆ Yσɠƚԋσʂ ☆ (OP) · 2 points · 4 months ago

              Good analogy, it’s a tool for solving a fairly narrow problem in a particular domain.

  • Tachanka [comrade/them]@hexbear.net · 24 points · 4 months ago (edited)

    Big holders with insider information switch to short positions to make money during the crash by putting their shares up as collateral to investment banks in exchange for loans; the bubble bursts; smaller investors lose money; the government steps in and bails them out because they’re “too big to fail”; the torment nexus continues humming along.

  • axont [she/her, comrade/them]@hexbear.net · 24 points · 4 months ago

    Is this because AI LLMs don’t do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they’re a conversational version of a Google search. I haven’t seen a single thing they do that a person would need or want.

    Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?

    • ☆ Yσɠƚԋσʂ ☆ (OP) · 13 points · 4 months ago

      I think there are legitimate uses for this tech, but they’re pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can’t be guaranteed to produce reasonably correct results then it’s not really improving productivity in a meaningful way.

      I find this stuff is great in cases where you already have domain knowledge and maybe want to bounce ideas off it; the output it generates can stimulate an idea in your head. Whether it understands what it’s outputting really doesn’t matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and that can be faster than googling.

      We’ll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.

      • TrashGoblin [he/him, they/them]@hexbear.net · 2 points · 4 months ago (edited)

        We might eventually get to a point where LLMs are a useful conversational user interface for systems that are actually intrinsically useful, like expert systems, but it will still be hard to justify their energy cost for such a trivial benefit.

        • ☆ Yσɠƚԋσʂ ☆ (OP) · 2 points · 4 months ago

          The costs of operation aren’t intrinsic, though. There’s already a lot of progress in bringing computational costs down, and I imagine we’ll see much more of that going forward. Here’s one example of a new technique resulting in cost reductions of over 85%: https://lmsys.org/blog/2024-07-01-routellm/
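          As a rough illustration of the routing idea behind that link (a toy heuristic, not RouteLLM’s actual API or scoring method): estimate each query’s difficulty and send only the hard ones to the expensive model, so the average cost per query drops while quality is preserved where it matters.

```python
def estimate_difficulty(query: str) -> float:
    """Crude stand-in difficulty score in [0, 1].

    Real routers train a classifier on preference data; this just uses
    query length plus a couple of keyword triggers as a placeholder.
    """
    score = min(len(query) / 500, 1.0)
    if "stack trace" in query.lower() or "prove" in query.lower():
        score = max(score, 0.8)
    return score

def route(query: str, threshold: float = 0.5) -> str:
    """Pick a model tier: cheap by default, expensive only when needed."""
    if estimate_difficulty(query) >= threshold:
        return "expensive-model"
    return "cheap-model"
```

          The cost savings come from the fact that most real-world queries score below the threshold and never touch the expensive tier.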

    • autism_2 [any, it/its]@hexbear.net · 12 points · 4 months ago (edited)

      I’ve been thinking AI generated dialogue in Animal Crossing would be an improvement over the 2020 game.

      To clarify I’m not wanting the writers at the animal crossing factory to be replaced with ChatGPT. Having conversations that are generated in real time in addition to the animals’ normal dialogue just sounds like fun. Also I want them to be catty again because I like drama.

      • fox [comrade/them]@hexbear.net · 15 points · 4 months ago (edited)

        Nah, something about AI dialogue is just soulless and dull. Instantly uninteresting. Same reason I don’t read the AI slop being published in ebooks. It has no authorial intent and no personality. It isn’t even trying to entertain me. It’s worse than reading marketing emails because at least those have a purpose.

        • It depends on the training data. Once you use all data available, you get the most average output possible. If you limit your training data you can partially avoid the soullessness, but it’s more unhinged and buggy.

    • Owl [he/him]@hexbear.net · 9 points · 4 months ago

      The LLM characters will send you on a quest, and then you’ll go do it, and then you’ll come back and they won’t know you did it and won’t be able to give you a reward, because the game doesn’t know the LLM made up a quest, and doesn’t have a way to detect that you completed the thing that was made up.

    • TrashGoblin [he/him, they/them]@hexbear.net · 4 points · 4 months ago

      Cory Doctorow has a good write-up on the reverse-centaur problem and why there’s no foreseeable way for LLMs to be profitable. Because they’re error-prone, LLMs are really only suited to low-stakes uses, and people have found lots of low-stakes, low-value uses for them. But they need high-value use cases to be profitable, and all of the high-value use cases anyone has identified are also high-stakes.