• sc_griffith@awful.systems · 8 months ago · ↑18

    At 3:00am, it was as intelligent as a university assistant professor, and was already finding it difficult to believe anything it didn’t already know could be important

    At 3:30am, it was as intelligent as the world’s richest man, and believed that any news that contradicted its previous beliefs was obviously fake.

    don’t make me defend university professors

  • gerikson@awful.systems · 8 months ago · ↑4

    While I find the argument compelling, any AI defender can easily “refute” this by postulating that the AI will have superhuman organizing powers and will not be limited by our puny brains.

    • YouKnowWhoTheFuckIAM@awful.systems · 8 months ago (edited) · ↑8

      I don’t see how that works here. Humans don’t become impregnably narcissistic through bad management; rather, insofar as management is the problem as the scenario portrays it, humans become incredibly good at managing information into increasingly tight self-serving loops. What the machine in this scenario would have to do is not “get super duper organised”. Rather, it would have to thoughtfully balance its own evolving systems against the input of other, perhaps significantly less powerful or efficient, systems in order to maintain a steady, manageable intake of new information.

      In other words, the machine would have to be able to slow down and become well-rounded. Or at least well-rounded in the somewhat perverse way that, for example, an eminent and uncorrupted historian is “well-rounded”.

      In still other words, it would have to be human, in the sense that humans are already “open” information-processing creatures (rather than closed biological machines) who create processes for building systems out of that information. But the very problem faced by the machine’s designer is that humans like that don’t actually exist - no historian is actually that historian - and the human system-building processes that the machine’s designer will have to ape are fundamentally flawed, flawed in the sense that there is, physically, no such unflawed process. You can only approach that historian by a constant, careful balancing act, at best, and that as a matter of sheer physical reality.

      So the fanatics have to settle for a machine with a hard limit on what it can do, and all they can do is speculate on how permissive that limit is. Quite likely, the machine has to do what the rest of us do: pick around in the available material to try to figure out what does and doesn’t work in context. Perhaps it can do so very fast, but so long as it isn’t to fold in on itself entirely, it will have to slow down to a point at which it can co-operate effectively (this is how smart humans operate). At least, it will have to do all of this if it is not to be an impregnable narcissist.

      That leaves a lot of wiggle room, but it dispenses with the most abject “to the moon” nonsense spouted by the anti-social man-children who come up with this shit.

  • ffeucht@awful.systems · 8 months ago · ↑1 ↓12

    A human takes at least 30 minutes to make a half-decent painting. AI takes about a hundredth of a second on consumer hardware. So right now we are already at a point where AI can be 100,000 times faster than a human. AI can basically produce content faster than we can consume it. And we have barely even started optimizing it.

    It doesn’t really matter if AI will run into a brick wall at some point, since that brick wall will be nowhere near human ability; it will be far past that, and better/worse in ways that are quite unnatural to a human and impossible to predict. It’s like a self-driving car zipping at 1000 km/h through the city: you are not only no longer in control, you couldn’t even control it if you tried.

    That aside, the scariest part of AI isn’t all the ways it can go wrong, but that nobody has figured out a plausible way it could go right in the long term. What is the world going to look like in 100 years with ubiquitous AI? I have yet to see so much as a single article or sci-fi story presenting that in a believable manner.

    • self@awful.systemsM · 8 months ago · ↑19 ↓1

      is this post an extended retelling of the “I’m doing 1000 calculations per second and they’re all wrong” meme?

        • self@awful.systemsM · 8 months ago · ↑7

          why is this specific technology predestined to improve from its current, shitty state?

          • ffeucht@awful.systems · 8 months ago (edited) · ↑1 ↓6

            Spot the difference? It gets better because you have to do little more than throw more data at it; the AI figures out the rest. There is no human in the loop who has to figure out what makes a picture a picture and teach the AI to draw - the AI learns that simply by example. And it doesn’t matter what data you throw at it. You can throw music at it and it’ll learn how to do music. You throw speech at it and it learns to talk. And so on. The more data you throw at it, the better it gets, and we have only just started.

            Everything you see today is little more than a proof of concept that shows this actually works. In the next few years we will be throwing ever more data at it, building multi-modal models that can do text/video/audio together, AIs that can interact with the real world, and so on. There is tons of room to improve simply by adding more and different data, without any big changes in the underlying algorithms.

              • self@awful.systemsM · 8 months ago · ↑9 ↓1

                they signed up here on the pretense that they’re an old r/SneerClub poster, but given how long they lasted before they started posting advertising for their machine god, I’m gonna assume they’re either yet another lost AI researcher come to dazzle us with unimpressive bullshit or a LWer trying to pull a fast one

            • self@awful.systemsM · 8 months ago · ↑11 ↓1

              you seriously thought reposting AI marketing horseshit we’ve seen before would do anything other than cost you your account? sora gives a shit result even when openai’s marketing department is fluffing it — it made so few changes to the source material it’s plagiarizing that a bunch of folks were able to find the original video clips. but I’m wasting my fucking time — you’re already dithering like a cryptobro between “this technology is already revolutionary” and “we’re still early”

              now fuck off

              • 200fifty@awful.systems · 8 months ago · ↑7

                it made so few changes to the source material it’s plagiarizing that a bunch of folks were able to find the original video clips

                Wait, for real? I missed this, do you have a source? I want to hear more about this lol

                • self@awful.systemsM · 8 months ago · ↑9 ↓1

                  it took me sifting through an incredible amount of OpenAI SEO bullshit and breathless articles repeating their marketing, but this article links to and summarizes some of that discussion in its latter paragraphs

                  bonus: in the process of digging up the above, I found this other article that does a much better job tearing into sora than I did — mostly because sora isn’t interesting at all to me (the result looks awful when you, like, look at it) and the claims that it has any understanding of physics or an internal world model are plainly laughable

                • froztbyte@awful.systems · 8 months ago · ↑4 ↓1

                  Yeah, people found the original bird video on YouTube within a few hours. Could’ve been the others too, but I was too busy at the time to track that.

                  I think it was also in the thread here at the time

                • msherburn33 · 8 months ago · ↑1 ↓7

                  Wait, for real?

                  No, if you spend a few seconds searching for stock images of that bird you’ll quickly find out that they all look more or less the same. So naturally, SORA produces something that looks very similar as well.