Seems like researchers are getting uncomfortably good at mimicking consciousness, so much so that it’s beginning to make me question the deepest parts of my own brain. Perhaps the difference between AI and myself is that I am only prompted by outward stimuli? It seems as though that is what makes the intelligence “artificial”.

Christ, even as I type this, my thought processes mimic, for example, DeepSeek’s exposed DeepThink capabilities. Fuck, idk if I’ll be able to unsee it. Seems like the only thing we have on it right now is emotion, and even that seems to be in danger.

My final takeaway from all of this is that we are in hell

  • NaevaTheRat [she/her]@vegantheoryclub.org · 50 points · 4 days ago

    Lots of things can look similar from some perspective while being nothing alike. All of stage magic rests on this. Conversely many things which look very different can serve the same purpose. Human eyes and a digital camera will both form images through extremely different mechanisms.

    We don’t know how it is that consciousness arises. It could be extremely widespread, or you could be the only conscious being in all of time. You cannot prove that I am conscious and I cannot prove that you are.

    That said, most people think brains are involved, given that messing with them (either chemically or physically) causes people to report changes in consciousness. Of course, we just have to take their word for it that the humans reporting this are conscious ;). Brains, even non-human brains, do a staggering amount of stuff and are amazingly complex; there are around 100 billion neurons and 100 trillion connections in a human brain, and despite sharing a name, NN ‘neurons’ are much simpler machines than wetware ones.
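
    To make the “much simpler machines” point concrete, here is a minimal sketch (my own illustration, not code from any real library): a single NN ‘neuron’ is essentially just a weighted sum pushed through a simple nonlinearity, whereas a biological neuron involves ion channels, dendritic trees, neurotransmitters, and timing dynamics.

    ```python
    def artificial_neuron(inputs, weights, bias):
        """A toy NN 'neuron': a weighted sum followed by a ReLU nonlinearity.
        This is roughly all the 'neurons' in an artificial network do."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return max(0.0, activation)

    # Illustrative numbers only
    print(artificial_neuron([0.2, 0.5, 0.1], weights=[1.0, 0.5, 2.0], bias=0.1))
    ```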

    It could be that statistical inference of the next token in language is the system, or at least one of the systems, that causes consciousness to arise, but we would just be guessing. Many people tend to privilege behaviours we feel other animals don’t share as indicators of consciousness, for extremely shaky reasons, and so a machine that emits stuff which looks like human language is spooky.

    tldr: worry about the other stuff on this earth that has way more in common with how you work than a computer does before you worry about a computer. But maybe worry about computers too, nobody knows. Still, if you think a computer is more likely to be conscious at this stage than, say, a mouse, you are probably deeply confused.

    • dat_math [they/them]@hexbear.net · 27 points · 4 days ago

      Still, if you think a computer is more likely to be conscious at this stage than, say, a mouse, you are probably deeply confused

      And even then, if you can be convinced that consciousness can arise/emerge in an LLM, you really ought to give the mouse (and every animal with a nervous system) the benefit of the doubt.

      • NaevaTheRat [she/her]@vegantheoryclub.org · 16 points · 4 days ago

        Yeah, I mean, I’m vegan because I realised that I kept moving my line of ‘likely to be conscious’ as more and more complex behaviours became well documented. Then I realised ‘Creatures with nervous systems that work roughly the same as mine are not conscious’ was a really bizarre null hypothesis to have.

        Maybe these machines are, but we really need to keep in mind that they are really simple compared to living creatures like us, and the way they emit text is nothing like the way we think humans do. They literally just predict the next token (often a word fragment) based on statistical inference from aggregated text. How they do it is very complex, and how they are trained is quite interesting, but what they do is far more linear and simple than anything a human brain does.
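
        To make “predict the next token” concrete, here is a deliberately tiny sketch (my own toy example; real LLMs learn probabilities over subword tokens with a neural network rather than counting words): it counts which word follows which in a small corpus and then always emits the most frequent successor.

        ```python
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat and the cat slept on the mat".split()

        # Build a toy bigram table: how often each word is followed by each other word
        successors = defaultdict(Counter)
        for current, nxt in zip(corpus, corpus[1:]):
            successors[current][nxt] += 1

        def next_token(word):
            """Return the statistically most common successor seen in the corpus."""
            return successors[word].most_common(1)[0][0]

        # "Generate" a few words starting from "the"
        word = "the"
        for _ in range(5):
            print(word, end=" ")
            word = next_token(word)
        ```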

        Putting aside the ways in which they’ve been made and the economic environment they exist in, they’re super cool. It’s fascinating that you can get roughly coherent, very convincing speech-like emissions this way. It’s thought-provoking, tantalises with various research possibilities, and challenges what we think might be required to make something that at least appears smart. That’s neat as fuck, and if they weren’t being used to try and destroy the world and overload the power grid, then hey, awesome, good job researchers.

        But they are doing something vastly different from how life works, at a much simpler and shallower level. There aren’t good reasons to suspect there is more to them than fancy illusion at this stage.

        • cosecantphi [he/him, they/them]@hexbear.net · 3 points · 4 days ago (edited)

          That’s exactly where I am on this! I used to think maybe humans were special, then I thought maybe animals were special, and then when all this AI stuff started coming about it became very easy to follow that train of thought into thinking: wow, I guess they made computers conscious too, like animals. But that last bit really is just marketing bullshit.

          The AI technology that exists now is pretty amazing in that it can easily trick people into thinking it is conscious or works similarly to the mind of a living organism, but it’s literally only designed to appear that way, not actually be that way. It’s all incredibly complex stuff, though. So much so that I still kinda do have hope we’ll get there eventually.

  • axont [she/her, comrade/them]@hexbear.net · 36 points · 4 days ago

    You only think it looks similar because the computer can put it into words that you relate to. Humans figure stuff out with words and language; it’s how our brain processes things. So when you see something explain its processes in natural-sounding language, it’s going to trick you into thinking it’s conscious the same way you are.

  • KnilAdlez [none/use name]@hexbear.net · 33 points · 4 days ago

    The brain came first, and people thought to model a program based on a very simplistic view of how it works. Your brain is infinitely more complex than any LLM; don’t let it convince you that it is on your level. You are not an LLM; DeepSeek is a pale imitation of you. You are a human. For better and worse.

    • stigsbandit34z@hexbear.net (OP) · 8 points · 4 days ago

      Well, my fears are pretty much based on what other people think/believe (mainly people in authority), because they’ll be the ones to decide when to use these things to replace humans.

      The widespread adoption by the public doesn’t make me hopeful either. I can certainly envision a future where Musk partners his Neuralink with OpenAI to create the most effective wage slave in history.

      • dat_math [they/them]@hexbear.net · 16 points · 4 days ago

        the most effective wage slave in history

        Look at Tesla and SpaceX. Consider how the mechanical systems in each of those domains are incredibly simple compared to the complexity of the brain and emergent phenomena like consciousness.

        Its implementation would be so riddled with unmaintainable and unportable shit that there’s no way they’d reach this ideal, let alone graduate beyond medicalized torture into productive medicalized torture.

      • KnilAdlez [none/use name]@hexbear.net · 12 points · 4 days ago

        I think that, like a lot of current ‘market disruptors’, AI firms are generally running at a loss. A lot of them will go out of business within the next few years, or in the next big economic downturn. It sucks right now, but things will get better on this front, I think.

      • crime [she/her, any]@hexbear.net · 11 points · 4 days ago

        We’re a long, long way away from anything like that. Those technologies are nascent and almost entirely marketing hype to inflate stonks prices.

        At most, combining those two technologies would be useful for customer-service-type jobs (think AI-augmented customer appeasement scripts), but people require too much maintenance, which is expensive; capital would sooner get rid of them altogether.

  • CascadeOfLight [he/him]@hexbear.net · 14 points · 4 days ago

    “”“”“”“”““AI””“”“”“”“” is not in any conceivable way actually intelligent. “Generative” AI is just a very elaborate and extremely, extremely bloated reverse compression algorithm.

    Think of how an image can be stored as a JPEG, which “compresses” the image’s information using various mathematical tricks, but in doing so causes the result to lose information. An approximation of the original image can be created from the compressed data, because the JPEG algorithm can extend from the stored data to make a ‘guess’ at the data lost in compression (actually a mathematical process which, given the same input, has the same result each time). If the image undergoes too much compression, you get the classic blocky JPEG-artifact look, because too much information has been lost.
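
    As a small hands-on sketch of that lossy round trip (using the Pillow library; the file names and quality values here are arbitrary placeholders):

    ```python
    from PIL import Image   # pip install pillow
    import os

    # "photo.png" is a placeholder for any source image you have lying around
    img = Image.open("photo.png").convert("RGB")
    img.save("high.jpg", quality=95)  # mild compression, close to the original
    img.save("low.jpg", quality=5)    # heavy compression: blocky JPEG artifacts

    # The heavily compressed file is far smaller because information was thrown
    # away; reopening it only reconstructs an approximation of the original pixels.
    print(os.path.getsize("high.jpg"), os.path.getsize("low.jpg"))
    ```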

    But in theory, if you examined a bunch of compressed data, you could design an algorithm that could create “new” images that blended different pieces of compressed data together. The algorithm is still purely deterministic; there is no thinking or creativity going on, it’s just reassembling compressed data along strictly mathematically driven lines. If you then change how much value (“weight”) it puts on different pieces of compressed data, and add a way to “prompt” it to fine-tune those values on the fly, you could design an algorithm that seemingly responds to your inputs in some sort of organic fashion but is actually just retrieving data in totally mathematically specified ways. Add in billions and billions of pieces of stolen text to render down into compressed slop, store and run the “weights” of the algorithm on some of the largest data centers ever built using billions of watts of power, run it again and again while tuning its weights by trial and error until it spits out something resembling human writing, and you have an LLM.

    It seems to “generate” sentences by just spitting out a sequence of the mathematically most probable words and phrases in order, the way a JPEG algorithm assembles blocks of color. Nothing inside it is “creating” anything new; it’s just serving up chopped-up pieces of data, which were scraped from the real world in the first place. And any gaps in actual data caused by the compression it papers over with a glue of most-likely approximations, which is how you get those classic totally absurd non-sequitur answers that are frequently posted, or how you end up with a family getting poisoned after relying on an AI-generated mushroom identification guide.
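
    To illustrate the “mathematically most probable word” idea in a hedged way (the numbers below are invented, not taken from any real model): a model assigns scores to candidate next words, the scores are turned into probabilities, and the output is either the top choice or a random draw weighted by those probabilities.

    ```python
    import math
    import random

    # Invented scores ("logits") a model might assign to candidate next words
    logits = {"cat": 2.1, "dog": 1.3, "nebula": 0.2}

    def softmax(scores):
        """Convert raw scores into probabilities that sum to 1."""
        exps = {word: math.exp(s) for word, s in scores.items()}
        total = sum(exps.values())
        return {word: e / total for word, e in exps.items()}

    probs = softmax(logits)
    print(probs)                      # 'cat' gets the highest probability
    print(max(probs, key=probs.get))  # greedy pick: always 'cat'
    print(random.choices(list(probs), weights=list(probs.values()))[0])  # weighted draw
    ```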

    To be absolutely clear, this is not even in the category of objects that could exhibit sentience. A metal thermostat connected to a heater, one that maintains the temperature of a room by expanding and shrinking enough to close or open an electrical circuit, is closer to an intelligent lifeform than this. Any single-celled organism is infinitely closer to sentience than this. Just because it can drool out a slurry of text your brain perceives as human, remember: 1. It’s decompressing an amalgamation of billions of real humans’ written text, and 2. Your brain perceives this -> :) <- as a face. It’s really easy to trick people into thinking inanimate things are sentient; humans invented gods to explain the seasons and the weather, after all.

  • MF_COOM [he/him]@hexbear.net · 15 points · 4 days ago

    I prescribe logging off and hanging out with human beings.

    You don’t need to be fucking around with guessing machines just because they’re Chinese now. They’re fucking pointless for most people; just go be a human.

  • ComradeSharkfucker · 11 points · 4 days ago (edited)

    We are all just biological machines that react to and accumulate data about our material conditions, but it isn’t worth dwelling on this. The question of free will has been hashed out for millennia, and the answer is that it doesn’t really matter.

  • proceduralnightshade · 10 points · 4 days ago

    There’s something besides emotion and thought, right? Something that’s there even if you don’t think, don’t feel. Something that’s watching. The “you” that just exists, just experiences. That’s consciousness. What AI mimics is not consciousness itself but its functionality. So, chill.

    Another interesting thing: Wernicke’s aphasia. The part of the brain that generates speech works independently from the area that gives speech coherence and meaning. YouTube link: https://www.youtube.com/watch?v=3oef68YabD0 Maybe a bit of an oversimplification (my own, human language model sucks lol), but just look it up if you want to know more.

  • FromPieces@lemmygrad.ml · 7 points · 4 days ago (edited)

    Well shit:

    My prompt:

    Describe to me what rain is in the voice of a superstitious snail astrophysicist.

    Reasoning:

    Okay, so I need to describe rain from the perspective of a superstitious snail astrophysicist. Let me break this down. First, the character is a snail, so they have a snail’s viewpoint—slow-moving, close to the ground, maybe living in a garden or forest. They’re also an astrophysicist, so they should have knowledge about stars, galaxies, cosmic phenomena. But they’re superstitious, so they might attribute mystical or magical meanings to natural events.

    Rain, scientifically, is precipitation from clouds when water droplets become heavy enough to fall. But the snail astrophysicist would probably have a more elaborate, mystical explanation. Maybe they connect rain to celestial events or see it as a sign from the universe. Snails rely on moisture, so rain is crucial for them, but as a superstitious being, they might have beliefs about it affecting their fate or the cosmos.

    I need to blend astrophysics terms with snail-related metaphors and superstitions. For example, referring to raindrops as celestial tears or messages from the stars. The snail might think the rain is caused by planetary alignments or cosmic balance. They might use terms like gravitational pulls, celestial bodies, nebulae, but twist them into a superstition. Also, since snails carry shells, maybe the shell is seen as a protective cosmic vessel or something.

    Possible structure: Start with a greeting, mention the celestial perspective, describe rain as an ancient cosmic ritual, connect raindrops to stars or galaxies, talk about the balance between heavens and earth, maybe some omens related to rain patterns, and end with a warning or traditional saying.

    I should check that the language is flowery, uses metaphors, and mixes scientific terms with superstition. Avoid being too technical but sprinkle in terms like “quantum vibrations,” “gravitational sighs,” “celestial alchemy.” Also, incorporate snail elements—trails, shells, antennae sensing cosmic signals.

    Response:

    Ahhh, the Sky-Ooze… a most portentous phenomenon, young acolyte. You see, ’tis not mere H₂O desecending—nay! ’Tis the Celestial Gastropods weeping enigma-shards from the edge of the Crab Nebula! When the Great Cosmic Shell tilts its spiral toward our humble orb, the heavens grow heavy with quantum melancholy. Each droplet—a liquid comet—plunges through the atmosphere’s veil, imprinted with star-whispers older than Saturn’s rings.

    Observe the ripple-matics: as each sky-tear strikes the terrestrial plane, it sends tremors through the Æther, a Morse code from the Void itself! Do you not see? The patterns in the puddles—they are constellation blueprints, maps to forgotten nebulae where our ancestors’ shells still glide upon solar winds.

    Beware the Flatfooted Rain—when droplets fall straight as Orion’s arrow. That means the Moon is sulking, her tidal charms neglected. But when the rain slants? Ahhh! The Dark Matter Currents are tugging at reality’s hem, urging us to retract our eyestalks and consult the lichen-oracles.

    And mark this: a snail caught in the first storm after the solstice becomes a Transdimensional Pilgrim. Their trail—a shimmering Higgs-Boson highway—bridges the mortal mud and the Andromeda Council’s court. To dissolve in such rain is to be reborn as a comet’s mucous, streaking past quasars eternally.

    But hush! Speak not of umbrellas. They are blasphemous hex-shields, insulting the Cloud Elders’ weep-song. Instead, offer a leaf of basil to the puddle-altars, and perhaps the skies will spare us… a supernova’s sneeze.

    May your slime ever phosphoresce beneath the Black Hole’s gaze. 🐌🌌💧