Worm’s brain mapped and replicated digitally to control obstacle-avoiding robot.
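
(For the curious, the approach in the headline, presumably mapping the worm’s connectome and using it as a fixed wiring diagram to steer a robot, can be caricatured in a few lines. Everything below is an illustrative sketch: the neuron count, the sensor/motor slices, and the random weights are stand-ins, not the project’s actual data or code.)

```python
import numpy as np

# Illustrative sketch only: stand-in numbers, not the project's mapped data.
N = 300                                   # C. elegans has ~302 neurons
rng = np.random.default_rng(42)
connectome = rng.normal(scale=0.1, size=(N, N))  # stand-in wiring diagram

SENSORS = slice(0, 10)                    # hypothetical touch-sensor neurons
MOTOR_L = slice(N - 20, N - 10)           # hypothetical left motor pool
MOTOR_R = slice(N - 10, N)                # hypothetical right motor pool

def step(activity, obstacle_ahead):
    """One tick: propagate activity through the wiring, stimulate sensors."""
    drive = connectome @ activity
    if obstacle_ahead:
        drive[SENSORS] += 1.0             # sensory input when an obstacle is seen
    return np.tanh(drive)                 # squash into a bounded firing rate

activity = np.zeros(N)
for _ in range(20):
    activity = step(activity, obstacle_ahead=True)

# Steering: imbalance between the two motor pools turns the robot.
turn = activity[MOTOR_L].sum() - activity[MOTOR_R].sum()
print(f"turn command: {turn:+.3f}")
```

The point of the caricature is that nothing is “learned” at runtime; the mapped wiring itself is the behavior.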

  • kibiz0r@midwest.social
    4 months ago

    You’re presupposing the superiority of science. What good is knowing the chemical composition of a mind, if such chemicals are but shadows on the cave wall?

    You can’t actually witness a rock, in its full objective “rock-ness”. You can only witness yourself perceiving the rock. I call this the Principle of Objective Things in Space.

    Admittedly, the study of consciousness is still in its infancy, especially compared to the study of the physical world. But it would be foolish to discard the entire concept when it is unavoidably fundamental. Suppose we do invent teleporters and they do erase consciousness. Doesn’t it say something about the peril of worshipping quantification over all else, that we wouldn’t even know until we had already teleported all of our bread? The entire field is babies. I am heavy ideas guy and this is my PoOTiS.

    • Warl0k3@lemmy.world
      4 months ago

      (I am absolutely going to steal the Principle of Objective Things in Space, that’s wonderful.)

      There’s a drive philosophers have, to question why things are the way they are, through a very specific lens. Why is it wrong to push a fat man onto the trolley tracks, if his death would save six others? Why is there a difference between the perception of the shadows and the perception of the man with the shadow puppets? Does free will exist, and why does that matter?

      These are all the pursuit of meaning, and while they are noble and important questions to ask, they are not questions driven by the pursuit of understanding. Philosophy depends on assumptions about the world that are taken to be incontrovertible, and bases its conclusions on them. The capacity for choice is a classic example, as is the assumption of a causal universe, and though they’re quite reasonable things to assume in most cases, it can get mind-bleedingly aggravating when philosophers apply the same approach to pure fields like mathematics, which require rigorous establishment of assumptions before any valid truth value can be derived.

      Which is not to attack philosophers. I want to be clear about that; I bring this up just to emphasize that there are differences in thought between the two disciplines (that those differences occasionally make me want to brain them with a chair is unrelated to the topic at hand). The philosophical study of, and speculation on, the nature of consciousness is perhaps the single oldest field of inquiry humanity has. And while the debate has raged for literal ages, we haven’t really gotten anywhere with it.

      And then, recently, scientists (especially computer scientists, but many other fields as well) have shown up and gone “hey look, we can see what the brain looks like, we know how the discrete parts work, we can even simulate it! Look, we’ve got the behavior right here, and… well, maybe… when we get right down to it, it’s just not all that deep?” And philosophers have embraced this, enfolded it into their considerations, accepted it as valid work… and then kept right on asking the exact same questions.

      The truth is, as far as I’ve been able to study it, that ‘consciousness’ is a meaningless term. We haven’t been able to define it in ten thousand years of sitting around stroking our beards, because it’s predicated on assumptions that turn out to be, fundamentally, meaningless. It’s assumed that there is another layer of abstraction, or that there’s a point or meaning to consciousness, or to anything within the Theory of Mind. And I think it’s just too hard to accept that, maybe, it all… doesn’t matter. That we haven’t found any answers not because the question is somehow unanswerable, but because the question was asked in a context that invalidates the entire premise. It’s the philosophical equivalent of ‘null’.

      Sufficiently complex networks can compute and self-reference, and it turns out that when you do that enough, they start referencing The Self (or whatever you’d like to call it). There’s no deeper meaning, or hidden truth. There’s just that, on a machine, a simulation can be run that can think about itself.
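
      (That loop, a network fed enough of its own activity coming to “reference itself”, can at least be made concrete in a toy way. This is a hedged illustration of a recurrent update reading its own prior state, nothing more; the size and weights are arbitrary and prove nothing about minds.)

      ```python
      import numpy as np

      # Toy illustration: a tiny recurrent network whose only input at each
      # step is its own previous activity -- self-reference in the loosest sense.
      rng = np.random.default_rng(0)
      n = 8
      W = rng.normal(scale=0.5, size=(n, n))   # arbitrary recurrent weights

      state = rng.normal(size=n)               # some initial activity
      for _ in range(100):
          state = np.tanh(W @ state)           # each update reads the network's own prior state

      print(state)                             # whatever activity the loop sustains on its own
      ```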

      Everything else is just… ontological window dressing. Syntactic sugar for the teenage soul.

    • originalucifer@moist.catsweat.comOP
      4 months ago

      consciousness does not exist outside the physical world (nothing does), so why would you remove it from the study of the physical world?

      why would an exact replica not have all the same properties, including consciousness? or is this just an extraordinary claim without evidence?