Worm’s brain mapped and replicated digitally to control obstacle-avoiding robot.

  • Warl0k3@lemmy.world · 4 months ago

    (I am absolutely going to steal the Principle of Objective Things in Space, that’s wonderful.)

    There’s a drive philosophers have, to question why things are the way they are, through a very specific lens. Why is it wrong to push a fat man onto the trolley tracks, if his death would save six others? Why is there a difference between the perception of the shadows and the perception of the man with the shadow puppets? Does free will exist, and why does that matter?

    These are all the pursuit of meaning, and while they are noble and important questions to ask, they are not questions driven by the pursuit of understanding. Philosophy depends on assumptions about the world that are taken to be incontrovertible, and bases its conclusions on them. The capacity for choice is a classic example, as is the assumption of a causal universe, and though they’re quite reasonable things to assume in most cases, it can get mind-bleedingly aggravating when philosophers apply the same approach to pure fields like mathematics, which require rigorous establishment of assumptions before any valid truth value can be derived.

    Which is not to attack philosophers. I want to be clear about that; I bring this up just to emphasize that there are differences in thought between the two disciplines (that those differences occasionally make me want to brain them with a chair is unrelated to the topic at hand). The philosophical study of, and speculation on, the nature of consciousness is perhaps the single oldest field of inquiry humanity has. And while the debate has raged for literal ages, we haven’t really gotten anywhere with it.

    And then, recently, scientists (especially computer scientists, but many other fields as well) have shown up and gone “hey look, we can see what the brain looks like, we know how the discrete parts work, we can even simulate it! Look, we’ve got the behavior right here, and… well, maybe… when we get right down to it, it’s just not all that deep?” And philosophers have embraced this, enfolded it into their considerations, accepted it as valid work… and then kept right on asking the exact same questions.

    The truth is, as far as I’ve been able to study it, that ‘consciousness’ is a meaningless term. We haven’t been able to define it in ten thousand years of sitting around stroking our beards, because it’s predicated on assumptions that turn out to be, fundamentally, meaningless. It’s assumed that there is another layer of abstraction, or that there’s a point or meaning to consciousness, or to anything within the Theory of Mind. And I think it’s just too hard to accept that, maybe, it all… doesn’t matter. That we haven’t found any answers not because the question is somehow unanswerable, but because the question was asked in a context that invalidates the entire premise. It’s the philosophical equivalent of ‘null’.

    Sufficiently complex networks can compute and self-reference, and it turns out that when you do that enough, they’ll start referencing The Self (or whatever you’d like to call it). There’s no deeper meaning, or hidden truth. There’s just that, on a machine, a simulation can be run that can think about itself.
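    A loose way to picture the claim (a toy sketch, nothing to do with the actual worm connectome; every name here is made up for illustration): a system is “self-referencing” in the most minimal sense whenever its own previous state is one of the inputs to its next update. No mystery ingredient, just feedback.

    ```python
    # Toy illustration: the update rule takes the system's OWN previous
    # state as input, alongside outside stimulus. That feedback loop is
    # the barest-bones form of self-reference -- everything richer is
    # (per the argument above) more of the same, scaled up.

    def step(state, stimulus):
        # Next state depends on current state (the self-referential part)
        # plus whatever the world throws at it.
        return (state * 0.5 + stimulus) % 1.0

    state = 0.1
    for stimulus in [0.3, 0.7, 0.2]:
        state = step(state, stimulus)
    ```

    Obviously nobody would call that loop conscious; the point is only that “refers to itself” is an ordinary mechanical property, not a separate metaphysical layer.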

    Everything else is just… ontological window dressing. Syntactic sugar for the teenage soul.