Sometimes I think society conceptualizes individuality and consciousness wrong.
We are being recreated by quantum mechanics all the time. Why, with that in mind, do we consider ourselves one continuous person?
What is so different about uploading to a computer? Is the you in the computer not the same you as the one in the flesh?
Thing is, it’s impossible to know, until you try it, whether your consciousness will remain after a process or whether you will die and a new consciousness that thinks like you is created. If you continue existing, then great, but if you are replaced by a replica, anyone observing you has no way of knowing, because the replica will think that it existed prior to its creation.
If it has full memories of existing before its body was created, then is it you? I would think so.
A perfect copy of me is just as much me as I am.
A digital upload is just a less than perfect copy of a flesh substrate.
If it has full memories of existing before its body was created, then is it you? I would think so.
No, because it’s a different instance. From the copy’s perspective, it’s you and always was. From your perspective, you “go dark.” The copy thinks it’s “you,” as in the same, uninterrupted instance that you are. It just happens to be mistaken. It’s identical except for the fact that it’s a separate instance of you.
A perfect copy of me is just as much me as I am.
In a sense, I think that’s fair. It thinks it’s you, it feels like it always was you, and I think it’s fair to take some satisfaction that, if you could have that copy, at least to that copy it feels like it was always there.
A digital upload is just a less than perfect copy of a flesh substrate.
Wait, why? That may or may not be the case, but it would be for reasons that have to do with complicated science about the structural and chemical properties of those respective systems, and with investigating whether the differences between them make a difference that matters.
I can do Morse code on a sheet of paper or etch it into a stone tablet, and there’s a meaningful sense in which the code is still the same.
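That substrate point can be sketched as a toy program (the tiny Morse table and the render functions are invented purely for illustration): the two renderings differ, but the encoded message is identical in both.

```python
# Toy sketch: the same Morse encoding, rendered onto two different "substrates".
MORSE = {"S": "...", "O": "---"}

def encode(message: str) -> str:
    """Encode a message as Morse, independent of any medium."""
    return " ".join(MORSE[ch] for ch in message)

def render_paper(code: str) -> str:
    # Pretend this is ink on a sheet of paper.
    return f"[ink] {code}"

def render_stone(code: str) -> str:
    # Pretend this is a chisel on a stone tablet.
    return f"[chisel] {code}"

code = encode("SOS")
# The media differ, but the encoded information is the same.
assert render_paper(code) != render_stone(code)
assert code == "... --- ..."
```

Whether brains and computers are related the way paper and stone are here is exactly the open question, but this is the sense in which “the code is still the same.”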
Wait, why? That may or may not be the case, but it would be for reasons that have to do with complicated science about the structural and chemical properties of those respective systems, and with investigating whether the differences between them make a difference that matters.
I think it is reasonable that the chemicals, bacteria, and viruses in our body have an effect on who we are and how we act. A computer (excluding super advanced ones like those of the “modern”-era Orion’s Arm universe) would have a different effect on you than the biology I mentioned previously.
While your Morse code on the two media is the same, one program compiled separately for two separate architectures would not be. The two binaries are less than perfect copies, with ever so slight changes in how they behave.
On another note:
A perfect copy of me would act on my behalf, and I on the copy’s. I don’t think there would be any meaningful distinction between the two of me unless I was separated from the other instance for a long time and we had wildly different environments. If one body dies, I am still alive because of the other me.
The above would apply to less than perfect copies too, until they reach a threshold where I and the copy are no longer acting on each other’s behalf.
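The threshold idea can be sketched as a toy model (the `Instance` class, the divergence measure, and the `THRESHOLD` value are all made up for illustration; picking a real threshold is the hard philosophical part):

```python
import copy

class Instance:
    """Hypothetical model of one instance of a person: just a bag of memories."""
    def __init__(self, memories):
        self.memories = list(memories)

    def experience(self, event):
        self.memories.append(event)

def divergence(a: Instance, b: Instance) -> int:
    """Count memories not shared between two instances (symmetric difference)."""
    return len(set(a.memories) ^ set(b.memories))

original = Instance(["childhood", "first job"])
replica = copy.deepcopy(original)          # a perfect copy at the moment of copying
assert divergence(original, replica) == 0  # initially interchangeable

# Each instance then accrues its own experiences.
original.experience("volcano trip")
replica.experience("server rack hum")

THRESHOLD = 10  # arbitrary cutoff for "no longer acting on each other's behalf"
assert divergence(original, replica) == 2
assert divergence(original, replica) < THRESHOLD  # still "each other", for now
```

The point of the sketch is only that divergence is gradual and measurable in principle, not that any particular cutoff is the right one.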
I think it is reasonable that the chemicals, bacteria, and viruses in our body have an effect on who we are and how we act.
I understand that, but as I said, that would be for reasons that have to do with complicated science about the structural and chemical properties of those respective systems, and with investigating whether the differences between them make a difference that matters. Do clay tablets introduce copying errors you don’t have when you write down Morse code with pen and paper? Do they need some assembly-level code at the beginning because they get fed into a stone-tablet-reading machine? Does the totality of those changes change the meaning or essence of the message as it existed on pen and paper?
A computer is different from biology, of course, but it can in principle model the effects of biology, if we decide that we need those to fully represent a mind that we want to call equivalent to the “real” thing. So we wouldn’t just simulate a brain, but a brain as it would exist in a body with its environmental inputs. Unless a computer can’t model those? Which I would probably disagree with, but there is a fair argument to be had there. But the point is, I don’t think complexity of interacting environments gets you out of the space of things that can in principle be represented on a computer.
A perfect copy of me would act on my behalf, and I on the copy’s. I don’t think there would be any meaningful distinction between the two of me unless I was separated from the other instance for a long time and we had wildly different environments.
It depends on what distinction you are trying to make, and what you decide is meant by “me.” Just at the level of plain language, they are clearly different from you in that they are a different instance of you, experiencing an independent stream of consciousness. You wouldn’t necessarily coordinate; you would see different things at different times. You would be “inside” one of them and not the other, etc.
My point is that you won’t know for sure until the process is done and you either wake up in the computer or never experience anything again (or end up wherever consciousness goes after death; you can’t be sure of that either until you actually die). That’s what’s terrifying about it.
Also, an upload creates a copy, it does not cleanly transfer something, so which copy is the real you? If flesh and blood you and computer you exist simultaneously and experience different things, the two minds will diverge. Then which is the real you?
I think the great thing you’ve identified is that, even if you really did go dark, you wouldn’t be able to say so, and the copy would say everything was fine, because it inherited all your memories. Unless, say, we could do something fancy with the reconstructed copy to change it in some deliberate way, like putting a watermark over its vision saying “You are the copy!”
But, as I said before, in practical terms, I think you really can know if you were transferred in many cases, given what we do know.
For instance, for one common example, there’s the thing about a hypothetical teleporter that disintegrates you and rebuilds a copy of you somewhere else. In that case, you go dark, and it’s your copy that’s somewhere else.
For a more vulgar example, if a perfect clone of me was made on the other side of the world, and I jumped into a volcano, it’s pretty clear that I wouldn’t experience a “transfer” of myself to that new body.
And these aren’t frivolous examples, by the way. A lot of the talk of being “copied” contemplates variations on the theme of dispensing with the original and making a copy somewhere else, and the answer to whether “you” are still there comes from a mundane examination of the practical details of whatever the method is of copying.
Presumably, if we got to the point of being able to copy people, we would hopefully know enough about how actual transference works that it wouldn’t be an open question, and our answer would depend on all kinds of boring structural details about how consciousness works.
Anyway, long story short, I think we can chip away at a lot of the mystery with boringly practical answers in a way that steers clear of any encounter with a profound philosophical mystery. And while we shouldn’t be so cavalier about throwing away the Big Question (is it The Real You), I think a disposition for plain answers that focus on boring details probably helps steer clear of traps that people get stuck in.
Also, an upload creates a copy, it does not cleanly transfer something, so which copy is the real you? If flesh and blood you and computer you exist simultaneously and experience different things, the two minds will diverge. Then which is the real you?
I 100% agree. I think that’s exactly one of the traps that people get stuck in. If you exist side by side, your experiences won’t be exactly parallel. You can meaningfully answer by distinguishing between instances, at least, and noting the continuity of consciousness across time that one enjoys, while the other was recently poofed into existence. The fleshy one, that’s you. Done!
But here’s what I don’t get about the path people take in these philosophical jeremiads. What’s with ending in paralyzed fear at a question? People seem to go there, or even want to go there, like it’s the goal of the whole conversation. I’ve seen this so many times. Ending with “who is the REAL you?” or “how do you decide?” Not that those are illegitimate questions, but… it’s like you’re trying to psych someone out by entrapping them in a Criss Angel Mindfreak, using the tools of philosophy to find a way into that particularly panicked headspace. That should not be the goal. I think those are merely pit stops on the way to a continued examination of the question, not endpoints, and I think some people treat them as endpoints because they value the experience of being in that headspace, which I don’t get.
Both copies are the real me in the same way a cloned git repo is the same project. If they diverge but still consider themselves the same project, they are in fact the same project. They would just have some differences on top of a base sameness. That is how I think of it anyway.
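The cloned-repo analogy might be sketched, very loosely, like this (modeling commit histories as plain Python lists rather than real git objects): the clone is identical at copy time, the two then diverge, but the shared base history remains intact in each.

```python
# Loose sketch of the cloned-repo analogy: shared base history, divergent tips.
base_history = ["init", "add readme", "core feature"]

original = list(base_history)   # the "flesh" instance
upload = list(base_history)     # the clone: identical at the moment of copying
assert original == upload

# Each instance then accrues its own "commits".
original.append("fix typo")
upload.append("port to new runtime")

# They have diverged, yet the common ancestry is still the same "project"
# (roughly what `git merge-base` would find in a real repo).
common = [a for a, b in zip(original, upload) if a == b]
assert common == base_history
assert original != upload
```

On this picture, “which copy is the real project?” is a malformed question; what you actually have is one shared history with two live tips.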
Exactly. There’s a sense in which they are structurally the same, a sense in which you can say their identity is the same or very similar, but they are different instances. And in terms of instance they are easy to distinguish, and that is sufficient to dispense with many philosophical riddles that depend on asking which copy is “real”.
Thing is, it’s impossible to know, until you try it, whether your consciousness will remain after a process or whether you will die and a new consciousness that thinks like you is created.
Wait, what? I think there are some cases where it’s pretty obvious your consciousness wouldn’t be transferred. If your brain is (say) scanned and recreated digitally, and your original brain is destroyed, it stands to reason there wasn’t a transference.
I think you may be right that there’s an open question if you were to do a gradual, bit-by-bit transference, maybe. But as a matter of principle, we should hope that we know enough about how consciousness works to say something meaningful about it, and not just declare that by “its nature” or by “definition” it can never be known but from the inside.
If you continue existing, then great, but if you are replaced by a replica, anyone observing you has no way of knowing, because the replica will think that it existed prior to its creation.
That’s a question of who knows what, which should be distinguished from the underlying question of whether or not a genuine transference actually did happen.