If I'm running on a silicon computer, do I have twice as much subjective experience if my computer is twice as thick?
Why is this even a good question?
Consider a computer that was printed on a flat sheet. If we stick two of these computers (one a mirror image) together face to face, we get a thicker computer. And then if we peel them apart again, we get two thin computers! Suppose that we simulate a person using these computers. It makes sense that a person running on two thin computers has twice as much "experience" as a person running on just one (for example, in the Sleeping Beauty problem, the correct betting strategy is to bet as if the probability of making the bet in a given world is proportional to the number of thin computers). So if we take two people-computers and stick them together into one thicker person-computer, the thicker person contains twice as much "experience" as a thinner one - each of their halves has as much experience as a thin person, so they have twice as much experience.
Do I disagree? Well, I think it depends somewhat on how you cash out "experience." Consider the Sleeping Beauty problem with these computers - in the classic version, our person is asked to give their probability that they're in the world where there's one thin computer versus the world where there are two thin computers. The correct betting strategy is to bet as if you think the probability that there are two computers is 2/3 - weighting each computer equally.
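(To make that betting claim concrete, here's a minimal simulation sketch - my own illustration, not part of the original setup - in which each awakened copy buys a ticket at price p that pays 1 if it's the two-computer world; the break-even price is the "correct betting probability.")

```python
import random

# Illustrative sketch: classic Sleeping Beauty with thin computers.
# Heads -> one thin computer places one bet; tails -> two thin computers each place one bet.
# Each bet buys a ticket at price p that pays 1 if the coin came up tails.

def average_profit(p, trials=100_000):
    total = 0.0
    for _ in range(trials):
        if random.random() < 0.5:      # heads: one copy, one losing ticket
            total -= p
        else:                          # tails: two copies, two winning tickets
            total += 2 * (1 - p)
    return total / trials

for p in (0.5, 2/3, 0.75):
    print(f"ticket price {p:.3f}: average profit per flip ~ {average_profit(p):+.3f}")
```

The average profit crosses zero at p = 2/3: below that price the copies should buy, above it they should decline - which is just the thirder betting strategy.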
Now, consider altering the experiment so that either there's one thin computer, or one double computer. We have two possibilities - either the correct betting probability is 1/2 and the computers seem to have equal "experience", or we bite the bullet and say that the correct betting probability is 2/3 for a double computer, 10/11 for a 10x thicker computer, 1000/1001 for a 1000x thicker computer, etc.
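(For concreteness, one way to generate those bullet-biting numbers - my framing, not the original's - is to treat each layer of a k-times-thick computer as a separate bettor whose ticket is settled separately. The break-even price p for a ticket that pays 1 if the thick-computer world is actual then satisfies

$$\tfrac{1}{2}(-p) + \tfrac{1}{2}\,k\,(1-p) = 0 \quad\Longrightarrow\quad p = \frac{k}{k+1},$$

which gives 2/3 for k = 2, 10/11 for k = 10, and 1000/1001 for k = 1000.)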
The bullet-biting scenario is equivalent to saying that the selfish desires of the twice-thick computer are twice as important. If one computer is one person, a double computer is then two people in a box.
But of course, if you have a box with two people in it, you can knock on the side and go "hey, how many of you people are in there? I'm putting in an order for Chinese food - how many entrees should I get?" Instead, the double-thick computer is running exactly the same program as the thin computer, and will order exactly the same number of entrees. In particular, a double-thick computer will weigh selfish vs. altruistic priorities exactly as a thin computer does.
There is one exception to the previous paragraph - what if the computer is programmed to care about its own thickness, to measure it with external instruments (since introspection won't do), and to weight its desires more when it's thicker? This is certainly possible, but by putting the caring straight into the utility function, it removes any possibility that the caring is some mysterious "experience." It's just a term in the utility function - it doesn't have to be there, and in fact by default it's not. Or, heck, your robot might just as easily care more about things when the tides are high; that doesn't mean that high tides grant "experience."
The original Sleeping Beauty problem, now *that's* mysterious "experience." Ordinary computers go in; weighting each possibility by the number of computers comes out. So something must happen when you merge the two computers into a double computer that destroys that experience rather than conserving it.
What do I claim explains this? The simple fact that you only offer the double computer one bet, not two. Sure, the exact same signals go to the exact same wires in each case - except for the prior information that says the experimenter will only settle one bet, not two. In this sense, "experience" just comes from the ways in which our computer can interact with the world.
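(Spelled out: if the experimenter settles only one bet per world no matter how thick the computer is, the break-even price for the same ticket is

$$\tfrac{1}{2}(-p) + \tfrac{1}{2}(1-p) = 0 \quad\Longrightarrow\quad p = \tfrac{1}{2},$$

so the extra weight in the original problem came from the payout structure - two settled bets - not from anything inside the box.)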
So since a double-thick computer is not more selfish than a thin one (neglecting the tides), and will not expect to be a thick computer more often in the Sleeping Beauty problem, I'd say it doesn't have more "experience" than a thin computer.
EDIT: I use betting behavior as a proxy for probability here because it's easy to see which answer is correct. However, reading probabilities off betting behavior is not always valid - e.g. in the absent-minded driver problem. In the Sleeping Beauty case it only works because the payout structure is very simple. A safer way would be to derive the probabilities from the information available to the agents, which has been done elsewhere, but is harder to follow.
Who cares? In every case, the lightbulb gets changed, so the question is obviously meaningless!
or perhaps...
We can't conclude anything from the mere fact that the lightbulb was changed. The answer depends on your prior.
or even...
Jokes like this demonstrate the need for Anthropic Atheism Plus, a safe space where fallacies and know-nothing reductionism can be explored, free from malicious trolling.
and finally...
In order to finish the work of wrecking my own joke, here are some explanatory end-notes.
(1) The reference to Atheism Plus, a forum of progressives who split from the New Atheism movement, is a dig at nyan_sandwich's affiliation with neo-reaction.
(2) This whole "joke" came about because I thought your post and his post were not only stupid, but too stupid to be worth directly engaging.
(2a) For example, you seem to be saying that if two people give the same answer to a question, then there's only one person there.
(2b) Meanwhile, nyan_sandwich's rationale for eschewing anthropic reasoning is, "This reminds me way too much of souls... I don't believe in observers."
(3) In retrospect, the joke I should have made here was, "How many functionalists does it take to change a lightbulb?" (The point being that a functionalist perspective on lightbulb-changing would see no difference between one, two, or a hundred agents being responsible for it.) And I should have commented separately on the other post.
(4) Furthermore, perhaps I should concede that both posts are only half-stupid, and that the stupidity in question is learned stupidity rather than slack-jawed stupidity. Both posts do exhibit comprehension of some relatively complicated thought-experiments, even if the philosophy introduced in order to deal with them does contain some absolute howlers (see 2a, 2b, above).
(5) And of course, I'd better ostentatiously declare that I too am looking pretty foolish by this point. This is a perennial preemptive defense employed by mockers and jesters throughout history.