I don't remember how to do it well enough to explain it in detail, but the root of the problem was that people didn't yet understand how to sum convergent series. For example, 1/2 + 1/4 + 1/8 + 1/16 + ... = 1. It is discussed in some books on the philosophy of math; I remember coming across it several times. Unfortunately, a quick check of the books I have on hand didn't turn up a source.
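(If it helps, the convergence is easy to check numerically; a throwaway Python sketch, nothing more:)

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... approach 1.
total = 0.0
for n in range(1, 51):
    total += 0.5 ** n
print(total)  # within 2**-50 of 1.0
```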
That's the solution to the Achilles and the Turtle Paradox (also Zeno's), but the Arrow Paradox (in the comment you replied to) is different.
The Arrow Paradox is simply linguistic confusion, I think. Motion is a relation in space relative to different points of time; Zeno's statement that the (moving) arrow is at rest at any given instant is simply false (considered in relation to instants epsilon before or after that instant) or nonsensical (considered in enforced isolation, with no information about any other instant).
I never found the Arrow Paradox particularly compelling. For the Achilles and the Turtle Paradox I can at least see why someone might have found that confusing.
That "Engfish" essay is strange. It's right that textbooks and so on encourage students to write in a way that's impersonal and overly verbose. But it doesn't recognize the advantages of academic English. It doesn't even seem to recognize the role (or existence!) of dialects in general. Instead, it takes bad examples of academic English (the writing textbook) and suggests they should be more like bad examples of informal English (the third-grader).
This implies that some parts of your brain lead to you being conscious, while others don't.
It at least implies that some processes lead to you being conscious, while others don't. The same brain region could be involved in both conscious and unconscious processes.
They're computationally equivalent by hypothesis. The thesis of substrate independence is that as far as consciousness is concerned the side effects don't matter and that capturing the essential sameness of the "AND" computation is all that does. If you're having trouble understanding this, I can't blame you in the slightest, because it's that bizarre.
(Didn't realize this site doesn't email reply notifications, thus the delayed response.)
What I'm saying is that someone who answers "algorithms" is clearly not taking that view of substrate-independence, but they could hypothesize that only some side-effects matter. A MOSFET-brain-simulation and a desert-rocks-brain-simulation could share computational properties beyond input-output, even though the side-effects are clearly not identical.
(Not saying that I endorse that hypothesis, just that it's not the same as the "side effects don't matter" version.)
the Kolmogorov complexity of a definition of an equivalence relation which tells us that an AND gate implemented in a MOSFET is equivalent to an AND gate implemented in a neuron is equivalent to an AND gate implemented in desert rocks
Isn't that only a problem for those who answer "functions" to question 5? Desert-rocks-AND-gate and MOSFET-AND-gate are functionally equivalent by definition, but if you're not excluding side-effects it's obvious that they're not computationally equivalent.
Edit: zaph answered algorithms, so your counter-argument doesn't really target him well.
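(To make the functional-vs-computational distinction concrete, here's a toy sketch; the function names and the "side effects" are all invented for illustration. Two AND implementations agree on every input-output pair while their execution traces differ:)

```python
# Two stand-in "implementations" of AND: identical input-output
# behavior, different side effects (modeled here as trace entries).

def and_mosfet(a, b, trace):
    trace.append("switched transistors")  # pretend physical side effect
    return a and b

def and_rocks(a, b, trace):
    trace.append("rearranged rocks")      # a different side effect
    return a and b

t1, t2 = [], []
same_io = all(and_mosfet(a, b, t1) == and_rocks(a, b, t2)
              for a in (False, True) for b in (False, True))
print(same_io, t1 != t2)  # True True: same function, different computation
```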
A few thoughts on cousin_it's problem:
1. When you calculate the expected outcome for the "deciders say nay" strategy and the "deciders say yea" strategy, you already know that the deciders will be deciders. So "you are a decider" is not new information (relative to that strategy); don't change your answer. (It may be new information relative to other strategies, where the one making the decision is an individual that wasn't necessarily going to be told "you are the decider" for the original problem. If you're told "you are the decider", you should still conclude with 90% probability that the coin is tails.)
2. (Possibly a rephrasing of 1.) If the deciders in the tails universe come to the same conclusion as the deciders in the heads universe about the probability of which universe they're in, one might conclude that they didn't actually get useful information about which universe they're in.
3. (Also a rephrasing of 1.) The deciders do a pretty good job of predicting what universe they're in individually, but the situation is contrived to give the one wrong decider nine times the decision-making power. (Edit: And since you know about that trap in advance, you shouldn't fall into it.)
4. (Isomorphic?) Perhaps "there's a 90% probability that I'm in the 'tails' universe" is the wrong probability to look at. The relevant probability is, "if nine hypothetical individuals are told 'you're a decider', there's only a 10% probability that they're all in the tails universe".
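(The 90% figure itself is easy to sanity-check by simulation. This sketch assumes the usual setup as I understand it: ten people, a fair coin, nine deciders on tails, one on heads; it says nothing about which strategy is right, only about the conditional probability:)

```python
import random

# Monte Carlo estimate of P(tails | "you" are a decider).
random.seed(0)
tails_given_decider = 0
decider_count = 0
for _ in range(100_000):
    tails = random.random() < 0.5
    # tails -> nine random deciders; heads -> one random decider
    deciders = random.sample(range(10), 9 if tails else 1)
    if 0 in deciders:  # "you" are person 0
        decider_count += 1
        tails_given_decider += tails
print(tails_given_decider / decider_count)  # close to 0.9
```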
Some of your analogies strike me as quite strained:
(1) I wouldn't call the probability of being revived post near-future cryogenic freezing "non-trivial but far from certain", I would call it "vanishingly small, if not zero". If sick and dying and offered a surgery as likely to work as I think cryonics is, I might well reject it in favor of more conventional death-related activities.
(3) My past self has the same relation to me as a far-future simulation of my mind reconstructed from scans of my brain-sicle? Could be, but that's far from intuitive. Also, there's no reason to use "fear" to characterize the opposing view when "think" would work just as well.
(6) What Yvain said.
Huh, interesting. I wrote something very similar on my blog a while ago. (That was on cryonics, not existential risk reduction, and it goes on about cryonics specifically. But the point about rhetoric is much the same.)
Anyways, I agree. At the very least, some statements made by smart people (including Yudkowsky) have had the effect of increasing my blanket skepticism in some areas. On the other hand, such statements have me thinking more about the topics in question than I might have otherwise, so maybe that balances out. Then again, I'm more willing to wrestle with my skepticism than most, and I'm still probably a "mediocre rationalist" (to put it in Eliezer's terms).
It does apply; the argument you attacked is wrong for a different reason. Amusingly, I see your original comment and the follow-up arguments for the incorrectness of the previous arguments as all wrong (under assumptions not widely accepted, though). Let's break it up:
(1) "If I am revived, I expect to live for billions of years"
(2) "That seems wildly optimistic"
(3) "We must first think about what we anticipate, and our level of optimism must flow from that"
(3) is wrong because the general pattern of reasoning from how good the postulated outcome is to its plausibility is valid. (2) is wrong because it's not in fact too optimistic, quite the opposite. And (1) is wrong because it's not optimistic enough. If your concepts haven't broken down when the world is optimized for a magical concept of preference, it's not optimized strongly enough. "Revival" and "quality of life" are status quo natural categories which are unlikely to survive strong optimization according to the whole of human preference in a recognizable form.
Do you think that if someone frozen in the near future is revived, that's likely to happen after a friendly-AI singularity has occurred? If so, what's your reasoning for that assumption?
I'd be thinking that I'd like to do the honorable/right thing. Defecting has non-monetary costs, including a sense of guilt. That's the difference from a True Prisoner's Dilemma, where you actually prefer defecting if you know the other person cooperated.
That last "if you know the other person cooperated" is unnecessary; in a True Prisoner's Dilemma each player prefers defecting in any circumstance.
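(Dominance is easy to spell out with textbook payoffs; the numbers T=5, R=3, P=1, S=0 are my choice for illustration, not anything from the thread:)

```python
# Row player's payoff for each (my move, their move) pair,
# using standard PD ordering T > R > P > S with T=5, R=3, P=1, S=0.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Defection strictly dominates: better whatever the other player does.
dominates = all(payoff[("D", other)] > payoff[("C", other)] for other in "CD")
print(dominates)  # True
```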