Zachary_Kurtz comments on Savulescu: "Genetically enhance humanity or face extinction" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (193)
Doomsday predictions have never come true in the past, no matter how much confidence the futurist had. Why should we believe this particular futurist?
And why would that be?...
I don't think pre-modern catastrophes are relevant to this discussion.
The point about the anthropic issues is well taken, but I still contend that we should be skeptical of over-hyped predictions by supposed experts. Especially when they propose solutions that (apparently, to me) reduce 'freedoms.'
There is a grand tradition of them failing.
And, if we do have the anthropic explanation to 'protect us' from doomsday-like outcomes, why should we worry about them?
Can you explain how it is not hypocritical to consider anthropic explanations relevant to previous experiences but not to future ones?
The observation that you currently exist trivially implies that you haven't been destroyed, but it doesn't imply that you won't be destroyed. As simple as that.
I can't observe myself getting destroyed either, however.
When you close your eyes, the world doesn't go dark.
The world probably doesn't go dark. We can't know for sure without using sense data.
http://lesswrong.com/lw/pb/belief_in_the_implied_invisible/
I think I was equating quantum immortality with anthropic explanations, in general. My mistake.
You're talking about the number of branches, but perhaps the important thing is not that but measure, i.e., squared amplitude. Branching preserves measure, while quantum suicide doesn't, so you can't make up for it by branching more times if what you care about is measure.
It seems clear that on a revealed preference level, people do care about measure, and not the number of branches, since nobody actually attempts quantum suicide, nor do they try to do anything to increase the branching rate.
If you go further and ask why we do, or should, care about measure instead of the number of branches, I have to answer I don't know, but I think one clue is that those who do care about the number of branches but not measure will end up in a large number of branches but have small measure, and they will have high algorithmic complexity/low algorithmic probability as a result.
(I may have written more about this in an OB comment, and I'll try to look it up. ETA: Nope, can't find it now.)
No, I'm not claiming that. I think people avoid quantum suicide because they fear death. Perhaps we can interpret that as caring about measure, or maybe not. In either case there is still the question of why we fear death, and whether it makes sense to care about measure. As I said, I don't know the answers, but I think I do have a clue that others don't seem to have noticed yet.
ETA: Or perhaps we should take the fear of death as a hint that we should care about measure, much like how Eliezer considers his altruistic feelings to be a good reason for adopting utilitarianism.
If quantum suicide works, then there's little hurry to use it, since it's not possible to die before getting the chance. Anyone who does have quantum immortality should expect to have it proven to them, by going far enough over the record age if nothing else. So attempting quantum suicide without such proof would be wrong.
Um, what? Why did we evolve to fear death? I suspect I'm missing something here.
You're converting an "is" to an "ought" there with no explanation, or else I don't know in what sense you're using "should".
Have you looked at Jacques Mallah's papers?
Yes, something like that.
Source? I'm curious how that's calculated.
Well, if you have anyone that cares deeply about your continued living, then doing so would hurt them deeply in 99.999999% of universes. But if you're completely alone in the world or a sociopath, then go for it! (Actually, I calculated the probability of not winning the Mega Millions jackpot, which is 1 - 1/(C(56,5)*46) = 1 - 1/1.76e8 ≈ 99.9999994%. Note it's C(56,5), not 56^5, since the order of the white balls doesn't matter. Doesn't affect your argument, of course.)
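The jackpot odds in the parenthetical can be checked directly. A minimal sketch, assuming the 2005–2010 Mega Millions format (5 white balls drawn from 56, plus 1 "Mega" ball drawn from 46):

```python
from math import comb

# Order of the 5 white balls doesn't matter, so use combinations C(56,5),
# not the raw power 56**5. The Mega ball is an independent 1-in-46 pick.
jackpot_odds = comb(56, 5) * 46   # 3,819,816 * 46 = 175,711,536

# Probability of NOT winning the jackpot on a single ticket.
p_lose = 1 - 1 / jackpot_odds

print(jackpot_odds)        # 175711536
print(f"{p_lose:.7%}")     # roughly 99.9999994%
```

Using 56**5 instead inflates the denominator to about 2.5e10, which is where the original 99.999999996% figure came from.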
This is a legitimate heuristic, but how familiar are you with the object-level reasoning in this case, which IMO is much stronger?
Not very. Thanks for the link.
So I assume you're not afraid of AI?