turchin comments on Open Thread March 28 - April 3 , 2016 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Well, for the avoidance of doubt, I do not endorse any such use and I hope I haven't fallen into such sloppiness myself.
No, I didn't intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution's goals, which of course need not be ours) then that isn't generally invalidated by new discoveries about why the world is the way that's made those things evolutionarily fruitful.
(Of course it could be, given the "right" discoveries. Suppose it turns out that something about humans having sex accelerates some currently unknown process that will in a few hundred years make the earth explode. Then the urge to have sex that evolution has implanted in most people would be evolutionarily suboptimal in the long run and we might do better to use artificial insemination until we figure out how to stop the earth-exploding process.)
You could have deduced that I'd noticed that, from the fact that I wrote
but no matter.
I didn't intend to say or imply that, either, and this one I don't see how you got out of what I wrote. I apologize if I was very unclear. But I might endorse as a version of Egan's law something like "If something is a terrible risk, discovering new scientific underpinnings for things doesn't stop it being a terrible risk unless the new discoveries actually change either the probabilities or the consequences". Whether that applies in the present case is, I take it, one of the points under dispute.
I take it you mean "might not be"; it could turn out that even in this rather unusual situation "normal" is the best you can do.
I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics. (QI seems to me to mean: standard physics, plus a decision to condition probabilities on future rather than present epistemic state. The first bit is unproblematic; the second bit -- which is what you need to say e.g. "I will survive" -- seems to me like a decision rather than a proposition, and I don't know what it would mean to say that it does or doesn't work.)
I'm not really seeing any connection to speak of between cryonics and QI. (Except for this: suppose you reckon that cryonics has a 5% chance of working on other people, but QI considerations lead you to say that for you it will almost certainly work. No, sorry, I see you give QI a 10% chance of working; so I mean that for you it will work with probability more like 10%. Does that mean you'd be prepared to pay about twice as much for cryonics as you would without bringing QI into it, given the presumably regrettable costs for whatever influence you might have hoped to have post mortem using the money: children, charities, etc.?)
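The arithmetic here can be restated as a toy calculation. The 5% and 10% figures are the ones quoted above; the mixture model (if QI works, cryonics is certain to work for you, otherwise the ordinary estimate applies) is my own assumption about how the figures combine, not something either commenter states:

```python
# Toy restatement of the cryonics-plus-QI arithmetic above.
p_qi = 0.10    # quoted probability that QI works
p_cryo = 0.05  # quoted probability that cryonics works on other people

# Assumption (mine): if QI works, cryonics works for you with
# probability 1; otherwise the ordinary 5% estimate applies.
p_for_you = p_qi * 1.0 + (1 - p_qi) * p_cryo

print(round(p_for_you, 3))  # 0.145
```

Depending on how one rounds, this is "more like 10%", i.e. roughly two to three times the bare 5%, which is the shape of the willingness-to-pay question being asked.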
QI predicts not different variants of the world, but different variants of my future experiences. It says that I will never experience non-existence, but will instead experience my most probable way of surviving. If I have a 1-in-1000 chance of surviving some situation, QI shifts the probability that I will experience survival up to 1.
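This shift is just the conditioning move described earlier in the thread: the physics is unchanged, and only the set of branches one attends to changes. A minimal Monte Carlo sketch of the 1-in-1000 example (the simulation and its parameters are mine, purely illustrative):

```python
import random

random.seed(0)

TRIALS = 1_000_000
P_SURVIVE = 1 / 1000  # the 1-in-1000 situation from the comment

outcomes = [random.random() < P_SURVIVE for _ in range(TRIALS)]

# Ordinary probability of survival, conditioned on the present
# epistemic state (all branches counted):
p_unconditional = sum(outcomes) / TRIALS

# QI-style probability, conditioned on a future epistemic state:
# restrict attention to the branches that contain a surviving observer.
survivors = [o for o in outcomes if o]
p_conditioned = sum(survivors) / len(survivors)

print(p_unconditional)  # close to 0.001
print(p_conditioned)    # 1.0
```

The conditioned figure is 1 by construction, which is the sense in which QI adds a choice of conditioning rather than a new physical prediction.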
But it could fail in unpredictable ways: if we are in a simulation and my plane crashes, my next experience will probably be a screen reading "game over", not the experience of being alive on the ground.
I agree with what you said in brackets about cryonics. I also think that investing in cryonics helps to promote it and other good things, so it doesn't conflict with the regrettable costs you mention. I think one rational course of action is to make a will leaving all one's money to a cryonics company. (This also depends on the existence and well-being of one's children, and on other useful charities that could prevent x-risks, so it may need more complex consideration.)