I think perhaps the better rationality quote from that honors linear algebra site you linked is "See if you can use this proof to show the square root of three is irrational. Then try the square root of four. If it works, you did something wrong."
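For anyone who doesn't have the referenced proof at hand, here is the standard argument, sketched in LaTeX; the quote's point is that the parity step below is exactly the one that breaks for the square root of four.

```latex
Suppose $\sqrt{2} = p/q$ with $p, q$ integers and $\gcd(p, q) = 1$.
Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even: $p = 2k$.
Substituting, $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ is even too,
contradicting $\gcd(p, q) = 1$. The same argument works for $\sqrt{3}$
(replace ``even'' with ``divisible by $3$''), because $3 \mid p^2$
implies $3 \mid p$. But for $\sqrt{4}$ the analogous inference
``$4 \mid p^2 \Rightarrow 4 \mid p$'' is false (take $p = 2$), so no
contradiction arises; as it must not, since $\sqrt{4} = 2$ is rational.
```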
If we take the outside view, we can see that overall the introduction of technology has done humanity quite a lot of good; let's not make the mistake of being too cautious.
No, but I damn well expect you to defect the hundredth time. If he's playing true tit-for-tat, you can exploit that by playing along for a time, but cooperating on the hundredth go can't help you in any way; it will only kill a million people.
Do not kill a million people, please.
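The backward-induction point above can be checked with a small simulation. This is a sketch, not anything from the original thread; the payoff values (T=5, R=3, P=1, S=0) are an illustrative assumption, standard for the prisoner's dilemma but not stated in the comments.

```python
# Minimal sketch: against a known fixed-horizon tit-for-tat opponent,
# cooperating throughout and then defecting on the final round strictly
# beats cooperating on every round.
# Payoff values are an assumption: standard PD ordering T > R > P > S.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play_vs_tit_for_tat(my_moves):
    """Score a fixed move sequence against tit-for-tat, which
    cooperates first and then repeats our previous move."""
    total, their_move = 0, "C"
    for mine in my_moves:
        total += PAYOFF[(mine, their_move)]
        their_move = mine  # tit-for-tat copies our last move
    return total

rounds = 100
always_cooperate = ["C"] * rounds
defect_last = ["C"] * (rounds - 1) + ["D"]

print(play_vs_tit_for_tat(always_cooperate))  # 100 * 3 = 300
print(play_vs_tit_for_tat(defect_last))       # 99 * 3 + 5 = 302
```

The defection on the last round costs nothing in future retaliation, since there is no future round, which is what makes the final-round defection dominant against this opponent.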
I would certainly *hope* you would defect, Eliezer. Can I really trust you with the future of the human race?
You can make a pretty cool society, but it's meaningless unless you can protect it from disruption; that's the point of the quote. Of course the converse also holds: you can protect your society from disruption, but it's meaningless unless it's pretty cool.
"Re: We win, because anything less would not be maximally wonderful.
Um, it depends. If we have AI, and they have AI and they have chosen a utility function closer to that which would be favoured by natural selection under such circumstances - then we might well lose.
Is spending the hundred million years gearing up for alien contact - to avoid being obliterated by it - 'maximally wonderful'? Probably not for any humans involved. "
Then wouldn't you rather we lose?
"Everything being maximally wonderful is a bit like what the birds of paradise have. What will happen when their ecosystem is invaded by organisms which have evolved along less idyllic lines?"
We win, because anything less would not be maximally wonderful.
Caledonian, if you want to build an AI that locks the human race in tiny pens until it gets around to slaughtering us, that's... lovely, and I wish you... the best of luck, but I think all else equal I would rather support the guy who wants to build an AI that saves the world and makes everything maximally wonderful.
"Would you have classified the happiness of cocaine as 'happiness', if someone had asked you in another context?"
I'm not sure I understand what you mean here. Do you think it's clear that coke-happiness is happiness, or do you think it's clear that coke-happiness is not happiness?
It seems like you should be able to make experimental predictions about irreducible things. Take a quark, or a gluon, or the Grand Quantum Lifestream, or whatever reality is at the bottom, I don't really follow physics closely. In any case, you can make predictions about those things, and that's part and parcel of making predictions about airplanes and grizzly bears.
Even if it turns out that the Grand Quantum Lifestream is reducible further, you can make predictions about its components. Unless you think everything is infinitely reducible, but that proposition strikes me as unlikely.
Well, maybe the fundamental basis of reality is like a fractal. I wouldn't want to rule that out without thinking about it. But in any case it doesn't sound like what you're arguing.