Ming the Merciless offers you a choice that you cannot refuse. Either (a) his torturer will rip one of your fingernails off, or (b) his torturer will inflict pain more intense than you can imagine, continuously for the next 24 hours, without otherwise harming you. But in case (b) only, his evil genius neuroscientists will afterwards cause you to completely forget the experience, and any other aftereffects of the stress will be put right as well. If you refuse to make a choice, you will get (b) without the amnesia.
What do you choose?
If you choose (a), how much worse would (a) have to be, for you to choose (b)? If you choose (b), how much less bad would (a) have to be, for you to choose (a)?
lesswrong.com's web server is in the US but both of its nameservers are in Australia, leading to very slow lookups for me -- often slow enough that my resolver times out (and caches the failure).
I am my own DNS admin so I can work around this by forcing a cache flush when I need to, but I imagine this would be a more serious problem for people who rely on their ISPs' DNS servers.
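If anyone wants to check whether they're hitting the same problem, here is a minimal sketch in Python (standard library only; nothing assumed beyond the hostname) that times a few lookups in a row. On an affected machine the first attempt should take noticeably long, or fail outright, while later attempts may return almost instantly once the answer (or the failure) has been cached by the resolver:

```python
import socket
import time

def time_lookup(hostname, attempts=3):
    """Time successive DNS lookups; later attempts may be served
    from the resolver's cache and so return much faster."""
    for i in range(attempts):
        start = time.monotonic()
        try:
            addr = socket.gethostbyname(hostname)
            print(f"attempt {i + 1}: {hostname} -> {addr} "
                  f"in {time.monotonic() - start:.2f}s")
        except socket.gaierror as err:
            print(f"attempt {i + 1}: failed after "
                  f"{time.monotonic() - start:.2f}s ({err})")

time_lookup("lesswrong.com")
```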
This is interesting.
Apparently, humans (and teams of them) are beating computer programs at... protein folding?
Would anyone be interested in a 2-3 post sequence on metaethics? The idea would be to present a slower, more simplified version of Eliezer's metaethics. I've noticed that many people have had trouble grasping it (myself included), and I think an alternate presentation might help. Thoughts?
Please add a favicon; favicons make bookmarking much easier. The FHI diamond in green might work, but just about anything is better than nothing.
This is the latest Off Topic Thread I could find. Are we supposed to make off-topic posts in the Open Thread now? Anyway, to be safe, I'll post here.
There was a recent article in the NY Times about fixing tendon problems with simple eccentric exercise. It might be helpful for others here who make heavy use of computers, which can cause such problems. I've had pain in the tendons in my shoulders and arms, which I eventually managed to control using weekly sessions of eccentric exercise.
Are there any Less Wrong-like web sites that are about intellectual pursuits in general?
If we come up with a strong AI that we suspect is un-Friendly, should we use it to help us create Friendly AI? (Perhaps by playing a single game of 20 Questions, which has probably been played enough times that every possible sequence of yes-or-no answers has come up?)
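For scale, the space of possible games here is small; a quick check of the arithmetic (nothing assumed beyond the 20-question count):

```python
# 20 yes-or-no questions allow 2^20 distinct answer sequences.
print(2 ** 20)  # 1048576 -- about a million
```

About a million distinct sequences, which is why it's at least plausible that all of them have occurred in past games.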
Anybody got a good reason for adopting a certain utility function versus some other one?
Because I can't find one, and now I feel weird: without a decent utility function, rationalism gives you knowledge but no wisdom.
Imagine that you were somehow shown a magically 100% sound, 100% persuasive proof that you could not have a permanent effect on reality, and that the entire multiverse would eventually end.
I agree with you, Anna, that in that case the concept of my aims does not cease to be predictively useful. (Consequently, I take back my "then I have no preferences".) It is just that I have not devoted any serious brain time to what my aims might be if I knew for sure I cannot have a permanent effect. (Nor does it bother me that I am bad at predicting what I might do if I knew for sure I cannot have a permanent effect.)
Most of the people who say they are loyal to goal system zero seem to have only a superficial commitment to it. In contrast, Garcia clearly had a very strong, deep commitment. Another way of saying what I said above: like Garcia's, my commitment to goal system zero is strong and deep. But that is probably not helping you.
One of the ways I have approached CEV is to think of the superintelligence as implementing what would have happened if the superintelligence had not come into being -- with certain modifications. An example of a modification you and I will agree is desirable: if Joe suffers brain damage the day before the superintelligence comes into being, the superintelligence arranges things the way Joe would have arranged them if he had not suffered the brain damage. The superintelligence might learn what that is by, e.g., reading what Joe posted on the internet before his injury. In summary, one line of investigation that seems worthwhile to me is to get away from this slippery concept of preference or volition and think instead of what the superintelligence predicts would have happened if it had not acted. Note that, e.g., the human sense of right and wrong is predicted by any competent agent to have huge effects on what will happen.
My adoption of goal system zero in 1992 helped me to resolve an emotional problem of mine. I severely doubt that describing it would help your professional goals and concerns, though.
Would you go into why you only care about permanent effects? It seems highly bizarre to me (especially since, as Eliezer has pointed out, everything that happens is permanent insofar as it occupies a volume of 4D spacetime).
Here's your space to talk about anything totally unrelated to being Less Wrong.