Ming the Merciless offers you a choice that you cannot refuse. Either (a) his torturer will rip one of your fingernails off, or (b) his torturer will inflict pain more intense than you can imagine, continuously for the next 24 hours, without otherwise harming you. But in case (b) only, his evil genius neuroscientists will cause you to afterwards completely forget the experience, and any other aftereffects from the stress will be put right as well. If you refuse to make a choice, you will get (b) without the amnesia.
What do you choose?
If you choose (a), how much worse would (a) have to be, for you to choose (b)? If you choose (b), how much less bad would (a) have to be, for you to choose (a)?
lesswrong.com's web server is in the US but both of its nameservers are in Australia, leading to very slow lookups for me -- often slow enough that my resolver times out (and caches the failure).
I am my own DNS admin so I can work around this by forcing a cache flush when I need to, but I imagine this would be a more serious problem for people who rely on their ISPs' DNS servers.
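For anyone who wants to check this from their own machine, here is a minimal stdlib-only Python sketch (my own illustration, not part of the original comment) that times a lookup through the system resolver; results will vary with your resolver and cache state:

```python
# Time a DNS lookup for lesswrong.com through the system resolver.
# Standard library only; a cached answer returns almost instantly,
# while a cold lookup against distant nameservers can take much longer.
import socket
import time

start = time.time()
try:
    infos = socket.getaddrinfo("lesswrong.com", 80, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    print("resolved to:", ", ".join(addresses))
except socket.gaierror as exc:
    print("lookup failed:", exc)
print("lookup took %.2f seconds" % (time.time() - start))
```

Running it twice in a row should show the cache at work: the second call ought to be near-instant unless, as above, the failure itself got cached.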
This is interesting.
Apparently, humans (and teams of them) are beating computer programs at... protein folding?
Would anyone be interested in a 2-3 post sequence on metaethics? The idea would be to present a slower, more simplified version of Eliezer's metaethics. I've noticed that many people have had trouble grasping it (myself included), and I think an alternate presentation might help. Thoughts?
Please add a favicon; favicons make bookmarking much easier. The FHI diamond in green might work, but just about anything is better than nothing.
This is the latest Off Topic Thread I could find. Are we supposed to make off-topic posts in the Open Thread now? Anyway, to be safe, I'll post here.
There was a recent article in the NY Times about fixing tendon problems with simple eccentric exercise. It might be helpful for others here who make heavy use of computers, which can cause tendon problems. I've had pain in the tendons in my shoulders and arms, which I eventually managed to control using weekly sessions of eccentric exercise.
Are there any Less Wrong-like web sites that are about intellectual pursuits in general?
If we come up with a strong AI that we suspect is un-Friendly, should we use it to help us create Friendly AI? (Perhaps by playing a single game of 20 Questions: twenty yes-or-no answers carry only 20 bits, and the game has probably been played enough times that every possible answer sequence has already come up, so the AI could not smuggle out anything novel?)
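To make that parenthetical concrete (my own back-of-the-envelope arithmetic, not from the original comment): a game of 20 Questions is a channel of at most 20 bits, so there are only about a million distinct transcripts the AI could emit:

```python
# Channel capacity of a single game of 20 Questions: each yes/no answer
# carries one bit, so the game can transmit at most 2**20 distinct
# answer sequences -- about a million.
questions = 20
sequences = 2 ** questions
print(f"{sequences:,} possible transcripts")  # 1,048,576
```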
Anybody got a good reason for adopting a certain utility function versus some other one?
Because I can't find one, and now I feel weird: without a decent utility function, rationalism gives you knowledge but no wisdom.
I understand that your stated goal system has effects on your external behavior.
Still, I was trying to understand your claim that "If... there really is no way for me or my friends to have a permanent effect on reality, then I have no preference for what happens" (emphasis mine). Imagine that you were somehow shown a magically 100% sound, 100% persuasive proof that you could not have any permanent effect on reality, and that the entire multiverse would eventually end. In this circumstance, I doubt very much that the concept “Hollerith’s aims” would cease to be predictively useful. Whether you ate breakfast, or sought to end your life, or took up a new trade, or whatever, I suspect that your actions would have a purposive structure unlike the random bouncing about of inanimate systems. If you maintain that you would have no "preferences" under these circumstances (despite a model of "Hollerith's preferences" being useful to predict your behavior under these circumstances), this suggests you're using the term "preferences" in an interesting way.
The reason I’m trying to pursue this line of inquiry is that I am not clear what “preference” does and should mean, as any of us discuss ethics and meta-ethics. No doubt you feel some desire to realize goals that are valued by goal system zero, and no doubt you act partially on that desire as well. No doubt you also feel and act partially on other desires or preferences that a particular aspect of you does not endorse. The thing I’m confused about is... well, I don’t know how to say what I’m confused about; I’m confused. But something like:
My confusion is not specific to you, and maybe I shouldn’t have responded to you with it. But your example is particularly interesting in that the preferences you verbally endorse are particularly far from the ordinary, felt, behaviorally enacted preferences that we mostly start out with as humans. And given that distance, it is natural to ask, “Why, and in what sense, should we call these preferences ‘Hollerith’s preferences’/‘Hollerith’s ethics’/‘the right thing to do’”? Psychologically, is “right” just functioning as a floating XML tag of apparent justified-ness?
Imagine that you were somehow shown a magically 100% sound, 100% persuasive proof that you could not have permanent effect on reality, and that the entire multiverse would eventually end.
I agree with you, Anna, that in that case the concept of my aims does not cease to be predictively useful. (Consequently, I take back my "then I have no preferences".) It is just that I have not devoted any serious brain time to what my aims might be if I knew for sure I cannot have a permanent effect. (Nor does it bother me that I am bad at predicting wha...
Here's your space to talk about anything totally unrelated to being Less Wrong