Ming the Merciless offers you a choice that you cannot refuse. Either (a) his torturer will rip one of your fingernails off, or (b) his torturer will inflict pain more intense than you can imagine, continuously for the next 24 hours, without otherwise harming you. But in case (b) only, his evil genius neuroscientists will cause you to afterwards completely forget the experience, and any other aftereffects from the stress will be put right as well. If you refuse to make a choice, you will get (b) without the amnesia.
What do you choose?
If you choose (a), how much worse would (a) have to be, for you to choose (b)? If you choose (b), how much less bad would (a) have to be, for you to choose (a)?
lesswrong.com's web server is in the US but both of its nameservers are in Australia, leading to very slow lookups for me -- often slow enough that my resolver times out (and caches the failure).
I am my own DNS admin so I can work around this by forcing a cache flush when I need to, but I imagine this would be a more serious problem for people who rely on their ISPs' DNS servers.
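For anyone who wants to check this on their own connection, here's a rough diagnostic sketch. The nameserver names below are placeholders (the real ones come from the `dig NS` query), and the flush command shown is the BIND/named one; other resolvers use different commands:

```shell
# List the authoritative nameservers for the domain.
dig NS lesswrong.com +short

# Compare query times against each nameserver directly.
# ns1.example.net / ns2.example.net are placeholders -- substitute
# the actual nameservers reported by the query above.
for ns in ns1.example.net ns2.example.net; do
  echo "--- $ns ---"
  dig @"$ns" lesswrong.com A +noall +stats | grep "Query time"
done

# Flush the local cache after a failed lookup has been cached.
# (This is the BIND version; e.g. systemd-resolved uses
# "resolvectl flush-caches" instead.)
rndc flush
```

If the reported query times are regularly near your resolver's timeout, that would explain the cached failures.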
This is interesting.
Apparently, humans (and teams of them) are beating computer programs at... protein folding?
Would anyone be interested in a 2-3 post sequence on metaethics? The idea would be to present a slower, more simplified version of Eliezer's metaethics. I've noticed that many people have had trouble grasping it (myself included), and I think an alternate presentation might help. Thoughts?
Please add a favicon; favicons make bookmarking much easier. The FHI diamond in green might work, but just about anything is better than nothing.
This is the latest Off Topic Thread I could find. Are we supposed to make off-topic posts in the Open Thread now? Anyway, to be safe, I'll post here.
There was a recent article in the NY Times about fixing tendon problems with simple eccentric exercise. It might be helpful for others here who make heavy use of computers, which can cause tendon problems. I've had pain in the tendons in my shoulders and arms, which I eventually managed to control using weekly sessions of eccentric exercise.
Are there any Less Wrong-like web sites that are about intellectual pursuits in general?
If we come up with a strong AI that we suspect is un-Friendly, should we use it to help us create Friendly AI? (Perhaps by playing a single game of 20 Questions, which has probably been played enough times that every possible sequence of yes-or-no answers has come up?)
Anybody got a good reason for adopting a certain utility function versus some other one?
Because I can't find one, and now I feel weird: without a decent utility function, rationalism gives you knowledge but no wisdom.
Well, in attempting to address your counterexample, I had to guess what you meant, as it is not very clear. What situation do you envision in which the UFAI would expect to gain utility by building an FAI?
And it seems a little strange to accuse me of offering solutions before the problem is fully explored, when I was responding to a proposal for a solution (using UFAI to build FAI).
(Also, I have edited out a typo (repetition of the word "it") in my statement which you quoted.)
What situation do you envision in which the UFAI would expect to gain utility by building an FAI?
The situation I described: cooperation between FAI and UFAI. Two unrelated AIs are never truly antagonistic, so they have something to gain from cooperation.
And it seems a little strange to accuse me of offering solutions before the problem is fully explored, when I was responding to a proposal for a solution (using UFAI to build FAI).
The same problem on both counts: confident assertions about a confusing issue. This happened twice in a row, because th...
Here's your space to talk about anything totally unrelated to being Less Wrong