Ming the Merciless offers you a choice that you cannot refuse. Either (a) his torturer will rip one of your fingernails off, or (b) his torturer will inflict pain more intense than you can imagine, continuously for the next 24 hours, without otherwise harming you. But in case (b) only, his evil genius neuroscientists will cause you to afterwards completely forget the experience, and any other aftereffects from the stress will be put right as well. If you refuse to make a choice, you will get (b) without the amnesia.
What do you choose?
If you choose (a), how much worse would (a) have to be, for you to choose (b)? If you choose (b), how much less bad would (a) have to be, for you to choose (a)?
lesswrong.com's web server is in the US but both of its nameservers are in Australia, leading to very slow lookups for me -- often slow enough that my resolver times out (and caches the failure).
I am my own DNS admin so I can work around this by forcing a cache flush when I need to, but I imagine this would be a more serious problem for people who rely on their ISPs' DNS servers.
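The "caches the failure" behavior described above is negative caching: a resolver remembers not just successful lookups but failed ones, so one timeout keeps answers failing until the negative-TTL entry expires. A minimal sketch in Python (the class, TTL values, and lookup callback are all illustrative, not any real resolver's API):

```python
import time

class ResolverCache:
    """Toy DNS cache illustrating negative caching: failed lookups
    are cached too, so one timeout keeps hurting until it expires."""

    def __init__(self, positive_ttl=3600, negative_ttl=300):
        self.positive_ttl = positive_ttl
        self.negative_ttl = negative_ttl
        self._entries = {}  # name -> (result_or_None, expiry_time)

    def resolve(self, name, lookup, now=None):
        now = time.time() if now is None else now
        cached = self._entries.get(name)
        if cached and cached[1] > now:
            return cached[0]  # may be None: a cached failure
        try:
            result = lookup(name)
            self._entries[name] = (result, now + self.positive_ttl)
            return result
        except TimeoutError:
            # Cache the failure: later lookups fail fast (without
            # re-querying) until the negative TTL expires.
            self._entries[name] = (None, now + self.negative_ttl)
            return None
```

This is why a single slow lookup against the distant nameservers can make the site unreachable for minutes afterwards, and why flushing the cache (as the commenter does) fixes it immediately.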
This is interesting.
Apparently, humans (and teams of them) are beating computer programs at... protein folding?
Would anyone be interested in a 2-3 post sequence on metaethics? The idea would be to present a slower, more simplified version of Eliezer's metaethics. I've noticed that many people have had trouble grasping it (myself included), and I think an alternate presentation might help. Thoughts?
Please add a favicon; they make bookmarking much easier. The FHI diamond in green might work, but just about anything is better than nothing.
This is the latest Off Topic Thread I could find. Are we supposed to make off-topic posts in the Open Thread now? Anyway, to be safe, I'll post here.
There was a recent article in the NY Times about fixing tendon problems with simple eccentric exercise. It might be helpful for others here who make heavy use of computers, which can cause tendon problems. I've had pain in the tendons in my shoulders and arms, which I eventually managed to control using weekly sessions of eccentric exercise.
Are there any Less Wrong-like web sites that are about intellectual pursuits in general?
If we come up with a strong AI that we suspect is un-Friendly, should we use it to help us create Friendly AI? (Perhaps by playing a single game of 20 Questions, which has probably been played enough times that every possible sequence of yes-or-no answers has come up?)
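For scale on the parenthetical: a game of 20 yes-or-no questions has 2^20 distinct answer sequences, so "every possible sequence has come up" means on the order of a million prior games:

```python
# Number of distinct yes/no answer sequences in a 20-question game
sequences = 2 ** 20
print(sequences)  # 1048576, roughly a million
```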
Anybody got a good reason for adopting a certain utility function versus some other one?
Because I can't find one, and now I feel weird, because without a decent utility function, rationalism gives you knowledge but no wisdom.
Suppose the UFAI figures out a fake theorem that it would like us to believe, because it would lead us down a path of creating an AI it likes. If we were to ask it to prove this fake theorem, it would give back to us something that looks very much like a proof, crafted so that we would miss the point where it goes wrong. Even if we require a machine-verifiable proof, it might take advantage of a flaw in our automatic verifiers, or in the way we interpret the results. So how does it get us to ask about this fake theorem? It might manipulate its proof of a theorem we do ask about to inspire us to ask that question. It might respond to our request for a proof with, "Well, that is not quite right, but if you make these modifications...". Keep in mind, this is a hostile intelligence that is way beyond us. It will take any opportunity to subtly manipulate us that it gets. And these are only the ideas that I, a mere human, could come up with.
I am not sure what sort of template you mean, but I suspect that it will have the same problem. Basically, room for the UFAI to use its superior intelligence to help us is room for betrayal.
Step back and check what you are arguing. The discussion is whether a scenario where the UFAI is helpful is at all plausible. Of course all sorts of stuff can go wrong. Of course it isn't a good idea. The arguments saying "but this could go wrong too" don't advance the discussion a bit, as that's already understood.
Here's your space to talk about anything totally unrelated to being Less Wrong