In response to Taboo Your Words
Comment author: PK 16 February 2008 04:25:02AM 13 points

Sounds interesting. We must now verify whether it works for useful questions.

Could someone explain what FAI is without using the words "Friendly", or any synonyms?

In response to comment by PK on Taboo Your Words
Comment author: Origin64 04 November 2012 08:53:48PM 5 points

An AI that acts toward whatever the observer deems beneficial to the human condition. It's impossible to put that into falsifiable criteria if you can't define what is beneficial to the human race (and on what timescale). And I'm pretty confident nobody knows what's beneficial to the human condition in the longest term, because that's the very problem we're building the FAI to solve.

In the end, we will have to build an AI as best we can and trust its judgement. Or not build it. It's a cosmic gamble.

Comment author: Origin64 04 November 2012 08:38:36PM 0 points

I believed the first two: one from personal experience and the other from System 1. I guessed that, as a soft, water-fat intellectual, I'd have more trouble adjusting to a military lifestyle than someone who has actually been in a fight in his life. As for people from warmer climes dealing with warm temperatures more easily, well, I guess I believe people adapt to their circumstances. Someone from a warmer climate might sweat more and drink more water, or use less energy and so generate less heat, whereas a man in Siberia might move more than is strictly necessary to keep his body temperature stable.

The other three are in subjects I know nothing about, and therefore I couldn't have predicted them. A wise man knows his limits...

Comment author: PeterisP 24 October 2010 12:27:34PM 6 points

Well, I fail to see any need for backward-in-time causation to get the prediction right 100 out of 100 times.

As far as I understand, similar experiments have been performed in practice, and Homo sapiens splits fairly cleanly into two groups, 'one-boxers' and 'two-boxers', who generally have strong preferences toward one answer or the other due to differences in their education, logical experience, genetics, reasoning style, or other factors that are somewhat stable for a given individual.

Perfect predictive power (or even the possibility of it existing) is implied and suggested, but it isn't actually given and isn't really necessary; IMHO it's neither possible nor useful to rely on this 'perfect predictive power' in any reasoning here.

From the data given in the scenario (the 100 out of 100 that you saw), you know that Omega is a super-intelligent sorter who somehow manages to achieve 99.5% or better accuracy in sorting people into one-boxers and two-boxers.

This accuracy also seems higher than the accuracy of most (all?) people at self-evaluation; that is, as in many other decision scenarios, there is a significant difference between what people believe they would decide in situation X and what they actually decide when it happens. [Citation might be needed, but I don't have one at the moment; I do recall reading papers about such experiments.] The 'everybody is a perfect logician/rationalist and behaves as such' assumption often doesn't hold up in real life, even for self-described perfect rationalists who make a strong conscious effort to behave that way.

In effect, the data suggests that Omega probably knows your traits and decision probabilities (taking into account you taking into account all this) better than you do - it's simply smarter than Homo sapiens. Assuming that this is really so, it's better for you to choose option B. Assuming that this is not so, and you believe you can out-analyze Omega's perception of yourself, then you should choose the opposite of whatever Omega would predict of you (gaining 1,000,000 instead of 1,000, or 1,001,000 instead of 1,000,000). If you don't know what Omega knows about you, then you don't get this bonus.
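The payoff comparison above can be sketched as a toy expected-value calculation (my own illustration, not from the thread; it assumes the standard payoffs of $1,000 in box A and $1,000,000 in box B, and treats Omega simply as a predictor with some accuracy p):

```python
def expected_value(p_correct, one_box):
    """Expected payoff given Omega's accuracy p_correct and the agent's choice.

    Box A always holds $1,000; box B holds $1,000,000 iff Omega
    predicted the agent would take only box B.
    """
    if one_box:
        # You get $1,000,000 only when Omega correctly predicted one-boxing.
        return p_correct * 1_000_000
    else:
        # You always get A's $1,000, plus $1,000,000 when Omega
        # wrongly predicted you would one-box.
        return 1_000 + (1 - p_correct) * 1_000_000

# At the ~99.5% accuracy implied by 100 correct predictions out of 100:
print(expected_value(0.995, True))   # one-boxing: ~995,000
print(expected_value(0.995, False))  # two-boxing: ~6,000
```

Note that one-boxing has the higher expected value for any accuracy above 50.05%, so the argument doesn't depend on Omega being a *perfect* predictor, only on it being a decent one.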

Comment author: Origin64 03 November 2012 04:18:08PM 0 points

So what you're saying is that the only reason this problem is a problem is that it hasn't been defined narrowly enough. You don't know what Omega is capable of, so you don't know which choice to make. So there is no way to logically solve the problem (with the goal of maximizing utility) without additional information.

Here's what I'd do: I'd pick up B, open it, and take A iff I found B empty. That way, Omega's decision about what to put in the box would have to incorporate, as a variable, what Omega put in the box, causing an infinite regress that would consume all CPU cycles until the process was terminated. Although that would probably just result in the AI picking an easier victim to torment and not even giving me a measly thousand dollars.
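The regress described here can be caricatured in a few lines of Python (a toy model of my own, not anything rigorous: the mutual recursion stands in for Omega simulating an agent whose choice depends on Omega's own output):

```python
import sys

sys.setrecursionlimit(1000)  # keep the doomed regress short


def omega_prediction(depth=0):
    """Omega predicts whether the agent one-boxes.

    But the agent's strategy consults the contents of box B,
    i.e. Omega's own prediction, so Omega must recurse.
    """
    return agent_one_boxes(depth)


def agent_one_boxes(depth):
    """The strategy above: open B first, take A as well iff B is empty."""
    box_b_full = omega_prediction(depth + 1)
    return box_b_full  # one-box iff B turned out to be full


try:
    omega_prediction()
    result = "fixed point found"
except RecursionError:
    result = "no fixed point: the simulation never terminates"

print(result)
```

A naive simulator really does loop forever on this strategy, because the strategy has no fixed point: whichever prediction Omega commits to, the agent's rule makes the opposite choice come out right.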
