I didn't know that EY's purpose with this blog was to recruit future AI researchers, but since it is, I for one am someone with whom he has succeeded.
One very funny consequence of defining "fair" as "that which everyone agrees to be fair" is that if you could indeed convince everyone of the correctness of that definition, nobody could ever know what IS fair: they would look at their definition of "fair", which is "that which everyone agrees to be fair", then look at what everyone does agree to be fair, and conclude that "that which everyone agrees to be fair" is "that which everyone agrees t...
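As a minimal sketch of the regress (every name here is illustrative, nothing from the post itself), each person's agreement just defers back to the group's verdict, so the evaluation never bottoms out:

```python
def is_fair(action, people):
    # By definition, "fair" = "that which everyone agrees to be fair".
    return all(agrees(person, action, people) for person in people)

def agrees(person, action, people):
    # Everyone has accepted the definition, so each person's judgment
    # simply defers back to the group's verdict.
    return is_fair(action, people)

# is_fair("split the pie evenly", ["A", "B"]) raises RecursionError:
# the definition never bottoms out in any concrete standard of fairness.
```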
I stopped to answer your definitional questions while reading, and defined "arbitrary" as "some variable in a system of justifications where the variable could be anything and be equally justified regardless of what it is", and "justification" as "the belief that the action being justified will directly or indirectly further the cause of the utility function in whose terms it is defined, and will do so more effectively than any other action; for beliefs, the belief that the belief being justified will reflect the territory as accurately as possible (I hope I'm not passing the buck here)".
When you dream about an apple, though, can you be said to recognize anything? No external stimulus triggers the apple-recognition program; it just happens to be triggered by unpredictable, tired firings of the brain, and your starting to dream about an apple is the result of its being triggered in the first place, not the other way around.
Confusing - now the central question of rationality is no longer "why do you believe what you believe?"
The point is: even in a moral-less, meaningless, nihilistic universe, it all adds up to normality.
Pablo Stafforini: A brief note to the (surprisingly numerous) egoists/moral nihilists who have commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you realize that the argument that begins by questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or th...
'You can't rationally choose your utility function.' - I'm actually expecting Eliezer to write a post on this; it's a core issue when thinking about morality, etc.
James Andrix: Doing nothing or picking randomly are also choices; you would need a reason for them to be the correct rational choice. 'Doing nothing' in particular is the kind of thing we would design into an agent as a safe default, but 'set all motors to 0' is as much a choice as 'set all motors to 1'. Acting at random is no more correct than doing each potential option sequentially.
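To sketch that point in toy code (the agent and its action space are hypothetical, purely for illustration): the "safe default" and the random policy still emit actions from exactly the same space as everything else.

```python
import random

# Hypothetical action space for a motor-driven agent (illustrative only).
ACTIONS = ["set all motors to 0", "set all motors to 1"]

def act(policy):
    # A "safe default" still outputs an element of the same action space
    # as any other policy: doing nothing is a choice on the same footing
    # as doing something, and picking randomly is one more policy, not
    # an escape from having to choose.
    if policy == "do nothing":
        return "set all motors to 0"
    if policy == "pick randomly":
        return random.choice(ACTIONS)
    return policy  # any explicit action is simply passed through
```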
Doing nothing or picking randomly are no less rationally justified than acting by some arbitrary moral system. There is no rationally justifiable way that any rational being "should" act. You can't rationally choose your utility function.
I'd do everything that I do now. Moral realism demolished.
Good post. This should be elementary, but people often point out these kinds of seeming paradoxes with great glee when arguing for relativism. Now I can point them to this post.
Ian C. is implying an AI design where the AI would look at its programmers, determine what they had really wanted to program it to do (and would have, had they been perfect programmers), and then do that.
But that very function would have to be programmed into the AI. My head, for one, spins in self-referential confusion over this idea.
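A toy sketch of the self-reference, with every name hypothetical (this is not anyone's actual proposal): the function that is supposed to recover the programmers' true intent is itself a program those same programmers wrote.

```python
def model_of(programmers):
    # Hypothetical: build a predictive model of the programmers.
    return {"modeled": programmers}

def idealized_goal(programmer_model):
    # Hypothetical: ask the model what goal it would have specified,
    # had it been a perfect programmer.
    return "whatever this (fallible) inference code happens to output"

def what_they_really_wanted(programmers):
    # The self-referential catch: this function was itself written by
    # the same fallible programmers whose intent it is meant to recover.
    # If it is buggy, the AI has no independent standard by which to
    # notice, because "what they really wanted" just IS its output.
    return idealized_goal(model_of(programmers))
```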