Stuart_Armstrong comments on Decision Theory FAQ - Less Wrong

52 Post author: lukeprog 28 February 2013 02:15PM




Comment author: Stuart_Armstrong 15 March 2013 02:06:17PM 0 points

A nice rephrasing of the "no Oracle" argument.

Comment author: Eliezer_Yudkowsky 15 March 2013 06:20:21PM 4 points

Only in the sense that any working Oracle can be trivially transformed into a Genie. The argument doesn't say that it's difficult to construct a non-Genie Oracle and use it as an Oracle, if that's what you want; the difficulty there is for other reasons.

Nick Bostrom takes Oracles seriously, so I dust off the concept every year and take another look at it. It's been looking slightly more solvable lately; I'm not sure it would be solvable enough even assuming the trend continued.

Comment author: Stuart_Armstrong 18 March 2013 10:19:00AM 1 point

A clarification: my point was that denying orthogonality requires denying the possibility of Oracles being constructed; your post seemed a rephrasing of that general idea (that once you have a machine that can solve some things abstractly, you need only connect that abstract ability to some implementation module).

Comment author: Eliezer_Yudkowsky 18 March 2013 07:56:14PM 2 points

Ah. K. It does seem to me that "you can construct it as an Oracle and then turn it into an arbitrary Genie" sounds weaker than "denying the Orthogonality thesis means superintelligences cannot know 1, 2, and 3." The sort of person who denies OT is liable to deny Oracle construction because the Oracle itself would be converted unto the true morality, but will find it much more counterintuitive that an SI could not know something. Also, we want to focus on the general shortness of the gap from epistemic knowledge to a working agent.

Comment author: Stuart_Armstrong 19 March 2013 11:09:31AM 0 points

Possibly. I think your argument needs to be developed a bit further to show that one can usefully extract the knowledge, which is not a trivial claim for a general AI. So your argument is better in the end, but needs more support to establish.