
Will_Newsome comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong, 15 May 2012 10:23AM




Comment author: Will_Newsome, 15 May 2012 05:59:37PM, 8 points

We will also take the materialistic position that humans themselves can be viewed as non-deterministic algorithms[2]

I'm not a philosopher of mind, but I think "materialistic" might be a misleading word here, being too similar to "materialist". Wouldn't "computationalistic" or maybe "functionalistic" be more precise? ("-istic" as opposed to "-ist", to avoid connotational baggage.) Also, it's ambiguous whether footnote [2] is a stipulation for interpreting the paper or a brief description of the consensus view in physics.

At various points you make somewhat bold philosophical or conceptual claims based on speculative mathematical formalisms. Even though I'm familiar with, and have much respect for, the cited mathematics, this still makes me nervous: when I read philosophical papers that take this approach, my prior is high for subtle (or subtly unjustified) equivocation. I'd be even more suspicious were I a philosopher not already familiar with universal AI, which isn't a well-known or widely respected academic subfield. Finding clearly trustworthy analogies between mathematical and phenomenal concepts is a hard problem, both when thinking about the issue oneself and when presenting one's thoughts to others, and there may be no good solution in general; but there are a few instances in your paper that I think are especially shaky. E.g.,

For utility function maximisers, the AIXI is the theoretically best agent there is, more successful at reaching its goals (up to a finite constant) than any other agent (Hutter, 2005). AIXI itself is incomputable, but there are computable variants such as AIXItl or Gödel machines (Schmidhuber, 2007) that accomplish comparable levels of efficiency. These methods work for whatever utility function is plugged into them. Thus in the extreme theoretical case, the Orthogonality thesis seems trivially true.

You overreach here. AIXItl or Gödel machines might not be intelligent even given arbitrarily many resources; in fact, I believe Eliezer's position is that Gödel machines immediately run into intractable Löbian problems, and AIXItl could share a similar fate. As far as I know, no one has found an agent algorithm that fits your requirements without controversy. E.g., the grounding problem is unsolved, so we can't know that any given agent algorithm won't reliably end up wireheading. So the theoretical Orthogonality thesis isn't trivially true, contra your claim, and such an instance of overreaching justifies hypothetical philosophers' skepticism about the general soundness of your analogical approach.

Unfortunately I'll have to end there.