JGWeissman comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

20 points. Post author: Stuart_Armstrong, 15 May 2012 10:23AM




Comment author: Will_Newsome 16 May 2012 07:21:44PM 5 points

Offbeat counter: You're assuming that this ontology that privileges "goals" over e.g. morality is correct. What if it's not? Are you extremely confident that you've carved up reality correctly? (Recall that EU maximizers haven't been shown to lead to AGI, and that many philosophers who have thought deeply about the matter hold meta-ethical views opposed to your apparent meta-ethics.) I.e., what if your above analysis is not even wrong?

Comment author: JGWeissman 16 May 2012 07:35:02PM 2 points

You're assuming that this ontology that privileges "goals" over e.g. morality is correct.

I don't believe that goals are ontologically fundamental. I am reasoning (at a high level of abstraction) about the behavior of a physical system designed to pursue a goal. If I understood what you meant by "morality", I could reason about a physical system designed to use that instead, and would likely predict different behaviors from it than from the goal-pursuing system; but that doesn't change my point about what happens with goals.

Recall that EU maximizers haven't been shown to lead to AGI

I don't expect EU maximizers to lead to AGI. I expect EU maximizing AGIs, whatever has led to them, to be effective EU maximizers.
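For readers unfamiliar with the term, "EU maximizer" here means an agent that picks whichever action has the highest expected utility: the probability-weighted average of the utilities of its possible outcomes. A minimal sketch, with made-up actions, outcomes, and numbers purely for illustration (none of these come from the thread):

```python
# Minimal sketch of expected-utility (EU) maximization: score each action
# by sum over outcomes of P(outcome | action) * U(outcome), then pick the
# action with the highest score. All names and numbers below are hypothetical.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome) for one action."""
    return sum(p * utility[outcome]
               for outcome, p in outcome_probs[action].items())

def choose_action(actions, outcome_probs, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical toy decision problem:
utility = {"win": 10.0, "draw": 0.0, "lose": -5.0}
outcome_probs = {
    "safe":  {"win": 0.2, "draw": 0.7, "lose": 0.1},  # EU = 1.5
    "risky": {"win": 0.5, "draw": 0.0, "lose": 0.5},  # EU = 2.5
}
best = choose_action(["safe", "risky"], outcome_probs, utility)  # "risky"
```

The point of JGWeissman's remark is about this decision rule, not about any particular implementation: whatever process produces an AGI, if the resulting system behaves like the `choose_action` above with respect to its utility function, the orthogonality argument applies to it.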

Comment author: Will_Newsome 16 May 2012 08:15:38PM 4 points

Sorry, I meant "ontology" in the information-science sense, not the metaphysics sense; I simply meant that you're conceptually (not necessarily metaphysically) privileging goals. What if you're wrong to do that? I suppose I'm suggesting that carving out "goals" might be smuggling in conclusions that make you think universal convergence is unlikely. If you conceptually privileged rational morality instead, as many meta-ethicists do, then your conclusions might change; to keep them, it seems you'd have to be unjustifiably confident in your "goal"-centric conceptualization.

Comment author: JGWeissman 16 May 2012 08:26:30PM 1 point

I think I am only "privileging" goals in a weak sense: by talking about a goal-driven agent, I do not deny the possibility of an agent built on anything else, including your "rational morality", though I don't know what that is.

Are you arguing that a goal-driven agent is impossible? (Note that this is a stronger claim than it being wiser to build some other sort of agent, which would not contradict my reasoning about what a goal-driven agent would do.)

Comment author: Will_Newsome 11 June 2012 10:15:42PM 1 point

(Yeah, the argument would have been something like: given a sufficiently rich and explanatory concept of "agent", goal-driven agents might not be possible. Or, more precisely, they aren't agents insofar as they're making tradeoffs in favor of local homeostatic-like improvements rather than traditionally rational, complex, normatively loaded decision policies. Or something like that.)