Stuart_Armstrong comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong 15 May 2012 10:23AM




Comment author: jacob_cannell 15 May 2012 03:02:47PM 0 points

The orthogonality thesis is non-controversial. Ben's point is that what matters is not the question of what types of goals are theoretically compatible with superoptimization, but rather what types of goals we can expect to be associated with superoptimization in reality.

In reality, AGIs with superoptimization power will be created by human agencies (or their descendants), with goal systems subject to extremely narrow socio-economic filters.

The other, tangential consideration is that AGIs with superoptimization power and long planning horizons (zero time discounting) may have highly convergent instrumental values/goals, which are equivalent in effect to terminal values/goals for agents with short planning horizons (such as humans). From a human perspective, we may observe that all super-AGIs appear to have strangely similar ethics/morality/goals, even though what we are really observing are convergent instrumental values and short-term opening plans; their true goals concern the end of the universe and are essentially unknowable to us.

Comment author: Stuart_Armstrong 15 May 2012 03:06:15PM 4 points

The orthogonality thesis is non-controversial

The orthogonality thesis is highly controversial - among philosophers.