OrphanWilde comments on Friendly AI and the limits of computational epistemology - Less Wrong

Post author: Mitchell_Porter 08 August 2012 01:16PM


Comment author: OrphanWilde 08 August 2012 02:22:51PM 2 points

I might be mistaken, but it seems like you're putting forward a theory of consciousness, as opposed to a theory of intelligence.

Two issues with that: first, that's not necessarily the goal of AI research; second, you're evaluating consciousness, or possibly intelligence, from the inside rather than the outside.

Comment author: dbc 08 August 2012 03:51:18PM 2 points

I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.

Comment author: OrphanWilde 08 August 2012 03:55:42PM 2 points

That presumes consciousness can only be understood or recognized from the inside. An AI doesn't have to know what consciousness feels like (or more particularly, what "feels like" even means) in order to recognize it.

Comment author: torekp 11 August 2012 05:23:29PM 0 points

True, but it does need to recognize it; and if it is somehow irreversibly committed to computationalism, and that commitment is a mistake, it will fail to promote consciousness correctly.

For what it's worth, I strongly doubt Mitchell's argument for the "irreversibly committed" step. Even an AI lacking all human-like sensation and feeling might reject computationalism, I suspect, provided that it's false.