
Mitchell_Porter comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion

Post author: Mitchell_Porter | 18 points | 08 August 2012 01:16PM


Comments (146)


Comment author: Mitchell_Porter | 09 August 2012 04:13:44AM | -1 points

The idea is not that state machines can't have qualia. Something with qualia will still be a state machine. But you couldn't know that something had qualia, if you just had the state machine description and no preexisting concept of qualia.

If a certain set of electrons is what's conscious in the brain, my point is that the "electrons" are actually qualia, that this isn't part of our physics concept of what an electron is, and that you - or a Friendly AI - couldn't arrive at this "discovery" by reasoning just within physical and computational ontologies.

Comment author: Manfred | 09 August 2012 01:11:02PM | 1 point

> you - or a Friendly AI - couldn't arrive at this "discovery" by reasoning just within physical and computational ontologies.

Could an AI just look at the physical causes of humans saying "I think I have qualia"? Why wouldn't these electrons be a central cause, if they're the key to qualia?

Comment author: David_Gerard | 09 August 2012 08:36:00PM | 0 points

Please unpack the word "qualia", and please explain how the presence or absence of these phenomena would make an observable difference in the problem you are addressing.

Comment author: Mitchell_Porter | 10 August 2012 06:45:24AM | -1 points

See this discussion. Physical theories of human identity must equate the world of appearances, which is the only world that we actually know about, with some part of a posited world of "physical entities". Everything from the world of appearances is a quale, but an AI with a computational-materialist philosophy only "knows" various hypotheses about what the physical entities are. The most it could do is develop a concept like "the type of physical entity which causes a human to talk about appearances", but it still won't spontaneously attach the right significance to such concepts (e.g. to a concept of pain).

I have agreed elsewhere that it is - remotely! - possible that an appropriately guided AI could solve the hard problems of consciousness and ethics before humans do, e.g. by establishing a fantastically detailed causal model of human thought and contemplating the deliberations of a philosophical sim-human. But when even the humans guiding the AI abandon their privileged epistemic access to phenomenological facts, and instead imitate the AI's limitations by restricting themselves to computational epistemology, the project is doomed.