
Will_Newsome comments on Personal research update - Less Wrong Discussion

4 Post author: Mitchell_Porter 29 January 2012 09:32AM



Comment author: Will_Newsome 10 February 2012 08:45:23AM 1 point

Your agenda strikes me as potentially fruitful but insufficiently meta. There are many philosophical problems an FAI would need to be able to solve, and I certainly agree that consciousness is a huge one. But this would seem to me to indicate that we need to find a way to automate philosophical progress generally, rather than find a way to algorithmicize our human-derived intuitions about consciousness specifically. No? Are you of the opinion that we need to understand how brains do their magic if we're to be sure that our seed AI will be able to figure out how to do similar magic?

Wheeler talks about quantum mechanics as statistically describing the behavior of masses of logical operations. Goertzel is like, "well, logical operations are just a rather rigid and unsatisfying form of thought; maybe you get quantum from masses of Mind operations." As far as crackpot theories go it seems okay, and it superficially looks like what you're trying to do, in a much more technical way, by unifying physics and experience.

Anyway, I wish you good luck on your journey.

(I apologize if this comment is unclear; I am highly distracted.)