
MrMind comments on Open thread, 7-14 July 2014 - Less Wrong Discussion

2 Post author: David_Gerard 07 July 2014 07:14AM




Comment author: MrMind 10 July 2014 08:44:24AM 1 point

A difficult question to answer, because many of its elements are not precisely defined. But let's say, for the moment, that 'intellect' is identified with 'universal Turing machine' and 'thinking' with 'running a program'.
Any finite UTM of course faces limits of two kinds: on one side, you cannot 'probe' thoughts too deeply because of constraints on memory, energy, and time; on the other, some thoughts are simply too complex. So no: an AI could never imagine anything that is too complex or too expensive for it to think.
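The resource-bounded picture above can be sketched concretely. Here is a minimal, illustrative Turing-machine simulator with a step budget (all names and the machine encodings are my own, not anything from the comment): a computation that finishes within the budget is a 'thinkable' thought, while one that exhausts the budget is 'too expensive to think'.

```python
def run_tm(transitions, tape, max_steps):
    """Simulate a one-tape Turing machine under a resource bound.

    transitions: dict mapping (state, symbol) -> (new_state, new_symbol, move),
                 where move is -1 (left) or +1 (right).
    tape:        dict mapping cell index -> symbol; missing cells read as '_'.
    max_steps:   the resource budget; return None if it is exhausted.
    """
    state, head = 'start', 0
    for _ in range(max_steps):
        if state == 'halt':
            return tape                  # finished within the budget
        key = (state, tape.get(head, '_'))
        if key not in transitions:
            return tape                  # no applicable rule: implicit halt
        state, symbol, move = transitions[key]
        tape[head] = symbol
        head += move
    return None                          # budget exhausted: 'unthinkable'

# Machine A: write a single '1' and halt -- cheap to 'think'.
halts = {('start', '_'): ('halt', '1', +1)}
# Machine B: write '1's forever -- no finite budget suffices.
loops = {('start', '_'): ('start', '1', +1)}

print(run_tm(halts, {}, 100))   # → {0: '1'}
print(run_tm(loops, {}, 100))   # → None
```

The same machine B would eventually be completed by an unbounded UTM; it is only the finite budget that rules it out, which is the distinction the comment is drawing.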
For us humans the situation is even worse: our brains are not computers, and we are quite capable of imagining incoherent things.