Lightwave comments on A Little Puzzle about Termination - Less Wrong

2 [deleted] 07 February 2013 03:07PM




Comment author: Lightwave 04 February 2013 09:44:55AM, 2 points

On the other hand, given that humans (especially on LW) do analyze things on several meta levels, it seems possible to program an AI to do the same, and in fact many discussions of AI assume this (e.g. discussing whether the AI will suspect it's trapped in some simulation). It's an interesting question how intelligent an AI can get without having the need (or ability) to go meta.

Comment author: [deleted] 04 February 2013 02:53:19PM, 1 point

Also true. Indeed, this puzzle is all about resolving confusion between the object level and the meta level(s); hopefully no one here at LW endorses the view that a (sufficiently well programmed) AI is incapable of going meta, so to speak.

Comment author: pinyaka 04 February 2013 07:50:21PM, 0 points

I wonder how one would calculate what level of meta-knowledge about a completeness condition is necessary for a given priority task.