JoshuaZ comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: JoshuaZ 15 May 2012 01:10:12AM 0 points [-]

I have no strong intuition about whether this is true or not, but I do intuit that if it's true, the value of "sufficiently" for which it's true is so high that it would be nearly impossible to achieve accidentally.

I'm not sure. The analogy might be similar to how a sufficiently complicated process is extremely likely to be able to model a Turing machine. And in this sort of context, extremely simple systems do end up being Turing complete, such as the Game of Life. As a rough rule of thumb from a programming perspective, once a language or scripting system has more than minimal capabilities, it will almost certainly be Turing equivalent.
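To give a sense of how little machinery the Game of Life needs, here is a minimal sketch of its update rule (the `step` function and the blinker pattern are my own illustration, not anything from the thread); the entire Turing-complete system is just this neighbour count:

```python
# One generation of Conway's Game of Life on a sparse set of live
# (x, y) cells. The full ruleset (B3/S23) fits in a few lines, yet
# the system is known to be Turing complete.
from collections import Counter

def step(live):
    """Apply one Life step: a cell is alive next generation iff it has
    exactly 3 live neighbours, or 2 live neighbours and is alive now."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}          # horizontal bar
print(step(blinker))                         # flips to a vertical bar
print(step(step(blinker)) == blinker)        # period-2 oscillator: True
```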

I don't know how good an analogy this is, but if it is a good one, then maybe one should conclude the exact opposite of your intuition.

Comment author: [deleted] 15 May 2012 06:33:13PM 2 points [-]

A language can be Turing-complete while still being so impractical that writing a program to solve a certain problem will seldom be any easier than solving the problem yourself (exhibits A and B). In fact, I'd guess that the vast majority of languages in the space of all possible Turing-complete languages are like that.

(Too bad that a human's “easier” isn't the same as a superhuman AGI's “easier”.)
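As a concrete illustration of such a "Turing tarpit" (using Brainfuck here purely as a well-known example; I'm not assuming it's one of the commenter's exhibits), here is a sketch of an interpreter for it, followed by what even a trivial program looks like:

```python
def bf(code, inp=""):
    """Interpret Brainfuck: 8 commands, a byte tape, no abstractions.
    Turing-complete, yet programming in it is almost pure drudgery."""
    tape, ptr, pc, out, it = [0] * 30000, 0, 0, [], iter(inp)
    jump, stack = {}, []
    for i, c in enumerate(code):            # pre-match the [ ] brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">":   ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(next(it, "\0"))
        elif c == "[" and tape[ptr] == 0: pc = jump[pc]
        elif c == "]" and tape[ptr] != 0: pc = jump[pc]
        pc += 1
    return "".join(out)

# Computing 8 * 8 + 1 = 65 and printing the character "A" already
# requires this much ceremony:
print(bf("++++++++[>++++++++<-]>+."))   # prints "A"
```

The point of the sketch: the interpreter is tiny and the language is provably universal, but the gap between "can in principle express any computation" and "is a practical medium for expressing one" is enormous.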