timtyler comments on Why AI may not foom - Less Wrong

23 Post author: John_Maxwell_IV 24 March 2013 08:11AM


Comment author: timtyler 25 March 2013 12:50:57AM

A flurry of insights that either dies out or expands exponentially doesn't seem like a very good description of how human minds work, and I don't think it would describe an AGI well either.

That's how technological evolution works, though. If you're in olden-days Tasmania, cut off from a larger population, you get devolution; otherwise, you get progress. There's a threshold effect involved. We have to reproduce the progress seen in cultural evolution - not just make a mind.