orthonormal comments on Other Existential Risks - Less Wrong

32 points · Post author: multifoliaterose · 17 August 2010 09:24PM




Comment author: orthonormal · 21 August 2010 07:19:40AM · 4 points

On the issue of AI timelines:

A quantitative analysis of the sort you seek is really not possible for the specifics of future technological development. If we knew exactly what obstacles stood in the way, we'd be all but there. Hence the reliance instead on antipredictions and disjunctions, which leave a lot of uncertainty but can still point strongly in one direction.

My own reasoning behind an "AI in the next few decades" position is that, even if every other approach people have thought of and will think of bogs down, there's always the ability to simulate a human brain, and the only obstacles there are scanning technology and computing power. In those domains, it's rather less controversial to predict further advances (well within the theoretical limits).

Any form of cognitive enhancement (even just uploaded brains running faster than embodied brains, not to mention increasing memory or cognitive abilities) makes AI development easier and easier, and could enter a runaway state on its own.

Secondly, please don't cite Tim Tyler as a source if you're going to hold SIAI responsible for the argument. He's a technophile who counts himself a fellow-traveler, but he definitely doesn't speak for them on such issues.

Comment author: timtyler · 21 August 2010 07:27:46AM · 1 point

please don't cite Tim Tyler as a source if you're going to hold SIAI responsible for the argument

Surely the poster wasn't doing that!

Comment author: multifoliaterose · 21 August 2010 07:27:53AM · 0 points

Secondly, please don't cite Tim Tyler as a source if you're going to hold SIAI responsible for the argument. He's a technophile who counts himself a fellow-traveler, but he definitely doesn't speak for them on such issues.

I was not citing Tim Tyler as a source for SIAI's views; I was addressing his argument as one of many in favor of a short-term focus on AI.

Is there something you would suggest I do to make this clearer in the top-level post?