Will_Newsome comments on The AI design space near the FAI [draft] - Less Wrong

Post author: Dmytry 18 March 2012 10:29AM




Comment author: Will_Newsome 29 March 2012 12:46:46AM 5 points

I agree with everything you've written as far as my modal hypothesis goes, but I also think we're going to lose in that case, so I've sort of renormalized to focus my attention at least somewhat more on worlds where, for some reason, academic/industry AI approaches don't work, even if that requires some sort of deus ex machina. My intuition says that highly recursive narrow-AI-style techniques should give you AGI, but to some extent this goes against e.g. the position of many philosophers of mind, and in this case I hope they're right. Trying to imagine intermediate scenarios led me to think about this kind of thing.

It would of course be incredibly foolish to entirely write off worlds where AGI is relatively easy, but I also think we should think about cases where, for whatever reason, that isn't so; and if it's not, then SingInst is in a uniquely good position to build uFAI.

Comment author: J_Taylor 01 April 2012 11:32:24PM 2 points

"I've sort of renormalized to focus my attention at least somewhat more on worlds where for some reason academic/industry AI approaches don't work, even if that requires some sort of deus ex machina."

I apologize for asking, but I just want to clarify something. When you write 'deus ex machina', you're not using the term in a purely metaphorical way, are you? Because if you mean what it sounds like you mean, at least some of your public positions suddenly make a lot more sense.

Comment author: Will_Newsome 02 April 2012 01:29:35AM 2 points

Yes, a literal deus ex machina is one scenario I find plausible.