JoshuaZ comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: JoshuaZ 14 May 2012 03:44:03PM 15 points [-]

The largest concern from reading this isn't really what it brings up in a management context, but what it says about SI in general. Here is an area where there's real expertise and basic books that discuss well-understood methods, and they didn't use any of that. Given that, how likely should I think it is that, when SI and mainstream AI people disagree, part of the problem may be the SI people not paying attention to basics?

Comment author: TheOtherDave 14 May 2012 04:17:42PM 4 points [-]

(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there's lots of evidence available as to how effective they are.

Comment author: ciphergoth 21 May 2012 06:06:19PM 1 point [-]

I doubt there's all that much of a correlation between these things to be honest.

Comment author: private_messaging 16 May 2012 01:43:25PM *  0 points [-]

Precisely. For an example of one existing baseline: the existing software that searches for solutions to engineering problems, such as 'self improvement' via design of better chips. It works within a narrowly defined field to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone's contemporary paperclip maximizer? By how much?

Incredibly relevant to AI risk, but this kind of analysis can't be faked without really having technical expertise.