timtyler comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM




Comment author: timtyler 30 December 2010 08:04:12PM *  2 points [-]

There is also insufficient evidence to conclude that Yudkowsky, or someone within the SIAI, is smart enough to be able to tackle the problem of friendliness mathematically.

The short-term goal seems more modest - prove that self-improving agents can have stable goal structures.

If true, that would be fascinating - and important. I don't know what the chances of success are, but Yudkowsky's pitch is along the lines of: look, this stuff is pretty important, and we are spending less on it than we do on testing lipstick.

That's a pitch that is hard to argue with, IMO. Machine intelligence research does seem important and currently underfunded. Yudkowsky is - IMHO - a pretty smart fellow. If he will work on the problem for $80K a year (or whatever), it seems as though there is a reasonable case for letting him get on with it.