Vladimir_Nesov comments on Evaluating the feasibility of SI's plan - Less Wrong

25 Post author: JoshuaFox 10 January 2013 08:17AM


Comment author: Vladimir_Nesov 12 January 2013 01:39:53PM 0 points

Your argument depends on the relative size of the "success" region that random stumbling needs to end up in, and on that region's ability to attract the corrections. If "success" is something like "consequentialism", I agree that intermediate errors might "correct" themselves (in some kind of selection process), and the program ends up as an agent. But if it's "consequentialism with the specific goal H", there doesn't seem to be any reason for the (partially) random stumbling to end up with goal H rather than some other goal G.
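The asymmetry between these two attractors can be illustrated with a toy selection model (my own construction for illustration, not anything from the discussion): selection pressure rewards agency, so agency climbs reliably, while the goal attribute is selectively neutral and simply drifts, ending up at an arbitrary G rather than any designated H.

```python
import random

def evolve(seed, generations=200, pop=50):
    """Toy selection process: fitness rewards agency only.

    Each agent is a (agency, goal) pair. Mutation perturbs both,
    but selection sees only agency; the goal (an integer 0..9,
    standing in for H, G, ...) drifts at random.
    """
    rng = random.Random(seed)
    agents = [(0.0, rng.randrange(10)) for _ in range(pop)]
    for _ in range(generations):
        # Mutate: agency drifts upward under selection's gaze;
        # the goal occasionally remutates to a random value.
        agents = [(min(1.0, a + rng.uniform(-0.02, 0.05)),
                   g if rng.random() > 0.1 else rng.randrange(10))
                  for a, g in agents]
        # Select: keep the more agentic half, duplicate it.
        agents.sort(key=lambda ag: ag[0], reverse=True)
        agents = agents[:pop // 2] * 2
    return agents[0]  # most agentic survivor: (agency, goal)

# Agency reliably reaches its ceiling across seeds...
assert all(evolve(s)[0] > 0.9 for s in range(5))
# ...but the surviving goal varies from run to run: the process
# converged on "be an agent", not on any particular goal H.
final_goals = {evolve(s)[1] for s in range(20)}
assert len(final_goals) > 1
```

The point of the sketch is only that "ends up an agent" is a large basin of attraction under this kind of pressure, while "ends up an agent with goal H" is one small cell among many equally reachable ones.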

(Learning what its intended purpose was doesn't seem different from learning what the mass of the Moon is: the knowledge doesn't automatically have the power to direct the agent's motivations towards that intended purpose, unless, for example, the property of moving towards the original intended purpose is somehow preserved across all the self-modifications - which does sound like a victory condition.)

Comment author: timtyler 12 January 2013 02:24:26PM 0 points

I am not sure you can legitimately characterise the efforts of an intelligent agent as being "random stumbling".

Anyway, I was pointing out a flaw in the reasoning that supported a small probability of failure under the described circumstances. Some other argument might support that conclusion, but the original argument would still be wrong.

Approaches other than trying to develop a deterministic self-improving system with a stable goal from the beginning - including messy ones like neural networks - might still result in a stable self-improving system with a desirable goal.

A good job too. After all, those are our current circumstances. Complex messy systems like Google and hedge funds are growing towards machine intelligence - while trying to preserve what they value in the process.