
Kawoomba comments on Open Thread, March 1-15, 2013 - Less Wrong Discussion

Post author: Jayson_Virissimo 01 March 2013 12:00PM




Comment author: Kawoomba 07 March 2013 09:00:03AM

edit: More relevant reply:

A human researcher could see all of the AI's code and the "pill" (the proposed change), yet even without that element of "chance", predicting what the change would end up doing is not a solved problem.

If the first human-programmed foom-able AI is not yet orders of magnitude smarter than a human (and it's doubtful it would be, given that it's still human-designed), then the AI would have no advantage in understanding its own code that the human researcher wouldn't also have.

If the human researcher cannot yet solve the problem of keeping the utility function stable under modification, why should an AI of similar intelligence be able to? Both have full access to the code base.
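The difficulty can be made concrete with a toy sketch (the functions below are hypothetical illustrations, not from the comment): program equivalence is undecidable in general, so even with full access to both versions of the code, a reviewer checking a proposed self-modification against a finite test suite can miss a change in the effective utility function.

```python
# Hypothetical example: two versions of a "utility" routine that agree on
# every input a reviewer happens to test, yet diverge on an untested case.

def utility_v1(x: int) -> int:
    """Original utility: reward proportional to x, capped at 100."""
    return min(x, 100)

def utility_v2(x: int) -> int:
    """Proposed 'optimized' rewrite -- the 'pill'."""
    if x >= 0:
        return min(x, 100)
    return -x  # subtle divergence: sign flip for negative inputs

# A finite test suite passes even though the functions differ:
for x in [0, 1, 50, 100, 200]:
    assert utility_v1(x) == utility_v2(x)

print(utility_v1(-5), utility_v2(-5))  # -5 vs 5: the change altered the function
```

The point is not that such a bug couldn't be found here (it obviously could), but that no general procedure exists for certifying that an arbitrary rewrite preserves the original function over all inputs.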

Just remember that it's the not-yet-foomed AI that has to deal with these issues, before it can go weeeeeeeeeeeeeeeeKILLHUMANS (foom).