Vladimir_Slepnev comments on Ethical Injunctions - Less Wrong

Post author: Eliezer_Yudkowsky 20 October 2008 11:00PM

Comment author: Vladimir_Slepnev 21 October 2008 12:31:02PM

So AIs are dangerous because they're blind optimization processes; evolution is cruel because it's a blind optimization process... and still Eliezer wants to build an optimizer-based AI. Why? We human beings are not optimizers or outcome pumps. We are a layered cake of instincts, and it is precisely this layering that allows us to be moral and kind.

No idea what I'm talking about, but the "subsumption architecture" papers seem much more promising to me - a more gradual, less dangerous, more incrementally effective path to creating friendly intelligent beings. I hope something like this will be Eliezer's next epiphany: the possibility of non-optimizer-based high intelligence, and its greater robustness compared to paperclip bombs.
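To make the contrast concrete, here is a minimal sketch of the idea being gestured at - my own illustration, not anything from Brooks' papers or from this thread. It shows fixed-priority behavior layers where a higher layer that fires suppresses ("subsumes") everything beneath it, and crucially there is no global objective being optimized anywhere. It is highly simplified: real subsumption architectures run layers concurrently with suppression and inhibition links, not in a sequential loop, and all the names and behaviors below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Percept:
    """Hypothetical sensor readings for a toy robot."""
    obstacle_ahead: bool
    at_goal: bool


def avoid(p: Percept):
    """Lowest layer: reflexively turn away from obstacles."""
    return "turn_left" if p.obstacle_ahead else None


def wander(p: Percept):
    """Middle layer: default exploratory motion, always fires."""
    return "move_forward"


def halt_at_goal(p: Percept):
    """Highest layer: stop once the goal is reached."""
    return "stop" if p.at_goal else None


# Layers ordered from highest priority to lowest.
LAYERS = [halt_at_goal, avoid, wander]


def act(p: Percept) -> str:
    # The first (highest-priority) layer that produces an action
    # suppresses every layer below it. No utility is computed; each
    # layer is just a local stimulus-response rule.
    for layer in LAYERS:
        action = layer(p)
        if action is not None:
            return action
    return "idle"


print(act(Percept(obstacle_ahead=True, at_goal=False)))  # turn_left
print(act(Percept(obstacle_ahead=False, at_goal=True)))  # stop
```

The design point this is meant to illustrate: the system's overall behavior emerges from the layering itself, so there is no single objective function for a blind optimization process to push to an extreme.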