Fly2 comments on No Universally Compelling Arguments - Less Wrong

Post author: Eliezer_Yudkowsky 26 June 2008 08:29AM

Comment author: Fly2 26 June 2008 04:50:16PM 0 points

roko: "Game theory doesn't tell you what you should do, it only tells you how to do it. E.g. in the classic prisoner's dilemma, defection is only an optimal strategy if you've already decided that the right thing to do is to minimize your prison sentence."
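The point roko is making — that game theory only says what is optimal *given* a goal — can be sketched with the classic prisoner's dilemma. The payoff numbers below are an assumption (any sentences with the same ordering work); the sketch just shows that once "minimize my sentence" is taken as the goal, defection is a dominant strategy.

```python
# Toy prisoner's dilemma: payoffs are (my sentence, opponent's sentence)
# in years of prison, lower is better. Specific numbers are illustrative.
SENTENCE = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_response(opponent_move):
    """Return the move minimizing my sentence against a fixed opponent move."""
    return min(["cooperate", "defect"],
               key=lambda my: SENTENCE[(my, opponent_move)][0])

# Defection is dominant: it is the best response to either opponent move.
for opp in ("cooperate", "defect"):
    print(opp, "->", best_response(opp))  # prints "defect" both times
```

The math never decides that sentence-minimization is the *right* goal; swap in a different objective (say, minimizing total years served by both players) and cooperation becomes optimal instead.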

Survival and growth affect the trajectory of a particle in mind space. Some "ethical systems" may act as attractors. Particles interact, clumps interact, higher level behaviors emerge. A super AI might be able to navigate the density substructures of mind space guided by game theory. The "right" decision would be the one that maximizes persistence/growth. (I'm not saying that this would be good for humanity. I'm only suggesting that a theory of non-human ethics is possible.)
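The "ethical systems as attractors" picture can be sketched as a toy dynamical system. The dynamics below are entirely invented for illustration — minds as points on a line drifting downhill on a potential with two basins — and are not a model proposed in the comment.

```python
# Toy illustration (invented dynamics): "minds" as points on a line,
# sliding downhill on the potential V(x) = (x^2 - 1)^2 / 4, which has
# two attractors ("ethical systems") at x = -1 and x = +1.
def step(x, lr=0.1):
    # One gradient-descent step; V'(x) = x * (x^2 - 1).
    grad = x * (x * x - 1)
    return x - lr * grad

def settle(x, iters=200):
    """Follow the trajectory until it settles into a basin."""
    for _ in range(iters):
        x = step(x)
    return x

# Different starting minds converge to one of the two attractors.
print(round(settle(0.3), 3))   # drifts to +1
print(round(settle(-2.0), 3))  # drifts to -1
```

In this cartoon, which attractor a mind ends up in depends only on its starting basin — a crude stand-in for the idea that trajectories in mind space cluster rather than spreading uniformly.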

(Phil Goetz, I wrote the above before reading your comment: "...variation in possible minds, for sufficiently intelligent AIs, is smaller than the variation in human minds." Yes, this is what I was trying to convey by "attractors" and navigation of density substructures in mind space.)