Wei_Dai comments on Mirrors and Paintings - Less Wrong

12 Post author: Eliezer_Yudkowsky 23 August 2008 12:29AM


Comment author: Wei_Dai 15 May 2011 11:35:59PM 10 points [-]

During my short visit to SIAI, I noticed that Eliezer clearly had much higher status than others there, so their relative lack of publicly-visible disagreements with Eliezer may be due to that. (You do realize that the people you listed are all affiliated with SIAI?) Also, Marcello Herreshof did have a significant disagreement with Eliezer about CEV here.

Comment author: wallowinmaya 16 May 2011 09:36:50AM *  0 points [-]

Hm, you're right, I didn't notice that all of them are affiliated with SIAI. But there is probably a reason why Eliezer has high status...

Marcello writes:

So, what do we do if there is more than one basin of attraction a moral reasoner considering all the arguments can land in? What if there are no basins?

Crap. This is really a problem. So, who else disagrees with Eli's CEV? What does e.g. Bostrom think? And does anyone have better proposals? I (and probably many others) would be really interested in the opinions of other "famous lesswrongers" such as Yvain, Alicorn, Kaj Sotala, or you, Wei Dai. See, I have the feeling that in regard to metaethics I have nothing relevant to say due to cognitive limitations. Therefore I have to rely on the opinions of people who have convinced me of their mental superiority in many other areas. I know that such a line of thought can easily be interpreted as conformist sycophancy and can lead to cultish, fanatical behavior, and I usually disdain this kind of reasoning, but in my position it seems to be the best strategy.

Comment author: Wei_Dai 18 May 2011 08:33:07PM *  8 points [-]

What does e.g. Bostrom think?

He hasn't taken a position on CEV, as far as I can tell.

I (and probably many others) would be really interested in the opinions of other "famous lesswrongers" such as Yvain, Alicorn, Kaj Sotala, or you, Wei Dai.

I'm curious enough about this to look up the answers for you, but next time try "Google".

Yvain: Coherent extrapolated volition utilitarianism is especially interesting; it says that instead of using actual preferences, we should use ideal preferences - what your preferences would be if you were smarter and had achieved more reflective equilibrium - and that instead of having to calculate each person's preference individually, we should abstract them into an ideal set of preferences for all human beings. This would be an optimal moral system if it were possible, but the philosophical and computational challenges are immense.

Kaj: Some informal proposals for defining Friendliness do exist. The one that currently seems most promising is called Coherent Extrapolated Volition. In the CEV proposal, an AI will be built (or, to be exact, a proto-AI will be built to program another) to extrapolate what the ultimate desires of all the humans in the world would be if those humans knew everything a superintelligent being could potentially know; could think faster and smarter; were more like they wanted to be (more altruistic, more hard-working, whatever your ideal self is); would have lived with other humans for a longer time; had mainly those parts of themselves taken into account that they wanted to be taken into account. The ultimate desire - the volition - of everyone is extrapolated, with the AI then beginning to direct humanity towards a future where everyone's volitions are fulfilled in the best manner possible. The desirability of the different futures is weighted by the strength of humanity's desire - a smaller group of people with a very intense desire to see something happen may "overrule" a larger group who'd slightly prefer the opposite alternative but doesn't really care all that much either way. Humanity is not instantly "upgraded" to the ideal state, but instead gradually directed towards it.

CEV avoids the problem of its programmers having to define the wanted values exactly, as it draws them directly out of the minds of people. Likewise it avoids the problem of confusing ends with means, as it'll explicitly model society's development and the development of different desires as well. Everybody who thinks their favorite political model happens to objectively be the best in the world for everyone should be happy to implement CEV - if it really turns out that it is the best one in the world, CEV will end up implementing it. (Likewise, if it is best for humanity that an AI stays mostly out of its affairs, that will happen as well.) A perfect implementation of CEV is unbiased in the sense that it will produce the same kind of world regardless of who builds it, and regardless of what their ideology happens to be - assuming the builders are intelligent enough to avoid including their own empirical beliefs (aside from the bare minimum required for the mind to function) in the model, and trust that if those beliefs are correct, the AI will figure them out on its own.

Alicorn: But I'm very dubious about CEV as a solution to fragility of value, and I think there are far more and deeper differences in human moral beliefs and human preferences than any monolithic solution can address. That doesn't mean we can't drastically improve things, though - or at least wind up with something that I like!

See also Criticisms of CEV (request for links).

Comment author: wallowinmaya 19 May 2011 07:47:32AM 0 points [-]

Thanks, this is awesome!

but next time try "Google".

I'm sorry....