David_Gerard comments on Less Wrong: Open Thread, December 2010 - Less Wrong Discussion
And I'll start things off with a question I couldn't find a place or an existing post for.
Coherent extrapolated volition. The 2004 paper sets out, in the broadest outlines, what it would be and why we would want it.
Has there been any progress on making this concept more concrete since 2004? How would one work out a CEV, or even a single person's EV? I couldn't find anything.
I'm interested because it's an idea with obvious application even if the intelligence doing the calculation is human.
On the topic of CEV: the Wikipedia article only has primary sources and needs third-party ones.
Have you looked at the paper by Roko and Nick, published this year?
No, I hadn't found that one. Thank you!
Though it still doesn't answer the question - it just states why it's a good idea, not how one would actually do it. There's a suggestion that reflective equilibrium is a good start, but competing ideas to CEV include that as well.
Is there even a little material on how one would actually do CEV? Some "and then a miracle occurs" in the middle is fine for these purposes, we have human intelligences on hand.
Are these two papers really all there is to show so far for the concept of CEV?
It isn't a subject that I would expect anyone from, say, SIAI to actually discuss honestly. Saying sane things about CEV would be a political minefield.
Expand? Are you talking about saying things about the output of CEV, or something else?
Not just the output; the input and means of computation are also potential minefields of moral politics. After all, this touches on what amounts to the ultimate moral question: "If I had ultimate power, how would I decide how to use it?" When you are answering that question in public you must use extreme caution, at least if you have any real intent to gain power.
There are some things that are safe to say about CEV, particularly things on the technical side. But for the most part it is best to avoid giving too many straight answers. I said something on the subject of what can be considered the subproblem ("Do you confess to being consequentialist, even when it sounds nasty?"). Eliezer's responses took a similar position:
When describing CEV mechanisms in detail from the position of someone with more than detached academic interest you are stuck between a rock and a hard place.
On one hand you must signal idealistic egalitarian thinking such that you do not trigger in the average reader those aversive instincts we have for avoiding human tyrants.
On the other hand you must also be aware that other important members of your audience (i.e. many of those likely to fund you) will have a deeper understanding of the practical issues and will see the same description as naive to the point of being outright dangerous and destructive.
I've been transparent about CEV and intend to continue this policy.
Including the part where you claim you wish to run it on the entirety of humanity? Wow, that's... scary. I have no good reason to be confident that I or those I care about would survive such a singularity.
Michael Vassar is usually the voice within SIAI of such concerns. It hasn't been formally written up yet, but besides the Last Judge notion expressed in the original CEV paper, I've also been looking favorably on the notion of giving a binary veto over the whole process, though not detailed control, to a coherent extrapolated superposition of SIAI donors weighted by percentage of income donated (not donations) or some other measure of effort exerted.
And before anyone points it out, yes I realize that this would require a further amendment to the main CEV extrapolation process so that it didn't deliberately try to sneak just over the veto barrier.
Look, people who are carrying the Idiot Ball just don't successfully build AIs that match up to their intentions in the first place. If you think I'm an idiot, worry about me being the first idiot to cross the Idiot Finish Line and fulfill the human species' destiny of instant death, don't worry about my plans going right enough to go wrong in complicated ways.
The thing that spooks me most about CEV (aside from the difficulty of gathering the information about what people really care about, the further difficulty of accurate extrapolation, and some doubts about whether the whole thing can be made coherent) is that it seems to be planned as a thing that will be perfected and then imposed, rather than a system that takes feedback from the people whose lives are theoretically being improved.
Excuse me if this has been an ongoing topic and an aspect of CEV which is at least being considered, but I don't think I've seen this angle brought up.
Won't this incentivize people to lower their income in many situations, since the fraction of their income donated increases even if the total amount decreases?
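To make the objection concrete, here is a small sketch of the two weighting schemes under discussion. All names and numbers are invented for illustration; this reflects no actual policy, just the arithmetic of weighting by fraction-of-income-donated versus absolute donations.

```python
# Hypothetical illustration of the two donor-weighting schemes.
# Donor figures are made up for the example.

def weight_by_donation(donors):
    """Weight each donor by absolute amount donated."""
    total = sum(d["donated"] for d in donors)
    return {d["name"]: d["donated"] / total for d in donors}

def weight_by_fraction(donors):
    """Weight each donor by the fraction of income donated."""
    total = sum(d["donated"] / d["income"] for d in donors)
    return {d["name"]: (d["donated"] / d["income"]) / total for d in donors}

donors = [
    {"name": "A", "income": 100_000, "donated": 10_000},  # gives 10% of income
    {"name": "B", "income": 20_000,  "donated": 5_000},   # gives 25% of income
]

print(weight_by_donation(donors))  # A dominates (10000 vs 5000)
print(weight_by_fraction(donors))  # B dominates (25% vs 10% of income)

# The incentive problem: if A halves their income to 50,000 while still
# donating 10,000, A's fraction jumps from 0.10 to 0.20 and A's weight
# rises, even though the total money donated is unchanged.
```

The fraction-based scheme rewards effort relative to means, which is the stated intent, but as the comment above notes, it also rewards shrinking the denominator.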
Is that what they mean by "getting the inside track on the singularity"? ;-)
Woah there. I remind you that what prompted your first reply here was me supporting you on this particular subject!
My application is so that an organisation can work out not only what people want from it but what they would want from it. This assumes some general intelligences on hand to do the working out, but we have those.
I can sure see that in the fundraising prospectus. "We've been working on something but can't tell you what it is. Trust us, though!"
Let's assume things are better than that and it is possible to talk about CEV. Is anyone from SIAI in the house and working on what CEV means?
Even if it comes out perfect, Hanson will just say that it's based on far-mode thinking and is thus incoherent WRT near values :p
What sort of person would I be if I were getting enough food, sex and sleep (from a secure source) to allow me to stay in far mode all the time? I have no idea.
A happily married (or equivalent) one? I am cosy in domesticity but also have a small child to divert my immediate energies, and I find myself regarding raising her as my important work and everything else as either part of that or amusement. Thankfully it appears raising a child requires less focused effort than ideal-minded parents seem to think (I can't find the study quickly, sorry - anyone?), so this allows me to sit on the couch working or reading stuff while she plays, occasionally tweaking her flow of interesting stuff and occasionally dealing with her coming over and jumping up and down on me.
Well, you should be working on CEV and I shouldn't.
Hence the question ;-)
Bad in bed, for a start. In far mode all the time?
No-one's added sources to the CEV article other than primary ones, so I've merged-and-redirected it to the Friendly AI article. It can of course be un-merged any time.