Perplexed comments on Less Wrong: Open Thread, September 2010 - Less Wrong

3 Post author: matt 01 September 2010 01:40AM


Comment author: Perplexed 09 September 2010 02:42:07AM 3 points [-]

I cannot see any explanation for your misinterpretation other than willful ignorance.

I, on the other hand, would trace the source of my "misinterpretation" to LucasSloane's answer to this comment by me.

I include Eliezer's vehement assurances that Robin Hanson (like me) is misinterpreting. (Sibling comment to your own.) But it is completely obvious that Eliezer wishes to create his FAI quickly and secretly expressly because he does not wish to have to deal with the contemporaneous volition of mankind. He simply cannot trust mankind to do the right thing, so he will do it for them.

I'm pretty sure I am not misinterpreting. If someone here is misinterpreting by force of will, it may be you.

Comment author: DSimon 09 September 2010 02:54:46AM *  4 points [-]

Eliezer's motivations aside, I'm confused about the part where you say that one can pay to get influence over CEV/FAI. The payment is for the privilege to argue about what sort of world a CEV-based AI would create. You don't have to pay to discuss (and presumably, if you have something really insightful to contribute, influence) the implementation of CEV itself.

Comment author: Perplexed 09 September 2010 03:49:50AM 1 point [-]

Upon closer reading, I notice that you are trying to draw a clear distinction between the implementation of CEV and the kind of world CEV produces. I had been thinking that the implementation would have a big influence on the kind of world.

But you may be assuming that the world created by the FAI, under the guidance of the volition of mankind, really depends on that volition and not on the programming fine print that implements "coherent" and "extrapolated". Well, if you think that, and the price tags only buy you the opportunity to speculate on what mankind will actually want, ... well ..., yes, that is another possible interpretation.

Comment author: Nisan 09 September 2010 06:49:26PM *  9 points [-]

Yeah. When I read that pricing schedule, what I see is Eliezer preempting:

  • enthusiastic singularitarians whiling away their hours dreaming about how everyone is going to have a rocketpack after the Singularity;

  • criticism of the form "CEV will do X, which is clearly bad. Therefore CEV is a bad idea." (where X might be "restrict human autonomy"). This kind of criticism comes from people who don't understand that CEV is an attempt to avoid doing any X that is not clearly good.

The CEV document continues to welcome other kinds of criticism, such as the objection that the coherent extrapolated volition of the entire species would be unacceptably worse for an individual than that of 1000 like-minded individuals (Roko says something like this) or a single individual (wedrifid says something like this) -- the psychological unity of mankind notwithstanding.

Comment author: Perplexed 09 September 2010 03:35:00AM 1 point [-]

Look down below the payment schedule (which, of course, was somewhat tongue in cheek) to the Q&A where Eliezer makes clear that the SIAI and their donors will have to make certain decisions based on their own best guesses, simply because they are the ones doing the work.

Comment author: Will_Newsome 09 September 2010 02:29:17PM *  1 point [-]

I'm confused. You seem to be postulating that the SIAI would be willing to sell significant portions of the light cone for paltry sums of money. This means that either the SIAI is the most blatantly stupid organization to ever have existed, or you were a little too incautious with your postulation.

Comment author: Perplexed 09 September 2010 02:46:30PM *  -1 points [-]

... light cone...

wtf?

Whereas you seem to be postulating that in the absence of sums of money, the SIAI has something to sell. Nothing stupid about it. Merely desperate.

Comment author: Will_Newsome 09 September 2010 03:01:12PM *  4 points [-]

SIAI has money. Not a ton of it, but enough that they don't have to sell shares. The AGI programmers would much, much, much sooner extrapolate only their values than accept a small, extremely transitory reward in exchange for their power. Of note is that this completely and entirely goes against all the stated ethics of SIAI. However, I realize that stated ethics don't mean much when that much power is on the line, and it would be silly to assume the opposite.

That said, this stems from your misinterpretation of the CEV document. No one has ever interpreted it the way you did. If that's what Eliezer actually meant, then of course everyone would be freaking out about it. I would be freaking out about it. And rightfully so; such a system would be incredibly unethical. For Eliezer to simply publicly announce that he was open to bribes (or blackmail) would be incredibly stupid. Do you believe that Eliezer would do something that incredibly stupid? If not, then you misinterpreted the text. Which doesn't mean you can't criticize SIAI for other reasons, or speculate about the ulterior motives of the AGI researchers, but it does mean you should acknowledge that you messed up. (The downvotes alone are pretty strong evidence in that regard.)

I will note that I'm very confused by your reaction, and thus admit a strong possibility that I misunderstand you, you misunderstand me, or we misunderstand each other, in which case I doubt the above two paragraphs will help much.

Comment author: Perplexed 09 September 2010 04:02:34PM *  0 points [-]

For Eliezer to simply publicly announce that he was open to bribes (or blackmail) would be incredibly stupid. Do you believe that Eliezer would do something that incredibly stupid?

Of course not. But if he offers access and potentially influence in exchange for money, he is simply doing what all politicians do. What pretty much everyone does.

Eliezer was quite clear that he would do nothing that violates his own moral standards. He was also quite clear (though perhaps joking) that he didn't even want to continue to listen to folks who don't pay their fair share.

Do you believe that Eliezer would do something that incredibly stupid?

Ok, I already gave that question an implicit "No" answer. But I think it also deserves an implicit "Yes". Let me ask you: Do you think Eliezer would ever say anything off-the-cuff which shows a lack of attention to appearances that verges on stupidity?

Comment author: ata 09 September 2010 08:28:57PM *  8 points [-]

Eliezer was quite clear that he would do nothing that violates his own moral standards. He was also quite clear (though perhaps joking) that he didn't even want to continue to listen to folks who don't pay their fair share.

He was quite clear that he didn't want to continue listening to people who thought that arguing about the specific output of CEV, at the object level, was a useful activity, and that he would listen to anyone who could make substantive intellectual contributions to the actual problems at hand, regardless of their donations or lack thereof ("It goes without saying that anyone wishing to point out a problem is welcome to do so. Likewise for talking about the technical side of Friendly AI." — the part right after the last paragraph you quoted...). You are taking a mailing list moderation experiment and blowing it way out of proportion; he was essentially saying "In my experience, this activity is fun, easy, and useless, and it is therefore tempting to do it in place of actually helping; therefore, if you want to take up people's time by doing that on SL4, my privately-operated discussion space that I don't actually have to let you use at all if I don't want to, then you have to agree to do something I do consider useful; if you disagree, then you can do it wherever the hell you want aside from SL4." That's it. Nothing there could be interpreted remotely as selling influence or even access. I've disputed aspects of SIAI's PR, but I don't even think a typical member of the public (with minimal background sufficient to understand the terms used) would read it that way.

Comment author: Will_Newsome 09 September 2010 08:14:14PM 0 points [-]

Of course not. But if he offers access and potentially influence in exchange for money, he is simply doing what all politicians do. What pretty much everyone does.

At this point I'm sure I misunderstood you, such that any quibbles I have left are covered by other commenters. My apologies. Blame it on the oxycodone.

Do you think Eliezer would ever say anything off-the-cuff which shows a lack of attention to appearances that verges on stupidity?

OF COURSE NOT! Haven't you read the Eliezer Yudkowsky Facts post and comments? Yeesh, newcomers these days...

Comment author: Gabriel 09 September 2010 12:52:22PM *  1 point [-]

But it is completely obvious that Eliezer wishes to create his FAI quickly and secretly expressly because he does not wish to have to deal with the contemporaneous volition of mankind.

I'd guess he wants to create FAI quickly because, among other things, ~150000 people are dying each day. And secretly because there are people who would build and run a UFAI without regard for the consequences and therefore sharing knowledge with them is a bad idea. I believe that even if he wanted FAI only to give people optional immortality and not do anything else, he would still want to do it quickly and secretly.

Comment author: Perplexed 09 September 2010 01:00:46PM 0 points [-]

I think your guesses as to the rationalizations that would be offered are right on the mark.

Comment author: jimrandomh 09 September 2010 01:34:02PM 1 point [-]

I'd guess he wants to create FAI quickly because, among other things, ~150000 people are dying each day. And secretly because there are people who would build and run a UFAI without regard for the consequences and therefore sharing knowledge with them is a bad idea.

I think your guesses as to the rationalizations that would be offered are right on the mark.

Putting aside whether this is a rationalization for hidden other reasons, do you think this justification is a valid argument? Do you think it's strong enough? If not, why not? And if so, why should it matter if there are other reasons too?

Comment author: Perplexed 09 September 2010 01:47:09PM -2 points [-]

I think those reasons are transparent bullshit.

A question. What fraction of those ~150000 people per day fall into the category of "people who would build and run a UFAI without regard for the consequences"?

Another question: At what stage of the process does sharing knowledge with these people become a good idea?

... why should it matter if there are other reasons too?

Tell me what those other reasons are, and maybe I can answer you.

Comment author: jimrandomh 09 September 2010 01:59:04PM 1 point [-]

Do you think this justification is wrong because you don't think 1.5*10^5 deaths per day are a huge deal, or because you don't think constructing an FAI in secret is the best way to stop them?

Comment author: Perplexed 09 September 2010 02:07:30PM 0 points [-]

Both. Though actually, I didn't say the justification was wrong. I said it was bullshit. It is offered only to distract oneself and to distract others.

Is it really possible that you don't see this choice of justification as manipulative? Is it possible that being manipulated does not make you angry?

Comment author: Gabriel 09 September 2010 03:13:18PM 3 points [-]

You're discounting the reasoning showing that Eliezer's behavior is consistent with him being a good guy and claiming that it is merely a distraction. You haven't justified those statements -- they are supposed to be "obvious".

What do you think you know and how do you think you know it? You make statements about the real motivations of Eliezer Yudkowsky. Do you know how you have arrived at those beliefs?

Comment author: Perplexed 09 September 2010 03:48:32PM 0 points [-]

You're discounting the reasoning showing that Eliezer's behavior is consistent with him being a good guy

I don't recall seeing any such reasoning.

You make statements about the real motivations of Eliezer Yudkowsky.

Did I? Where? What I am pretty sure I have expressed is that I distrust all self-serving claims about real motivations. Nothing personal - I tend to mistrust all claims of benevolence from powerful individuals, whether they be religious leaders, politicians, or fiction writers. Since Eliezer fits all three categories, he gets some extra scrutiny.

Comment author: katydee 09 September 2010 02:52:34AM 0 points [-]

I do not understand how this reply relates to my remarks on your post. Even if everything you say in this post is true, nobody is paying a thousand dollars to control the future of mankind. I also think that attempting to harness the volition of mankind is almost as far as you can get from attempting to avoid it altogether.

Comment author: Perplexed 09 September 2010 03:20:42AM 1 point [-]

It feels a bit bizarre to be conducting this conversation arguing against your claims that mankind will be consulted, at the same time as I am trying to convince someone else that it will be impossible to keep the scheme secret from mankind.

Look at Robin's comment, Eliezer's response, and the recent conversation flowing from that.

Comment author: katydee 09 September 2010 03:39:46AM *  0 points [-]

You still aren't addressing my main point about the thousand dollars. Also, if you think CEV is somehow designed to avoid consulting mankind, I think there is a fundamental problem with your understanding of CEV. It is, quite literally, a design based on consulting mankind.

Comment author: Perplexed 09 September 2010 04:05:58AM 2 points [-]

Your point about the thousand dollars. Well, in the first place, I didn't say "control". I said "have enormous power over" if your ideals match up with Eliezer's.

In the second place, if you feel that a certain amount of hyperbole for dramatic effect is completely inappropriate in a discussion of this importance, then I will apologize for mine and I will accept your apology for yours.

Comment author: katydee 09 September 2010 04:17:12AM 0 points [-]

Before I agree to anything, what importance is that?

Comment author: Perplexed 09 September 2010 04:22:36AM 1 point [-]

Huh? I didn't ask you to agree to anything.

What importance is what?

I'm sorry if you got the impression I was requesting or demanding an apology. I just said that I would accept one if offered. I really don't think your exaggeration was severe enough to warrant one, though.

Comment author: Perplexed 09 September 2010 04:37:03AM *  3 points [-]

Whoops. I didn't read carefully enough. Me: "a discussion of this importance". You: "What importance is that?" Sorry. Stupid of me.

So. "Importance". Well, the discussion is important because I am badmouthing SIAI and CEV. Yet any realistic assessment of existential risk has to rank uFAI near the top and SIAI is the most prominent organization doing something about it. And FAI, with the F derived from CEV is the existing plan. So wtf am I doing badmouthing CEV, etc.?

The thing is, I agree it is important. So important we can't afford to get it wrong. And I think that any attempt to build an FAI in secret, against the wishes of mankind (because mankind is currently not mature enough to know what is good for it), has the potential to become the most evil thing ever done in mankind's whole sorry history.

That is the importance.

Comment author: katydee 09 September 2010 04:52:44AM *  1 point [-]

I view what you're saying as essentially correct. That being said, I think that any attempt to build an FAI in public also has the potential to become the most evil thing ever done in mankind's whole sorry history, and I view our chances as much better with the Eliezer/Marcello CEV plan.

Comment author: Perplexed 09 September 2010 12:17:01PM 2 points [-]

Yes, building an FAI brings dangers either way. However, building and refining CEV ideology and technology seems like something that can be done in the light of day, and may be fruitful regardless of who it is that eventually builds the first super-AI.

I suppose that the decision-theory work is, in a sense, CEV technology.

More than anything else, what disturbs me here is the attitude of "We know what is best for you - don't worry your silly little heads about this stuff. Trust us. We will let you all give us your opinions once we have 'raised the waterline' a bit."

Comment author: timtyler 09 September 2010 07:57:37AM *  -1 points [-]

I view our chances as much better with the Eliezer/Marcello CEV plan.

Which boils down to "trust us" - as far as I can see. Gollum's triumphant dance springs to mind.

An obvious potential cause of future problems is extreme wealth inequality - since technology seems so good at creating and maintaining wealth inequality. That may result in bloody rebellions - or poverty. The more knowledge secrets there are, the more wealth inequality is likely to result. So, from that perspective, openness is good: it gives power to the people - rather than keeping it isolated in the hands of an elite.

Comment author: timtyler 09 September 2010 09:10:54AM *  -1 points [-]

You seem to be taking CEV seriously - which seems more like a kind of compliment.

My reaction was more like Cypher's:

"Jesus! What a mind job! So: you're here to SAVE THE WORLD. What do you say to something like that?"

Comment author: Perplexed 09 September 2010 12:33:03PM 4 points [-]

You seem to be taking CEV seriously - which seems more like a kind of compliment.

Of course I take it seriously. It is a serious response to a serious problem from a serious person who takes himself entirely too seriously.

And it is probably the exactly wrong solution to the problem.

So: you're here to SAVE THE WORLD. What do you say to something like that?

I would start by asking whether they want to save it like Noah did, or like Ozymandias did, or maybe like Borlaug did. Sure doesn't look like a Borlaug "Give them the tools" kind of save at all.

Comment author: NancyLebovitz 09 September 2010 01:50:09PM *  0 points [-]

It's based on consulting mankind, but the extrapolation aspect means that the result could be something that mankind as it exists when CEV is implemented doesn't want at all.

"I'm doing this to you because it's what I've deduced it's what you really want" is scary stuff.

Maybe CEV will be sensible enough (by my current unextrapolated idea of sensible, of course) to observe the effects of what it's doing and maybe even consult about them, but this isn't inevitable.

Comment author: SilasBarta 09 September 2010 02:06:52PM *  -2 points [-]

"I'm doing this to you because it's what I've deduced it's what you really want" is scary stuff.

At risk of sounding really ignorant or flamebaitish, don't NT women already expect men to treat them like that? E.g. "I'm spending a lot of money on a surprise for our anniversary because I've deduced that is what you really want, despite your repeated protestations that this is not what you want." (among milder examples)

Edit: I stand corrected, see FAWS reply.
Edit2: May I delete this inflammatory, turned-out-uninsightful-anyway comment? I think it provoked someone to vote down my last 12 comments ...

Comment author: NancyLebovitz 09 September 2010 03:48:54PM 3 points [-]

It took me a bit to figure out you meant neurotypical rather than iNtuitive-Thinking.

I think everyone would rather get what they want without having to take the trouble of asking for it clearly. In extreme cases, they don't even want to take the trouble to formulate what they want clearly to themselves.

And, yeah, flamebaitish. I don't know if you've read accounts by women who've been abused by male partners, but one common feature of the men is expecting to automatically get what they want.

It would be interesting to look at whether some behavior which is considered abusive by men is considered annoying but tolerable if it's done by women. Of course the degree of enforcement matters.

Comment author: FAWS 09 September 2010 02:21:08PM *  2 points [-]

Not universally, only (mostly) to the extent that they expect them to actually get it right, and regarding currently existing wants, not what they should want (would want to want if only they were smart enough etc.).

Comment author: SilasBarta 09 September 2010 02:29:39PM 1 point [-]

not what they should want (would want to want if only they were smart enough etc.)

Ah, good point. I stand corrected.