katydee comments on Less Wrong: Open Thread, September 2010 - Less Wrong
You still aren't addressing my main point about the thousand dollars. Also, if you think CEV is somehow designed to avoid consulting mankind, I think there is a fundamental problem with your understanding of CEV. It is, quite literally, a design based on consulting mankind.
Your point about the thousand dollars. Well, in the first place, I didn't say "control". I said "have enormous power over" if your ideals match up with Eliezer's.
In the second place, if you feel that a certain amount of hyperbole for dramatic effect is completely inappropriate in a discussion of this importance, then I will apologize for mine and I will accept your apology for yours.
Before I agree to anything, what importance is that?
Huh? I didn't ask you to agree to anything.
What importance is what?
I'm sorry if you got the impression I was requesting or demanding an apology. I just said that I would accept one if offered. I really don't think your exaggeration was severe enough to warrant one, though.
Whoops. I didn't read carefully enough. Me: "a discussion of this importance". You: "What importance is that?" Sorry. Stupid of me.
So. "Importance". Well, the discussion is important because I am badmouthing SIAI and CEV. Yet any realistic assessment of existential risk has to rank uFAI near the top and SIAI is the most prominent organization doing something about it. And FAI, with the F derived from CEV is the existing plan. So wtf am I doing badmouthing CEV, etc.?
The thing is, I agree it is important. So important we can't afford to get it wrong. And I think that any attempt to build an FAI in secret, against the wishes of mankind (because mankind is currently not mature enough to know what is good for it), has the potential to become the most evil thing ever done in mankind's whole sorry history.
That is the importance.
I view what you're saying as essentially correct. That being said, I think that any attempt to build an FAI in public also has the potential to become the most evil thing ever done in mankind's whole sorry history, and I view our chances as much better with the Eliezer/Marcello CEV plan.
Yes, building an FAI brings dangers either way. However, building and refining CEV ideology and technology seems like something that can be done in the light of day, and may be fruitful regardless of who it is that eventually builds the first super-AI.
I suppose that the decision-theory work is, in a sense, CEV technology.
More than anything else, what disturbs me here is the attitude of "We know what is best for you - don't worry your silly little heads about this stuff. Trust us. We will let you all give us your opinions once we have 'raised the waterline' a bit."
Suppose FAI development reaches a point where it probably works and would be powerful, but can't be turned on just yet because the developers haven't finished verifying its friendliness and building safeguards. If it were public, someone might decide to copy the unfinished, unsafe version and turn it on anyway. They might do so because they want to influence its goal function to favor themselves, for example.
Allowing people who are too stupid to handle AGIs safely to have the source code to one that works destroys the world. And I just don't see a viable strategy for creating an AGI while working in public without a very large chance of that happening.
With near certainty. I know I would. I haven't seen anyone propose a sane goal function just yet.
So, doesn't it seem to anyone else that our priority here ought to be to strive for consensus on goals, so that we at least come to understand better just what obstacles stand in the way of achieving consensus?
And also to get a better feel for whether having one's own volition overruled by the coherent extrapolated volition of mankind is something one really wants.
To my mind, the really important question is whether we have one-big-AI which we hope is friendly, or an ecosystem of less powerful AIs and humans cooperating and competing under some kind of constitution. I think that the latter is the obvious way to go. And I just don't trust anyone pushing for the first option - particularly when they want to be the one who defines "friendly".
Hopefully, having posted this publicly means you'll never get the opportunity.
Upvoted because this is exactly the kind of thinking which needs to be deconstructed and analyzed here.
Which boils down to "trust us" - as far as I can see. Gollum's triumphant dance springs to mind.
An obvious potential cause of future problems is extreme wealth inequality - since technology seems so good at creating and maintaining wealth inequality. That may result in bloody rebellions - or poverty. The more knowledge is kept secret, the more wealth inequality is likely to result. So, from that perspective, openness is good: it gives power to the people - rather than keeping it isolated in the hands of an elite.
Couldn't agree more (for once).
You seem to be taking CEV seriously - which seems like a kind of compliment.
My reaction was more like Cypher's:
"Jesus! What a mind job! So: you're here to SAVE THE WORLD. What do you say to something like that?"
Of course I take it seriously. It is a serious response to a serious problem from a serious person who takes himself entirely too seriously.
And it is probably the exactly wrong solution to the problem.
I would start by asking whether they want to save it like Noah did, or like Ozymandias did, or maybe like Borlaug did. Sure doesn't look like a Borlaug "Give them the tools" kind of save at all.
It's based on consulting mankind, but the extrapolation aspect means that the result could be something that mankind as it exists when CEV is implemented doesn't want at all.
"I'm doing this to you because it's what I've deduced it's what you really want" is scary stuff.
Maybe CEV will be sensible enough (by my current unextrapolated idea of sensible, of course) to observe the effects of what it's doing and maybe even consult about them, but this isn't inevitable.
At risk of sounding really ignorant or flamebaitish, don't NT women already expect men to treat them like that? E.g. "I'm spending a lot of money on a surprise for our anniversary because I've deduced that is what you really want, despite your repeated protestations that this is not what you want." (among milder examples)
Edit: I stand corrected, see FAWS reply.
Edit2: May I delete this inflammatory, turned-out-uninsightful-anyway comment? I think it provoked someone to vote down my last 12 comments ...
It took me a bit to figure out you meant neurotypical rather than iNtuitive-Thinking.
I think everyone would rather get what they want without having to take the trouble of asking for it clearly. In extreme cases, they don't even want to take the trouble to formulate what they want clearly to themselves.
And, yeah, flamebaitish. I don't know if you've read accounts by women who've been abused by male partners, but one common feature of the men is expecting to automatically get what they want.
It would be interesting to look at whether some behavior which is considered abusive if done by men is considered annoying but tolerable if done by women. Of course the degree of enforcement matters.
Not universally - only (mostly) to the extent that they expect them to actually get it right, and only regarding currently existing wants, not what they should want (i.e., would want to want if only they were smart enough, etc.).
Ah, good point. I stand corrected.