Comment author: luzr 16 December 2008 08:36:13AM 0 points [-]

"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"

It is not about what YOU define as right.

Anyway, considering that Eliezer is an existing self-aware, sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, I suppose that some other powerful, strong, sentient, self-aware GI should reach the same point. I also *believe* that more general intelligence makes GIs converge to such "right thinking".

What makes me worry most is building a GAI as a non-sentient utility maximizer. OTOH, I *believe* that 'non-sentient utility maximizer' is mutually exclusive with 'learning strong AGI system'; in other words, any system capable of learning and exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet makes me hope...

Comment author: xxd 27 January 2012 06:20:56PM 1 point [-]

Could reach the same point.

Said Eliezer agent is programmed genetically to value his own genes and those of humanity.

An artificial Eliezer could reach the conclusion that humanity is worth keeping, but is by no means obliged to come to that conclusion. On the contrary, genetics determines that at least some of us humans value the continued existence of humanity.

Comment author: Uni 30 March 2011 08:53:19AM 0 points [-]

I recommend reading this sequence.

Thanks for recommending.

Suffice it to say that you are wrong, and power does not bring with it morality.

I have never assumed that "power brings with it morality" if by power we mean limited power. Some superhuman AI might very well be more immoral than humans are. I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that's what gives you the most happiness. (And, needless to say, you will also find a way to be the one to experience all that happiness.)

Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what that person's initial moral standards were. If you don't think hedonistic utilitarianism (or hedonism) is moral, it's understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal took killing lots of people against their will, for example. But that alone doesn't prove I'm wrong. Much of what humans think to be very wrong is not wrong in all circumstances. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn't understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.

a happy person doesn't hate.

What is your support for this claim?

Observation.

Comment author: xxd 27 January 2012 06:07:11PM *  0 points [-]

This is a cliche and may be false, but it's assumed true: "Power corrupts, and absolute power corrupts absolutely".

I wouldn't want anybody to have absolute power, not even myself. The only use I would want for absolute power would be to stop any evil person from getting it.

To my mind evil = coercion and therefore any human who seeks any kind of coercion over others is evil.

My version of evil is, I believe, the least evil.

EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is that rejecting power is less corrupting - whereas Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.

Comment author: xxd 27 January 2012 05:59:08PM 1 point [-]

Now this is the $64 google-illion question!

I don't agree that the null hypothesis (take the ring and do nothing with it) is evil. My definition of evil is coercion leading to loss of resources, up to and including loss of one's self. Thus absolute evil is loss of one's self across humanity, which includes humanity's extinction as one case (but is not limited to extinction, obviously, because being converted into zimboes isn't technically extinction...)

Nobody can deny that the likes of Gaddafi exist in the human population: those who are interested in being the total boss of others (even though they add no value to the lives of others), to the extent that they are willing to kill to maintain their boss position.

I would define these people as evil, or as having evil intent. I would thus state that under no circumstances would I want somebody like this to grab the ring of power, and so I would be compelled to grab it myself.

The conundrum is that I fit the definition of evil myself. Though I don't seek power to coerce as an end in itself, I would like the power to defend myself against involuntary coercion.

So I see a Gaddafi equivalent go to grab the ring and I beat him to it.

What do I do next?

Well, I can't honestly say that I have the right to kill the millions of Gaddafi equivalents, but I think that on average they add a net negative to the utility of humanity.

I'm left, however, with the nagging suspicion that under certain circumstances Gaddafi-type figures might be beneficial to humanity as a whole. Consider: crowdsourcing the majority of political decisions would probably satisfy the average utility function of humanity. It's fair, though not to everybody. We have almost such a system today (even though it's been usurped by corporations). But in times of crisis, such as during war, it's more efficient to have rapid decisions made by a small group of "experts" combined with those who are willing to make ruthless decisions, so we can't simply kill off the Gaddafis.

What is therefore optimal, in my opinion? I reckon I'd take all the Gaddafis off-planet and put them in simulations, to be recalled only in times of need, and leave sanitized, nice-person zimbo copies of them in their place. Then I would destroy the ring of power and return to my previous life, before I was tempted to torture those who have done me harm in the past.

Comment author: xxd 26 January 2012 10:42:57PM 0 points [-]

Xannon decides how much Zaire gets. Zaire decides how much Yancy gets. Yancy decides how much Xannon gets.

If any pie is left over, they go through the process again with the remainder, ad infinitum, until approximately all of the pie has been eaten.
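
For concreteness, here is a minimal Python sketch of that circular scheme. It assumes, purely as an illustration and not as part of the proposal, that each decider grants their assigned person a fixed fraction of whatever pie remains; the fraction and the stopping threshold are invented parameters.

    # Minimal sketch of the circular allocation described above (illustrative only).
    # Assumption, not part of the proposal: each decider grants the person they are
    # responsible for a fixed fraction of whatever pie remains.

    def split_pie(grant_fraction=0.3, epsilon=1e-6):
        shares = {"Xannon": 0.0, "Yancy": 0.0, "Zaire": 0.0}
        # Xannon decides for Zaire, Zaire decides for Yancy, Yancy decides for Xannon.
        deciders = [("Xannon", "Zaire"), ("Zaire", "Yancy"), ("Yancy", "Xannon")]
        remaining = 1.0  # the whole pie

        # Repeat the round on whatever is left over until (approximately) all is eaten.
        while remaining > epsilon:
            for _decider, receiver in deciders:
                grant = remaining * grant_fraction
                shares[receiver] += grant
                remaining -= grant
        return shares, remaining

    shares, leftover = split_pie()
    print(shares, "leftover:", leftover)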

Comment author: wedrifid 21 June 2010 08:42:35AM 2 points [-]

It's as if people compartmentalize them and think about only one or the other at a time.

Or just disagree with a specific transhumanist moral (or interpretation thereof). If you are growing "too powerful too quickly", the right thing for an FAI (or, for that matter, anyone else) to do is to stop you by any means necessary. A recursively self-improving PhilGoetz with that sort of power and growth rate will be an unfriendly singularity. Cease your expansion or we will kill you before it is too late.

Comment author: xxd 16 December 2011 12:29:09AM 0 points [-]

Although I disagree with your heartbreak position I agree with this.

Comment author: PhilGoetz 06 December 2011 10:16:03PM *  0 points [-]

Phil: an AI who is seeking resources to further its own goals at the expense of everyone else is by definition an unfriendly AI.

The question is whether the PhilGoetz utility function or the average human utility function is better. Assume both are implemented in AIs of equal power. What makes the average human utility function "friendlier"? It would have you outlaw homosexuality and sex before marriage, remove all environmental protection laws, make child abuse and wife abuse legal, take away legal rights from women, give wedgies to smart people, etc.

Now consider this: I'd prefer the average of all human utility function over my maximized utility function even if it means I have less utility.

I don't think you understand utility functions.

Comment author: xxd 16 December 2011 12:27:54AM 0 points [-]

"The question is whether the PhilGoetz utility function, or the average human utility function, are better. "

That is indeed the question. But I think you've framed the question and stacked the deck here with your description of what you believe the average human utility function is, in order to take the moral high ground rather than arguing against my point, which is this:

How do you maximize the preferred utility function for everyone instead of just a small group?

In response to comment by xxd on Building Weirdtopia
Comment author: wedrifid 15 December 2011 06:18:29PM *  0 points [-]

If it's not fear, what is your objection to having your heart broken?

The same objection I have to someone cutting off my little toe. It is painful and means that I'll forever be missing a part of myself. Not a big deal - just a minor to moderate negative outcome.

And how can you possibly take upon yourself the right to decide for everybody else?

You are responding to a straw man again - and I am rather surprised that you have been rewarded for doing so, since it is rather insulting to attribute silly beliefs to people without cause. This is a complete reversal of what Wedrifid_2010 said. He vehemently rejected thepokeduck's proposal that everyone should have their heart broken - because he found the idea of someone deciding that everyone else should have their heart broken abhorrent and presumptuous.

Then, in the very comment you replied to, Wedrifid_2010 said:

Sure, if they are into that sort of thing I don't particularly care.

That is explicitly declaring no inclination toward controlling other people's self-heart-breaking impulses.

Comment author: xxd 15 December 2011 08:26:36PM 0 points [-]

You're deliberately ignoring this comment of yours: "If the superhappys were going to remove our ability to have our hearts broken I wouldn't blow up earth to prevent it."

You are therefore at least slightly in favor of controlling other people, and many would interpret your tongue-in-cheek comment as saying you support it.

In response to comment by xxd on Building Weirdtopia
Comment author: wedrifid 15 December 2011 06:00:31AM 0 points [-]

Wow. I wonder what you are so afraid of... I've had my heart broken multiple times and it's not pleasant to be sure but it's hardly the end of the world.

Nothing in Wedrifid_2010's comment seems to indicate fear. You seem to be replying to a straw man.

Comment author: xxd 15 December 2011 05:53:26PM 0 points [-]

If it's not fear, what is your objection to having your heart broken? And how can you possibly take upon yourself the right to decide for everybody else?

In response to comment by Larks on Building Weirdtopia
Comment author: Leonhart 16 December 2010 03:11:35PM 6 points [-]

Why would this be an improvement? Weirdtopia is not just weird customs, it's weird customs that are still recognisably an improvement. Let's be careful not to dilute the meaning.

Comment author: xxd 15 December 2011 12:02:51AM 2 points [-]

Only people who don't want children can have children, as a way to reduce the population.

Of course this wouldn't be required in a post-scarcity environment, but as a plausible weirdtopia...

Comment author: RomanDavis 17 December 2010 12:46:57PM *  3 points [-]

I do think I have a richer human experience because of the fights I've been in, and the heartbreak I've felt.

I'm not sure, though, if it is for everyone. I tend to assume the future should be a place of less coercion, not more.

But I could see the value in a human race that partook of severe injury as an occasional vice in the same way as say, spicy food.

Comment author: xxd 15 December 2011 12:00:17AM 1 point [-]

Yup.
