gRR comments on Open Thread, May 1-15, 2012 - Less Wrong

Post author: OpenThreadGuy 01 May 2012 04:14AM


Comment author: gRR 02 May 2012 03:08:40PM 3 points [-]

Argument for Friendly Universe:

Pleasure/pain is one of the simplest control mechanisms, so it seems probable that it would be discovered by any sufficiently advanced evolutionary process anywhere.

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.

Generally, it will succeed. (General intelligence = power of general-purpose optimization.)

Although in a big universe there would exist worlds where unnecessary suffering does not decrease to zero, that would only happen via a long and ever-lengthening chain of low-probability coincidences. The total measure of those worlds will tend to zero.

Conclusion: the universe (either big or small) generally operates in such a way as to minimize the unnecessary suffering of all sentient beings.

Generalization: the universe (either big or small) generally operates in such a way as to maximize the values of all sentient beings.

Comment author: Viliam_Bur 02 May 2012 05:14:14PM 2 points [-]

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.

Its own pain, probably. Why do you believe it will care about the pain of other beings?

Comment author: gRR 02 May 2012 05:59:56PM 1 point [-]

Cooperation with other intelligent beings is instrumentally useful, unless the pain of others is one's terminal value.

Comment author: Viliam_Bur 03 May 2012 07:18:52AM 1 point [-]

If one being is a thousand times more intelligent than another, such cooperation may be a waste of time.

Comment author: gRR 03 May 2012 10:57:09AM 0 points [-]

Why do you think so? By default, I think their interaction would run like this: the much more intelligent being will easily persuade/trick the other one into doing whatever the first one wants, so they'll cooperate.

Comment author: Viliam_Bur 03 May 2012 03:10:30PM *  1 point [-]

Imagine yourself and a bug. A bug that understands numbers up to one hundred, and is even able to do basic mathematical operations, though in 50% of cases it gets the answer wrong. That's pretty impressive for a bug... but how much value would cooperation with this bug provide to you? For comparison, how much value would you get by removing such bugs from your house, or by driving your car without caring how many bugs you kill by doing so?

You don't have to want to make the bugs suffer. It's enough if they have zero value for you, and you can gain some value by ignoring their pain. (You could also tell them to leave your house, but maybe they have nowhere else to go, or are just too stupid to find a way out, or they always forget and return.)

Now imagine a being with a similar attitude towards humans. Any kind of human thought or work it can do better, and at a lower cost than communicating with us. It does not hate us; it can simply derive some important value by replacing our cities with something else, or by increasing radiation, etc.

(And that's still assuming a rather benevolent being with values similar to ours. More friendly than a hypothetical Mother-Theresa-bot convinced that the most beautiful gift for a human is that they can participate in suffering.)

Comment author: gRR 03 May 2012 04:46:35PM 0 points [-]

Such a scenario is certainly conceivable. On the other hand, bugs do not have general intelligence. So we can only speculate about how interaction between us and much more intelligent aliens would go. By default, I'd say they'd leave us alone. Unless, of course, there's a hyperspace bypass that needs to be built.

Comment author: Matt_Simpson 02 May 2012 07:39:38PM 0 points [-]

The conclusion doesn't follow. Ripping apart your body to use the atoms to construct something terminally useful is also instrumentally useful.

Comment author: gRR 02 May 2012 07:44:51PM 0 points [-]

Only if there's a general lack of atoms around. When atoms are abundant, it's more instrumentally useful to ask me for help constructing whatever you find terminally useful.

Comment author: Matt_Simpson 02 May 2012 07:55:27PM 0 points [-]

Right, but your conclusion still doesn't follow - my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.

Comment author: gRR 02 May 2012 08:09:49PM -1 points [-]

Well, of course. But which of my conclusions do you mean doesn't follow?

Comment author: Matt_Simpson 02 May 2012 08:44:34PM 0 points [-]

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain [of others] away.

Comment author: gRR 02 May 2012 09:02:42PM 0 points [-]

But the "[of others]" part is unnecessary. If every intelligent agent optimizes away their own unnecessary pain, it is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there's a serious lack of atoms for all agents to achieve their otherwise non-conflicting goals.

Comment author: Matt_Simpson 02 May 2012 09:17:15PM 1 point [-]

If every intelligent agent optimizes away their own unnecessary pain, it is sufficient for the conclusion.

This is highly dependent on the strategic structure of the situation.

Comment author: Thomas 02 May 2012 05:35:41PM -1 points [-]

Since I would care, I think other intelligences could care also. One who cares might be enough to free us all from the pain. A billion of those who don't care are not enough to preserve the pain.

Comment author: Grognor 10 May 2012 04:16:07PM 1 point [-]

Do you actually buy this? I don't have the spoons or the time to refute it point by point, but I think it's completely, maybe even obviously and overdeterminedly, wrong, if a somewhat interesting idea.

Comment author: gRR 10 May 2012 08:43:23PM 0 points [-]

I wrote it for novelty value, although it seems to be a defensible position. I can think of counterarguments, and counter-counterarguments, etc. Of course, if you are not interested and/or don't have time, you shouldn't argue about it.

Thanks for the "spoons" link, a great metaphor there.

Comment author: shminux 02 May 2012 03:59:23PM *  1 point [-]

I'd be interested in seeing you play Devil's advocate against your own position and try your best to counter each of the arguments.

Comment author: gRR 02 May 2012 04:35:05PM 3 points [-]

Fair enough :)

Counterarguments:

The rate of appearance of new suffering intelligent agents may be higher than the rate of disappearance of suffering due to optimization efforts.

A significant number of evolved intelligent agents may have directly opposing values.

The power of general intelligence may be greatly exaggerated.

Comment author: Thomas 02 May 2012 04:49:20PM 1 point [-]

The power of general intelligence may be greatly exaggerated.

I rather think that the power of general intelligence is greatly underestimated. Don't misunderestimate!

Comment author: gRR 02 May 2012 06:05:39PM *  0 points [-]

The probability of a general intelligence destroying itself through errors of judgement may be large. This would mean that "the power of general intelligence is greatly exaggerated" - a nonexistent intelligence is unable to optimize anything anymore.

Comment author: shminux 02 May 2012 04:49:00PM 0 points [-]

Which side do you find more compelling and why?

Comment author: gRR 02 May 2012 06:02:05PM 0 points [-]

What's your opinion?

Comment author: shminux 02 May 2012 07:43:00PM 1 point [-]

Pleasure/pain is one of the simplest control mechanism, thus it seems probable that it would be discovered by any sufficiently-advanced evolutionary processes anywhere.

What other mechanisms have you compared it to?

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away... Generally, it will succeed. (General intelligence = power of general-purpose optimization.)

How do you define "pain" in the general case? How does one define unnecessary pain? Does boredom count as necessary pain? How far into the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?

Comment author: gRR 02 May 2012 08:01:16PM *  0 points [-]

What other mechanisms have you compared it to?

To a lack of any.

How do you define "pain" in a general case?

Sharp negative reinforcement in a behavioristic learning process.

How does one define unnecessary pain?

Useless/inefficient for the necessary learning purposes.

Does boredom count as necessary pain?

Depends on the circumstances. When boredom is inevitable and there's nothing I can do about it, I would prefer to be without it.

How far in the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?

Same time range in which my utility function operates.

(EDIT: I'm sorry, I should have asked you for your own answers to your questions first. Stupid me.)