Viliam_Bur comments on Open Thread, May 1-15, 2012 - Less Wrong

7 Post author: OpenThreadGuy 01 May 2012 04:14AM




Comment author: Viliam_Bur 02 May 2012 05:14:14PM 2 points [-]

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.

Its own pain, probably. Why do you believe it will care about the pain of other beings?

Comment author: gRR 02 May 2012 05:59:56PM 1 point [-]

Cooperation with other intelligent beings is instrumentally useful, unless the pain of others is one's terminal value.

Comment author: Viliam_Bur 03 May 2012 07:18:52AM 1 point [-]

If one being is a thousand times more intelligent than another, such cooperation may be a waste of time.

Comment author: gRR 03 May 2012 10:57:09AM 0 points [-]

Why do you think so? By default, I think their interaction would run like this: the much more intelligent being will easily persuade or trick the other into doing whatever the first one wants, so they'll cooperate.

Comment author: Viliam_Bur 03 May 2012 03:10:30PM *  1 point [-]

Imagine yourself and a bug. A bug that understands numbers up to one hundred and can even do basic mathematical operations, though in 50% of cases it gets the answer wrong. That's pretty impressive for a bug... but how much value would cooperation with this bug provide to you? By comparison, how much value would you get by removing such bugs from your house, or by driving your car without caring how many bugs you kill along the way?

You don't have to want to make the bugs suffer. It's enough if they have zero value for you, and you can gain some value by ignoring their pain. (You could also tell them to leave your house, but maybe they have nowhere else to go, or are just too stupid to find a way out, or they always forget and return.)

Now imagine a being with a similar attitude towards humans. It can do any kind of human thought or work better than we can, and at a lower cost than communicating with us. It does not hate us; it can simply derive some important value by replacing our cities with something else, or by increasing radiation, etc.

(And that's still assuming a rather benevolent being with values similar to ours. Friendlier than a hypothetical Mother-Theresa-bot convinced that the most beautiful gift for a human is the chance to participate in suffering.)

Comment author: gRR 03 May 2012 04:46:35PM 0 points [-]

Such a scenario is certainly conceivable. On the other hand, bugs do not have general intelligence, so we can only speculate about how an interaction between us and much more intelligent aliens would go. By default, I'd say they'd leave us alone. Unless, of course, there's a hyperspace bypass that needs to be built.

Comment author: Matt_Simpson 02 May 2012 07:39:38PM 0 points [-]

The conclusion doesn't follow. Ripping apart your body to use the atoms to construct something terminally useful is also instrumentally useful.

Comment author: gRR 02 May 2012 07:44:51PM 0 points [-]

Only if there's a general lack of atoms around. When atoms are abundant, it's more instrumentally useful to ask me for help constructing whatever you find terminally useful.

Comment author: Matt_Simpson 02 May 2012 07:55:27PM 0 points [-]

Right, but your conclusion still doesn't follow - my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.

Comment author: gRR 02 May 2012 08:09:49PM -1 points [-]

Well, of course. But which of my conclusions do you mean doesn't follow?

Comment author: Matt_Simpson 02 May 2012 08:44:34PM 0 points [-]

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain [of others] away.

Comment author: gRR 02 May 2012 09:02:42PM 0 points [-]

But the "[of others]" part is unnecessary. If every intelligent agent optimizes away its own unnecessary pain, that is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there's a serious shortage of atoms for all agents to achieve their otherwise non-conflicting goals.

Comment author: Matt_Simpson 02 May 2012 09:17:15PM 1 point [-]

If every intelligent agent optimizes away their own unnecessary pain, it is sufficient for the conclusion.

This is highly dependent on the strategic structure of the situation.

Comment author: Thomas 02 May 2012 05:35:41PM -1 points [-]

Since I would care, I think other intelligences could care too. One who cares might be enough to rid us all of the pain. A billion who don't care are not enough to preserve it.