shminux comments on Open Thread, May 1-15, 2012 - Less Wrong

Post author: OpenThreadGuy 01 May 2012 04:14AM


Comment author: shminux 02 May 2012 03:59:23PM 1 point

I'd be interested in seeing you play Devil's advocate against your own position and try your best to counter each of the arguments.

Comment author: gRR 02 May 2012 04:35:05PM 3 points

Fair enough :)

Counterarguments:

The rate of appearance of new suffering intelligent agents may be higher than the rate of disappearance of suffering due to optimization efforts.

A significant number of evolved intelligent agents may have directly opposing values.

The power of general intelligence may be greatly exaggerated.

Comment author: Thomas 02 May 2012 04:49:20PM 1 point

The power of general intelligence may be greatly exaggerated.

I rather think that the power of general intelligence is greatly underestimated. Don't misunderestimate!

Comment author: gRR 02 May 2012 06:05:39PM 0 points

The probability of a general intelligence destroying itself through errors of judgement may be large. This would mean that "the power of general intelligence is greatly exaggerated", since a nonexistent intelligence is unable to optimize anything anymore.

Comment author: shminux 02 May 2012 04:49:00PM 0 points

Which side do you find more compelling and why?

Comment author: gRR 02 May 2012 06:02:05PM 0 points

What's your opinion?

Comment author: shminux 02 May 2012 07:43:00PM 1 point

Pleasure/pain is one of the simplest control mechanisms, so it seems probable that it would be discovered by any sufficiently advanced evolutionary process anywhere.

What other mechanisms have you compared it to?

Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away... Generally, it will succeed. (General intelligence = power of general-purpose optimization.)

How do you define "pain" in the general case? How does one define unnecessary pain? Does boredom count as a necessary pain? How far into the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?

Comment author: gRR 02 May 2012 08:01:16PM 0 points

What other mechanisms have you compared it to?

To a lack of any.

How do you define "pain" in a general case?

Sharp negative reinforcement in a behavioristic learning process.

How does one define unnecessary pain?

Useless/inefficient for the necessary learning purposes.

Does boredom counts as a necessary pain?

Depends on the circumstances. When boredom is inevitable and there's nothing I can do about it, I would prefer to be without it.

How far in the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?

Same time range in which my utility function operates.
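The "pain as sharp negative reinforcement" definition above can be made concrete with a toy sketch (my own illustration, not from the thread): an agent choosing between two actions, where one action delivers a sharp negative reward. A minimal value-learning update is enough to make the agent learn to avoid the "painful" action. The reward values and learning rate here are arbitrary assumptions for illustration.

```python
import random

random.seed(0)
q = [0.0, 0.0]   # estimated value of each of two actions
alpha = 0.5      # learning rate
epsilon = 0.1    # exploration rate

def reward(action):
    # Action 1 delivers "pain": a sharp negative reinforcement signal.
    return -10.0 if action == 1 else 0.0

for _ in range(200):
    if random.random() < epsilon:
        a = random.randrange(2)        # occasionally explore at random
    else:
        a = 0 if q[0] >= q[1] else 1   # otherwise exploit current estimates
    # Incremental update toward the observed reward.
    q[a] += alpha * (reward(a) - q[a])

# After learning, the painful action's value estimate is driven negative,
# so the exploit branch avoids it.
print(q)
```

In this behavioristic picture, the pain signal is "necessary" exactly insofar as it steers learning; once the agent reliably avoids the damaging action, further pain delivered for the same lesson would be the "useless/inefficient" kind described above.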

(EDIT: I'm sorry, I should have asked you for your own answers to your questions first. Stupid me.)