shminux comments on Open Thread, May 1-15, 2012 - Less Wrong
I'd be interested in seeing you play Devil's advocate against your own position and try your best to counter each of your arguments.
Fair enough :)
Counterarguments:
The rate of appearance of new suffering intelligent agents may be higher than the rate of disappearance of suffering due to optimization efforts.
A significant number of evolved intelligent agents may have directly opposing values.
The power of general intelligence may be greatly exaggerated.
I rather think that the power of general intelligence is greatly underestimated. Don't misunderestimate!
The probability of a general intelligence destroying itself through errors of judgment may be large. This would amount to "the power of general intelligence is greatly exaggerated": a nonexistent intelligence is unable to optimize anything anymore.
Which side do you find more compelling and why?
What's your opinion?
What other mechanisms have you compared it to?
How do you define "pain" in the general case? How does one define unnecessary pain? Does boredom count as a necessary pain? How far into the future do you have to trace the consequences before deciding that a certain discomfort is unnecessary?
To a lack of any.
Sharp negative reinforcement in a behavioristic learning process.
Useless/inefficient for the necessary learning purposes.
Depends on the circumstances. When boredom is inevitable and there's nothing I can do about it, I would prefer to be without it entirely.
Same time range in which my utility function operates.
(EDIT: I'm sorry, I should have asked you for your own answers to your questions first. Stupid me.)