Aurini comments on Exterminating life is rational - Less Wrong
Honestly, isn't this nitpicking? It's true that Lord Azathoth stopped selecting for genes in our species ten thousand years ago, but when that game stopped working for him he switched to making our memes compete against each other (in any sane world we'd be having this conversation in Chinese, and my mother's 'Scottish' surname wouldn't be Nordic).
You're absolutely right, and he did simplify this portion, but it doesn't undermine the weight of his argument any more than my saying "I'm not sexist, I'm a fully evolved male!" is rendered irrelevant by the fact that current social mores have little to nothing to do with evolutionary biology.
It's one thing to correct Phil's statement, or offer a suggested rewording that would improve the strength of the point he was trying to make, but it feels as if you're pinpointing this one poor choice of wording and using it to imply that the entire premise is flawed.
Argumentum ad evolutionum is both common enough and horribly wrong enough that I would not call it "nitpicking." The claim that unselfish agents will be outcompeted by selfish agents is complex, context-dependent, and requires support. The idea that there will somehow be an equilibrium in which unselfish agents get crowded out seems absurd, and this is what "evolution" seems intended to evoke, because evolution is (in significant part) about competitively crowding out the sub-optimal.
He also makes a much bigger mistake, and I should have addressed that in greater detail. Utility curves are arational, and the term "selfish" gets confused way more than it should. It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I'm selfish; I don't care about what other people want or think. If my actual utility curve involves other people's utility, or it involves maximizing the number of paper clips in existence, there is absolutely no reason to believe I could better accomplish my goals if I were "selfish" by this definition.
Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind "Rational agents are/are not selfish" is a type error; selfishness is entirely orthogonal to rationality.
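To make that orthogonality concrete, here is a minimal sketch (a toy example of my own; the action names, outcomes, and numbers are all made up, not anything from Phil's post). The expected-utility machinery is identical for both agents; only the utility function plugged into it differs:

```python
def choose(actions, outcomes_of, utility):
    """Return the action with the highest expected utility under `utility`."""
    def expected_utility(action):
        return sum(p * utility(outcome) for outcome, p in outcomes_of(action))
    return max(actions, key=expected_utility)

# Toy outcome model: each action leads to one outcome with certainty.
outcomes = {
    "make_paperclips": [({"my_pleasure": 0, "paperclips": 100}, 1.0)],
    "throw_a_party":   [({"my_pleasure": 10, "paperclips": 0}, 1.0)],
}

hedonist = lambda o: o["my_pleasure"]   # "selfish" in the hedonistic sense
clipper  = lambda o: o["paperclips"]    # a paperclip maximizer

print(choose(outcomes, outcomes.get, hedonist))  # throw_a_party
print(choose(outcomes, outcomes.get, clipper))   # make_paperclips
```

Both agents are equally "rational" in the decision-theoretic sense; the hedonist and the paperclip maximizer differ only in what they count as utility.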
Instead of trying to interpret the context, you should believe that I mean what I say literally. I repeat:
In fact, I have already explained my usage of the word "selfish" to you in this same context, repeatedly, in a different post.
Psychohistorian wrote:
I quote myself again:
If it isn't working, why don't you try something different?
(I deleted that paragraph.)
Do you have an idea for something else to try?
I don't think it's really a necessary distinction; the idea of an unselfish utility maximizer doesn't quite make sense, because utility is defined so nebulously that pretty much everyone is necessarily maximizing their own utility.
You're right that it doesn't make sense, which is why some people assume I mean something else when I say "selfish". But a lot of commenters do seem to believe in unselfish utility maximizers, which is why I keep using the word.
Avoiding morally charged words. If possible, shy far, far away from ANY pattern that people can automatically match against with system 2, so that system 1 stays engaged.
My article here http://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html is an attempt to do this.
Do you mean "system 1 ... system 2"?
as a section header may have thrown me off there.
That aside, I do understand what you're saying, and I did notice the original 1%-for-1% contrast. Though I'd note it doesn't follow that a rational agent would be willing to take a 1% chance of destroying the universe in exchange for a 1% increase in his utility function; the universe being destroyed would presumably have negative utility, i.e. a loss of greater than 100%, so that's not an even bet.
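Spelling that arithmetic out (a rough sketch with made-up numbers; U and D below are placeholders I chose for illustration, not figures from the post):

```python
U = 100.0            # utility of the status quo (made-up number)
improved = 1.01 * U  # utility after the 1% gain

for D in (0.0, -10_000.0):                   # utility if the universe is destroyed
    eu_gamble = 0.99 * improved + 0.01 * D   # expected utility of taking the bet
    print(f"D={D}: worth taking? {eu_gamble > U} (EU {eu_gamble:.2f} vs {U:.2f})")
```

Even if destruction merely zeroed out utility, the gamble would already lose in expectation here (99.99 vs 100), and any genuinely negative D makes it fail badly; the bet only breaks even if destroying the universe costs about 1% of the status quo, which is not what anyone means by destroying the universe.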
The whole arational point is my mistake; the whole paragraph:
reads very much like it is using selfish in the strict rather than holistic utility sense, and that was what I was focusing on in this response. I was focusing specifically on that section and did not reread the whole post, so I got the wrong idea. My point on evolution remains, and the negative-utility argument still means that the "+1% utility for a 1% chance of destruction" bet fails. But this doesn't matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.
That's why what I wrote in that section was:
You wrote:
I am supposing that. That's why it's in the title of the post. I don't mean that I am certain that is how things will turn out to be. I mean that this post says that rational behavior leads to these consequences. If that means that the only way to avoid the destruction of life is to cultivate a particular bias, then that's the implication.