fubarobfusco comments on The Value Learning Problem - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (37)
This seems like a distracting example that is likely to trigger a lot of people's political reflexes.
For instance, it may be misread as saying that humans who don't draw that conclusion are somehow broken.
I was just about to post this quote as a well-chosen example: it uses an easily understood analogy to defuse, in one quick sweep, all those arguments that an AI should be smart enough to know what is 'intended' (one might say yudkowskyesk so).
*yudkowskily
I think the word Gunnar was going for was "Yudkowskyesquely", unfortunately.
Such humans are Natural_Selection!Broken, but the point is that that's not Human!Broken.
Do you have a better example of what the algorithm feels like from the inside? (Pointing out that an example could be problematic seems less useful than supplying a fix also.)
Well, how about just generalizing it away from the politically pointy example?
Although just as the original example is prone to political criticism, this one may be prone to the critique that, as a matter of fact, in the generations after the discovery of evolution, quite a few humans did attempt to adopt their interpretation of evolution's goals as the proper goals of humanity.
The sex example is more concrete. This new one blurs the point.
Your revised example is just as prone to that, isn't it?
Which makes me guess (I know, guessing is rude, sorry) that this isn't your real objection, and you're just reacting to the keyword "contraceptives".
It doesn't look to me as if fubarobfusco's example is as prone to that problem.
With the original example:
With fubarobfusco's modified version:
Perhaps fubarobfusco is "reacting to the keyword 'contraceptives'" in the following sense: he sees that word, recognizes that there is a whole lot of political/religious controversy around it, and feels that it would be best avoided. I'm not sure there's anything wrong with that.
Hm. Yeah, point taken, though I'd probably have to be American to be able to take this seriously on a gut level.
Still, the original example was clearer. It had a clear opposition: Bad according to genes, Good according to humans (even if not all of them). The modified example would lose that, as people generally do leave, and want to leave, descendants. It doesn't convey that sense of a sharp break with the "original intention".
I can't seem to think of an equally strong example that would be less likely to be objectionable...
Yes, but that's because most human beings interpret normative force as being a command coming from an authority figure, and vice-versa. Let them hallucinate an authority figure and they'll think there's reason to do what It says.