Constant comments on Open Thread: September 2011 - LessWrong

5 Post author: Pavitra 03 September 2011 07:50PM




Comment author: [deleted] 04 September 2011 10:22:20AM 8 points [-]

We don't optimize for well-being, we optimize for what we (think we) want, which are two very different things.

Natural selection does not cease operation. Say, for example, that someone invents a box that fully reproduces, in every respect, the subjective experience of eating and of having eaten by directly stimulating the brain. Dieters would love this device. Such a device implements in extreme form the very danger that you fear: in this case, the specific danger is that you will stop eating and die.

So the question is, will the device wipe out the human race? Almost certainly it will not wipe out the entire human race, simply because there are enough people around who would nevertheless choose to eat despite the availability of the device, possibly because they make a conscious decision to do so. These people will be the survivors, and they will reproduce, and their children will have both their values (transmitted culturally) and their genes, and so will probably be particularly resistant to the device.

That's an extreme case. In the actual case, there are doubtless many people who are not adapting well to technological change. They will tend to die out disproportionately, or at least to reproduce disproportionately less.

We have a model of this future in today's addictive drugs. Some people are more resistant to the lure of addictive drugs than others. Some people's lives are destroyed as they pursue the unnatural bliss of drugs, but many people manage to avoid that fate.

Many people have so far managed the trick of pursuing super stimuli without destroying their lives in the process.

Comment author: Eugine_Nier 07 September 2011 07:46:24AM 19 points [-]

Keep in mind, it's possible to evolve to extinction.

Comment author: smk 07 September 2011 02:36:02PM 0 points [-]

I wish I could upvote that more than once.

Comment author: wedrifid 07 September 2011 02:47:49PM 0 points [-]

The post or the comment? If the former then you just prompted me to vote it up for you. :)

Comment author: Will_Newsome 10 September 2011 05:09:42PM 3 points [-]

Me too. smk, your wish has been granted.

Comment author: [deleted] 12 September 2011 10:46:43PM *  4 points [-]

What struck me about the example in this post is that it's basically genetically equivalent to reliable, easy-to-use contraception.

And now that I think about it, humanity is basically like a giant petri dish where someone dumped some antibiotics. The demographic transition is a temporary affair, a die-off of maladapted genotypes and memeplexes.

Comment author: nerzhin 05 September 2011 04:18:55PM 3 points [-]

It is not at all clear that the people resistant to addictive drugs are reproducing at a higher rate than those who aren't.

Comment author: Kaj_Sotala 04 September 2011 04:31:05PM 3 points [-]

Sure, I don't think humanity is in any danger of being destroyed by conventional technologies, and I'm pretty sure the Singularity will happen - in one form or another - way before then. But there may very well be a lot of suffering on the way.

Comment author: Will_Newsome 10 September 2011 05:11:30PM *  0 points [-]

Have you checked out CFAI? It's like CEV but with less of an emphasis on humans. I really don't like humans and would rather only deal with them via implicit meta-level 'get information about morality from your environment' means, which is more explicit in CFAI than in CEV.

Comment author: Kaj_Sotala 10 September 2011 07:17:52PM 0 points [-]

I've read part of it, though not all. (I'm a bit confused as to how your comment relates to mine.)

Comment author: Will_Newsome 10 September 2011 10:57:47PM *  0 points [-]

CEV takes more of an economic perspective, where agent-extrapolations make deals with each other. The "good" agent-extrapolations might win out in the end (due to having a more-timeless discount rate, say), but there might be a lot of suffering along the way. CFAI, on the other hand, takes a less deal-centric perspective where the AI is more directly supposed to reason everything through from first principles, which can avoid predictably-stupid-in-retrospect agents getting much of the future's pie, so to speak. So I'm more afraid of CEV-like thinking than CFAI-like thinking, even though both are scary, because I am more afraid of humans being evil than I am afraid of not getting what I want. This may or may not overlap at all with your concerns.

(The difference isn't necessarily whether or not they converge on the same policy, it might also be how quickly they converge on that policy. CFAI seems like it'd converge on justifiedness more quickly, but maybe not.)

Comment author: Iabalka 05 September 2011 02:14:51PM 1 point [-]

Are you suggesting we leave everything to natural selection? That doesn't strike me as the rationalists' way.