Blueberry comments on The Social Coprocessor Model - Less Wrong

22 [deleted] 14 May 2010 05:10PM




Comment author: Blueberry 16 May 2010 02:50:20PM 6 points

I mean, if you play them really well, you'll end up with lots of allies and high status among your friends and acquaintances, but what does that matter in the larger scheme of things? And what do you think of the idea that allies and status were much more important in our EEA (i.e., tribal societies) than today, and as a result we are biased to overestimate their importance?

Their importance is a function of our values, which came from the EEA and are not so easily changed. Those values, like wanting friendship, community, relationships, and respect, are a part of what make us human.

I actually don't interpret social interactions as "status and alliance games," which is kind of cynical and seems to miss the point. Instead, I try to recognize that people have certain emotional requirements that need to be met in order to gain their trust, friendship, and attraction, and that typical social interactions are about building that type of trust and connection.

Comment author: Wei_Dai 16 May 2010 05:22:54PM 7 points

Most of what we call values seem to respond to arguments, so they're not really the kind of fixed values that a utility maximizer would have. I would be wary about calling some cognitive feature "values that came from the EEA and are not easily changed". Given the right argument or insight, they probably can be changed.

So, granted that it's human to want friendship, community, etc., I'm still curious whether it's also human to care less about these things after realizing that they boil down to status and alliance games, and that the outcomes of these games don't count for much in the larger scheme of things.

Comment author: Vladimir_M 16 May 2010 08:02:37PM * 5 points

So, granted that it's human to want friendship, community, etc., I'm still curious whether it's also human to care less about these things after realizing that they boil down to status and alliance games, and that the outcomes of these games don't count for much in the larger scheme of things.

Well, is it also human to stop desiring tasty food once you realize that it boils down to super-stimulation of hardware that evolved as a device for impromptu chemical analysis to sort out nutritionally adequate stuff from the rest?

As for the "larger scheme of things," that's one of those emotionally-appealing sweeping arguments that can be applied to literally anything to make it seem pointless and unworthy of effort. Selectively applying it is a common human bias. (In fact, I'd say it's a powerful general technique for producing biased argumentation.)

Comment author: Wei_Dai 17 May 2010 03:26:05AM 5 points

Well, is it also human to stop desiring tasty food once you realize that it boils down to super-stimulation of hardware that evolved as a device for impromptu chemical analysis to sort out nutritionally adequate stuff from the rest?

Not to stop desiring it entirely, but to care less about it than if I didn't realize, yes. (I only have a sample size of one here, namely myself, so I'm curious if others have the same experience.)

As for the "larger scheme of things," that's one of those emotionally-appealing sweeping arguments that can be applied to literally anything to make it seem pointless and unworthy of effort. Selectively applying it is a common human bias. (In fact, I'd say it's a powerful general technique for producing biased argumentation.)

I don't think I'm applying it selectively... we're human and we can only talk about one thing at a time, but other than that I think I do realize that this is a general argument that can be applied to all of our values. It doesn't seem to affect all of them equally though. Some values, such as wanting to be immortal, and wanting to understand the nature of reality, consciousness, etc., seem to survive the argument much better than others. :)

Comment author: Vladimir_M 17 May 2010 05:53:05AM * 3 points

I think I do realize that this is a general argument that can be applied to all of our values. It doesn't seem to affect all of them equally though. Some values, such as wanting to be immortal, and wanting to understand the nature of reality, consciousness, etc., seem to survive the argument much better than others. :)

Honestly, I don't see what you're basing that conclusion on. What, according to you, determines which human values survive that argument and which not?

Comment author: Wei_Dai 18 May 2010 03:46:52PM 2 points

Honestly, I don't see what you're basing that conclusion on.

I'm surprised that you find the conclusion surprising or controversial. (The conclusion being that some values survive the "larger scheme of things" argument much better than others.) I know that you wrote earlier:

As for the "larger scheme of things," that's one of those emotionally-appealing sweeping arguments that can be applied to literally anything to make it seem pointless and unworthy of effort.

but I didn't think those words reflected your actual beliefs (I thought you just weren't paying enough attention to what you were writing). Do you really think that people like me, who do not think that literally everything is pointless and unworthy of effort, have just avoided applying the argument to some of our values?

What, according to you, determines which human values survive that argument and which not?

It seems obvious to me that some values (e.g., avoiding great pain) survive the argument by being hardwired not to respond to any arguments, while others (saving humanity so we can develop an intergalactic civilization, or being the first person in an eventually intergalactic civilization to really understand how decisions are supposed to be made) are grand enough that the "larger scheme of things" argument just doesn't apply. (I'm not totally sure I'm interpreting your question correctly, so let me know if that doesn't answer it.)

Comment author: Vladimir_M 18 May 2010 05:50:25PM * 4 points

Wei_Dai:

Do you really think that people like me, who do not think that literally everything is pointless and unworthy of effort, have just avoided applying the argument to some of our values?

As the only logical possibilities, it's either that, or you have thought about it and concluded that the argument is not applicable to some values. I don't find the reasons for this conclusion obvious, and I do see many selective applications of this argument as a common bias in practice, which is why I asked.

It seems obvious to me that some values (e.g., avoiding great pain) survive the argument by being hardwired not to respond to any arguments, while others (saving humanity so we can develop an intergalactic civilization, or being the first person in an eventually intergalactic civilization to really understand how decisions are supposed to be made) are grand enough that the "larger scheme of things" argument just doesn't apply. (I'm not totally sure I'm interpreting your question correctly, so let me know if that doesn't answer it.)

Yes, that answers my question, thanks. I do have disagreements with your conclusion, but I grant that you are not committing the above mentioned fallacy outright.

In particular, my objections are that: (1) for many people, social isolation and lack of status are in fact a hardwired source of great pain (though this may not apply to you, so there is no disagreement here if you're not making claims about other people), (2) I find the future large-scale developments you speculate about highly unlikely, even assuming technology won't be the limiting factor, and finally (3) even an intergalactic civilization will mean nothing in the "larger scheme of things," assuming the eventual heat death of the universe. But each of these, except perhaps (1), would be a complex topic for a whole other discussion, so I think we can let our disagreements rest now that we've clarified them.

Comment deleted 16 May 2010 05:36:36PM
Comment author: Wei_Dai 18 May 2010 05:56:05PM * 4 points

What makes the desire to obtain high status within some small group a legitimate piece of Godshatter (good), as opposed to a kind of scope insensitivity (bad)? Or to put it another way, why isn't scope insensitivity (the non-linear way that a typical human being values other people's suffering) also considered Godshatter?

Comment deleted 18 May 2010 07:23:49PM
Comment author: Wei_Dai 21 May 2010 04:58:44PM * 2 points

Do we have a general criterion for deciding these things? Or is it still unresolved in general?

I think it's unresolved in general. I brought up scope insensitivity as a counterexample to the "Godshatter" argument, or at least to a strong form of it which says we should keep all of the values that evolution has handed down to us. It seems likely that we shouldn't, but exactly where to draw the line is unclear to me. Still, to me, the desire for high status in some small group seems to be the same kind of "crazy" value as scope insensitivity.

In this specific case, it seems to me that there are many aspects of social interaction that are zero-sum or even negative sum. For the purpose of Coherent Extrapolated Volition, zero sum or negative sum elements are like scope insensitivity, i.e. bad.

I wasn't talking about CEV, I was mainly talking about what you or I should value, now, as individuals. I'm not sure that positive-sum/zero-sum has much to do with that.

Comment author: Vladimir_Nesov 21 May 2010 08:08:27PM * 1 point

Deciding which psychological drives to keep, and which to abandon, is the same as figuring out full formal preference (assuming you have more expressive power than just keeping/abandoning), so there is no heuristic for doing that simpler than full formal preference. This problem isn't just unresolved, it's almost FAI-complete (preference theory, as opposed to efficient implementation).

Comment deleted 21 May 2010 06:10:03PM
Comment author: whpearson 21 May 2010 10:10:03PM 1 point

My guess:

Status should be about gaining allies and mates, correct? Just as charity is about helping people.

Gaining more allies and mates (especially for a male) should be better than gaining fewer, agreed? If so, why do maths professors spend so much time and effort trying to gain status in the small world of maths? They would be better off appealing to the lowest common denominator and using their intellect to wow people in something more accessible.

Comment author: Stille 27 May 2010 10:23:10AM 1 point

The quality of the allies also matters. Having allies that can't help you in your chosen goals is a drain on resources.