On manipulating others

-4 Jonii 16 June 2013 05:44PM

I recently had a discussion with a friend of mine on the topic of reading others socially: what they want, what they think, where they are going, and so on. During this discussion, I verbalized my intuition on the topic of manipulating others into acting how you think they should act, and what I said had me puzzled for the next few days. So, after much thinking I came to a conclusion, but I want to see what LW thinks of my pondering.

Basically, the idea is that the social clumsiness many very intelligent people face is actually very much self-imposed, a handicap they place upon themselves because they feel iffy about consciously manipulating others as pawns in their grander schemes.

Basically, my reasoning was this: treating other people as pawns in your plan, rather than as actual people, is wrong. You should not strip others of their power to decide for themselves. But say you are more intelligent than others and could, with planning, lead them to do the things you want. This power over others presents you with an unfair advantage, and this unfair advantage presents you with an iffy ethical dilemma. If you can force other people to do what you will, regardless of their initial disposition, aren't you treating them as pawns rather than as autonomous human beings? If you strip them of the power to have their initial disposition affect their decisions, aren't you doing wrong? Of course, it's usually very difficult to get people to do what you want. When two equals are discussing, both may try this and both may fail, and even if one succeeds, it's still considered "fair game" by all parties. But the more easily this manipulation happens, the more of your brain you need to shut down to keep the discussion "fair". At some point, expressing any opinion or leading other people at all starts to seem risky and iffy.

So how do people cope? My theory is this: they stop interacting. Voicing their own opinion, asking other people for things, or even having any goal other than following directions laid out by others becomes off-limits. Doing any of that opens an ugly ethical can of worms of the shape "Should I make them do this?"

So basically, my hypothesis is that the reason intelligent people are so often socially clumsy is that it's a facade, a self-imposed handicap they keep up because evolution has programmed us to feel repulsion towards unfairly manipulating others. Because they could make others do anything, they choose to do nothing. This manifests as being easily led, a kind of "doormat", even lacking a will or ego of their own.

It's simplistic, and there are complications I can readily see that make the whole picture messier, but this stripped-down dynamic of greater intelligence forcing you to feign helplessness is what I'm interested in, so that's what I presented. Is there any reason to think a mechanism like this actually exists? Is it widespread? Has this mechanism already been studied?

There are plenty of interesting-looking areas of study if this dynamic is actually a real thing. Say, PUA could look a bit different when aimed at doormat-style people. Aesthetically, it would provide a more interesting explanation for why smart people are not very social, and it also leads to advice that differs a lot from advice given from the standpoint of "you need to learn this". It makes several "is it okay to manipulate others"-type questions relevant to the practical study of ethics. Of course, it most likely is not a real thing.

 

Edit: Also, I was a bit hesitant about whether I should post this under Discussion or wait for the Open Thread to pop up. It's quite lengthy, so I felt a Discussion post could be appropriate, but I dunno; I could, and maybe should, take this down and wait for the Open Thread.

Meetup posts as discussion threads, please

26 Jonii 14 February 2011 11:49AM

As of now, 4 of the 10 newest promoted posts are about meetups, as are 4 of the 10 newest posts overall. For casual readers like me, having the front page flooded with this much irrelevant information, _especially the promoted section_, seems really, really discouraging. LW has a tendency to contain too much useless meta-discussion compared to actual rationality-related discussion, but having the front page flooded with meta-discussion like this seems rather unbelievable. Please, let's try to keep at least the promoted section rationality-related.

The true prisoner's dilemma with skewed payoff matrix

0 Jonii 20 November 2010 08:37PM

Related to The True Prisoner's Dilemma, Let's split the cake, lengthwise, upwise and slantwise, If you don't know the name of the game, just tell me what I mean to you

tl;dr: When playing the true PD against agents capable of superrationality, there might be situations where you should co-operate while expecting the other one to defect, or vice versa. This is because the relative weight of the outcomes for the two parties might vary. Agents that account for this could outperform even superrational ones.

So, it happens that our benevolent Omega actually has an evil twin, who is as trustworthy as his sibling but abducts people into much worse hypothetical scenarios. Here is one:

You wake up in a strange dimension, where Evil-Omega smiles at you and explains that you're about to play a game with an unknown paperclip maximizer from another dimension, one you haven't interacted with before and won't ever interact with again. The alien is like a GLUT when it comes to consciousness: it runs a simple approximation of a rational decision algorithm, but has nothing you could think of as a "personality" or "soul". Also, since it doesn't have a soul, you have absolutely no reason to feel bad for its losses. This is the true PD.

You are also told some specifics about the algorithm the alien uses to reach its decision, and likewise told that the alien is told about as much about you. At this point I don't want to nail the opposing alien's algorithm down to one specific choice; we're looking for a method that wins when summed over all these possibilities. In particular, we're looking at the group of AIs that are capable of superrationality, since against other sorts of agents the game is trivial.

The payoff matrix is like this (your losses first, the alien's losses second):

DD = (lose 3 billion lives and be tortured, lose 4 paperclips)
CC = (lose 2 billion lives and be made miserable, lose 2 paperclips)
CD = (lose 5 billion lives and be tortured a lot, lose nothing)
DC = (lose nothing, lose 8 paperclips)

So, what do you do? The opponent is capable of superrationality. In the post "The True Prisoner's Dilemma", it was (kinda, vaguely, implicitly) assumed for simplicity's sake that this information is enough to decide whether or not to defect. The answer, based on this information, could be to co-operate. However, I argue that the information given is not enough.

Back to the hypothetical: the in-hypothetical you is still pondering his or her decision, but we zoom out and observe that, unbeknownst to you, Omega has also abducted a fellow LW reader and another paperclip maximizer from that same dimension, and is making them play a PD. But this time their payoff matrix is like this (your friend's losses first, the alien's losses second):

DD = (lose $0.04, lose 200 paperclips and take 2 random, small changes to the alien's utility function)
CC = (lose $0.02, lose 100 paperclips and take 1 change)
CD = (lose $0.08, lose nothing)
DC = (lose nothing, lose 400 paperclips and take 4 changes)

Now, if it's not "rational" to take relative losses into account, we're bound to find ourselves in a situation where billions of humans die. You could even end up regretting your rationality. It should be obvious by now that you'd wish you could somehow negotiate across both of these PDs so that you defect and your opponent co-operates. You'd be totally willing to take the $0.08 hit for that, maybe paying it in its entirety on your friend's behalf. And as it happens, the paperclip maximizers would have an incentive to make the same trade.
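To make the arithmetic concrete, here is a rough sketch (the encoding of lives, dollars, paperclips and utility-function changes into loss dictionaries is my own illustrative bookkeeping, not part of the hypothetical) comparing the naive outcome, where both sides co-operate in both games, with the trade, where you defect in the first game and your friend co-operates in the second:

```python
# Rough sketch: compare "co-operate in both games" with the proposed trade.
# The entries below are the losses from the two matrices above; the keys
# ("lives", "torture", "misery", etc.) are illustrative labels only.

# Game 1: (your losses, clipper's losses), keyed by (your move, clipper's move)
game1 = {
    ("D", "D"): ({"lives": 3_000_000_000, "torture": 1}, {"paperclips": 4}),
    ("C", "C"): ({"lives": 2_000_000_000, "misery": 1},  {"paperclips": 2}),
    ("C", "D"): ({"lives": 5_000_000_000, "torture": 2}, {"paperclips": 0}),
    ("D", "C"): ({},                                      {"paperclips": 8}),
}

# Game 2: (your friend's losses, the other clipper's losses)
game2 = {
    ("D", "D"): ({"dollars": 0.04}, {"paperclips": 200, "utility_changes": 2}),
    ("C", "C"): ({"dollars": 0.02}, {"paperclips": 100, "utility_changes": 1}),
    ("C", "D"): ({"dollars": 0.08}, {"paperclips": 0}),
    ("D", "C"): ({},                {"paperclips": 400, "utility_changes": 4}),
}

def combine(a, b):
    """Sum two loss dictionaries key by key."""
    out = dict(a)
    for key, value in b.items():
        out[key] = out.get(key, 0) + value
    return out

def totals(moves1, moves2):
    """Total human-side and clipper-side losses across both games."""
    h1, c1 = game1[moves1]
    h2, c2 = game2[moves2]
    return combine(h1, h2), combine(c1, c2)

# Naive superrational play: everyone co-operates in both games.
naive_humans, naive_clippers = totals(("C", "C"), ("C", "C"))

# The trade: you defect while your clipper co-operates, and your friend
# co-operates while their clipper defects.
trade_humans, trade_clippers = totals(("D", "C"), ("C", "D"))

print("Naive:", naive_humans, naive_clippers)
# Naive: {'lives': 2000000000, 'misery': 1, 'dollars': 0.02}
#        {'paperclips': 102, 'utility_changes': 1}
print("Trade:", trade_humans, trade_clippers)
# Trade: {'dollars': 0.08} {'paperclips': 8}
```

Summed over the two games, both the humans and the paperclip maximizers lose less under the trade than under blind mutual co-operation, which is exactly the value that a "co-operate against equally rational co-operators" rule leaves on the table.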

But, of course, the players don't know about this entire situation, so they might not be able to act optimally in this specific scenario. However, if they take into account how much the other side cares about the outcomes, using some as-yet-unknown method, they just might be able to perform systematically better (if we posed more problems of this sort, or selected the payoffs at random for the one-shot game) than "naive" PD players playing against each other. Naivety here means simply and blindly co-operating against equally rational opponents. How to achieve that is the open question.

-

Stuart Armstrong, for example, has an actual idea of how to co-operate when the payoffs are skewed, while I'm just pointing out that there's a problem to be solved, so this is not really news. Anyway, I still think this topic has not been explored as much as it should be.

Edit. Added this bit: "You are also told some specifics about the algorithm the alien uses to reach its decision, and likewise told that the alien is told about as much about you. At this point I don't want to nail the opposing alien's algorithm down to one specific choice; we're looking for a method that wins when summed over all these possibilities. In particular, we're looking at the group of AIs that are capable of superrationality, since against other sorts of agents the game is trivial."

Edit. Corrected some huge errors here and there, like mixing up the hypothetical you and the hypothetical LW friend.

Edit. Transfer Discussion -> Real LW complete!