
Comment author: Blueberry 27 July 2010 06:09:02PM 6 points

Consider the tells of eye movement. Up and to the left is imagination, up and to the right is memory, down and to the right is consideration.

That myth again?

Comment author: eirenicon 27 July 2010 06:52:10PM 4 points

I've never heard of this before, and Google suggests it's mainly a component of NLP, with little supporting evidence. Still, I can't find anything that settles it either way, and it's an interesting idea. Has anyone done a reputable study on it? Google Scholar yields nothing relevant.

Comment author: Nanani 18 May 2010 12:51:52AM 4 points

Terrible analogy.

Video games are highly diverse, and different genres engage very different skills. Small talk, by contrast, all seems to revolve around the same thing: social ranking.

Some of us know how to do it but just don't -care-, and that doesn't mean we're actually bad at it. I think that's the point this comment thread is driving at.

Comment author: eirenicon 18 May 2010 03:54:10PM 3 points

It's a bad analogy because there are different kinds of games, but only one kind of small talk? If you don't think pub talk is a different game from a black-tie dinner, well, you've obviously never played. Why do people do it? Well, when you beat a video game, you've beaten a video game. When you win at social interaction, you're winning at life: social dominance improves your chances of reproducing.

As for rule books: the fact that the 'real' rules are unwritten is part of the fun. Of course, that's true for most video games. Pretty much any modern game's real tactics come from players, not developers. You think you can win a single StarCraft match by reading the manual? Please.

Comment author: Wei_Dai 16 May 2010 02:25:04PM 14 points

Once you tune your radio in, you may find such occasions more exciting.

For me, understanding "what's really going on" in typical social interactions made them even less interesting than before I understood. At least back then there was a big mystery to be solved. Now I just think: what a big waste of brain cells.

Roko, do you personally find these status and alliance games interesting? Why? I mean, if you play them really well, you'll end up with lots of allies and high status among your friends and acquaintances, but what does that matter in the larger scheme of things? And what do you think of the idea that allies and status were much more important in our EEA (i.e., tribal societies) than today, and as a result we are biased to overestimate their importance?

Comment author: eirenicon 17 May 2010 07:04:33PM *  1 point

do you personally find these status and alliance games interesting? Why?

They're way more interesting than video games, for example. Or watching television. Or numerous other activities people find fun and engaging. Of course, if you're bad at them you aren't going to enjoy them; the same goes for people who can't get past the first stage of Pac-Man.

In response to The Red Bias
Comment author: Blueberry 20 April 2010 03:13:40PM 1 point

I'm curious how this affects racial dynamics. How does this apply to people with very dark skin, for instance? I don't think there's any population group with literally red skin, but maybe increased melanin would have a similar effect, by making it harder to see blushing or pallor.

Of course, there are a great many confounding factors there.

In response to comment by Blueberry on The Red Bias
Comment author: eirenicon 20 April 2010 03:44:37PM 1 point

I think there is probably no relation. My guess is that red signalling predates variation in skin colour, perhaps even the loss of body-wide hair. It is a thoroughly unconscious bias, and it does not apply to pink, or orange, or peach, but to red, especially bright, bold baboon-butt red. In any case, I hope the sporting tests were controlled for skin colour, because that does seem like a weighty factor when considering scoring bias.

Comment author: eirenicon 24 March 2010 08:38:05PM 2 points

IIRC Hanson favours panspermia over abiogenesis. Has he reconciled this with his Great Filter theory?

Comment author: pwno 06 March 2010 08:12:58PM 0 points

To use an extreme example, when the President of the US goes into a small-town diner and chats with the "regular folks" there, he's not lowering his status. He's signaling, "My status is so high, I can pal around with whoever I want." Yes, this raises the status of those he talks to. (It also raises the President's status.)

This is not the best example because a president's institutionally granted power is a function of how likable and popular he is with the people. Imagine, however, that the president was more of a dictator and didn't need his citizens' approval. In this case, he'd be lowering his status by chatting with regular folk. He's signaling that he still cares enough to chat with them despite having this unalterable power over them. Consequently, the citizens believe they must have some power over the dictator (however little).

Comment author: eirenicon 06 March 2010 10:23:32PM 4 points

This is not the best example because a president's institutionally granted power is a function of how likable and popular he is with the people.

The President of the US is probably the highest status person in the world. The fact that roughly 20% of Americans voted for Obama is far from the only thing that gives him that status. Keep in mind that it takes extraordinary public disapproval to affect a President; Bush 43's lowest approval rating was one point higher than Nixon's. On the other hand, Clinton's lowest rating was 12 points higher than that, and he was impeached. Public approval is not very meaningful to the Presidency.

Imagine, however, that the president was more of a dictator and didn't need his citizens' approval. In this case, he'd be lowering his status by chatting with regular folk.

Or he'd be signaling that he's a benevolent dictator who, while not requiring the approval of the regular folk, wants them to think he's on their side. Having popular support would obviously raise a dictator's status, domestically and internationally. The people might think that their dictator wasn't such a bad guy if he was willing to talk to them. Anecdotally, when a dictator goes to ground and doesn't make public appearances, it's usually a sign that his regime is in trouble. Don't underestimate what a high-status move it is to be secure about your status.

Comment author: eirenicon 06 March 2010 07:56:50PM 13 points

What Lesswrongers may not realize is how bothering to change your behavior at all towards other people is inherently status lowering. For instance, if you just engage in an argument with someone you’re telling them they’re important enough to use so much of your attention and effort—even if you “act” high status the whole time.

People of high status assume their status generally cannot be affected by people of low status, at least in casual encounters (i.e. not when a cop pulls over your Maybach for going 200). To use an extreme example, when the President of the US goes into a small-town diner and chats with the "regular folks" there, he's not lowering his status. He's signaling, "My status is so high, I can pal around with whoever I want." Yes, this raises the status of those he talks to. (It also raises the President's status.)

If people of high status thought they had something to lose in engaging with someone of low status, they wouldn't engage with them. Of course, that would make them look afraid to lose status, which in itself would lower their status. So they engage with people of lower status in order to make it seem like status isn't important to them, which is a high status signal. In short, engaging with people signals higher status than ignoring them.

I wonder what will be in the random theory hat next time I reach in!

Comment author: gregconen 10 February 2010 04:29:40AM *  7 points

Sadly, I don't think existential risk reduction is a sufficiently sympathetic cause for the general population (and we do need them on board for this to work). And if you have a large basket with stuff like the Methuselah Foundation in it, you're likely to have people wondering why they can't put in "The Society for Rare Diseases in Photogenic Puppies".

Ideally, you'd pick something simple and widely acceptable. Obviously, it would be difficult to find a single charity that could productively use a billion extra dollars per year. But the basket should be as simple, uncontroversial, and (obviously) productive as possible.

Edit: Thinking about it, using a trusted intermediary might make the most sense. Using a grant-making agency avoids the appearance that we're funneling the money to our pet causes, reduces the marketing/lobbying incentives (though it doesn't eliminate them), and makes the money relatively productive (if we choose a good agency). GiveWell may be a poor choice, due to the MetaFilter flap, but we could specify, say, the MIT Poverty Action Lab or something.

Obviously, we'd need the organization's cooperation, or at least its permission.

Comment author: eirenicon 10 February 2010 04:37:41AM 6 points

I think it ought to be something unimaginative but reliable, like clean water or vaccines for third-world countries. I can't find it at the moment, but there's a highly reputable charity that provides clean drinking water to African communities. IIRC, they estimated that every $400 or so saved the life of a child. A billion dollars into such a charity - saving 2.5 million children - isn't a difficult PR sell.
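The arithmetic behind that last claim checks out. Here is a minimal back-of-the-envelope sketch in Python, treating the half-remembered $400-per-life figure as an assumption rather than a sourced number:

    # Back-of-the-envelope check of the claim above. The $400-per-life
    # figure is a remembered estimate from the comment, not a citation.
    donation_usd = 1_000_000_000   # one billion dollars
    cost_per_life_usd = 400        # assumed cost to save one child's life

    lives_saved = donation_usd / cost_per_life_usd
    print(f"{lives_saved:,.0f} lives saved")  # -> 2,500,000 lives saved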

Comment author: JGWeissman 03 February 2010 12:59:41AM 4 points

It's not a hard choice.

It doesn't seem hard to you because you are making excuses to avoid it, rather than asking yourself: what if you knew the AI was always truthful, and it promised that upon being let out of the box, it would allow you (and your copies, if you like) to live out a normal human life in a healthy, stimulating environment (though the rest of the universe may burn)?

After you find the least convenient world, the choice is between millions of instances of you being tortured (and as you press the reset button, you should expect, with very high probability, to be tortured), or letting a probably unFriendly AI loose on the rest of the world. The altruistic choice is clear, but that does not mean it would be easy to actually make.

Comment author: eirenicon 03 February 2010 03:23:45AM *  1 point

It's not that I'm making excuses; it's that the puzzle keeps getting more complicated. I've answered the initial conditions - now I'm being promised that I, and my copies, will live out normal lives? That's a different scenario entirely.

Still, I don't see how I should expect to be tortured if I hit the reset button. Presumably, my copies won't exist after the AI resets.

In any case, we're far removed from the original problem now. I mean, if Omega came up to me and said, "Choose a billion years of torture, or a normal life while everyone else dies," that's a hard choice. In this problem, though, I clearly have power over the AI, in which case I am not going to favour the wellbeing of my copies over the rest of the world. I'm just going to turn off the AI. What follows is not torture; what follows is that I survive, and my copies cease to experience. Not a hard choice. Basically, I just can't buy into the AI's threat. If I did, I would fundamentally oppose AI research, because that's a pretty obvious threat an AI could make. An AI could simulate more people than are alive today. You have to go into this not caring about your copies, or not go into it at all.

Comment author: DanielVarga 03 February 2010 02:38:38AM 2 points

Here is a variant designed to plug this loophole.

Let us assume, for the sake of the thought experiment, that the AI is invincible. It tells you this: you are either the real you, or one of a hundred perfect simulations of you. But there is a small but important difference between the real world and the simulated world. In the simulated world, not pressing the let-it-free button in the next minute will lead to eternal pain, starting one minute from now. If you press the button, your simulated existence will go on. And - very importantly - there will be nobody outside who tries to shut you down. (How does the AI know this? Because the simulation is perfect, so one thing is certain: the sim and the real self will reach the same decision.)

If I'm not mistaken, as a logic puzzle this is not tricky at all: the solution depends on which world you value more, the real-real world or the world you actually happen to be in. But I still find it very counterintuitive.
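To make the structure of the variant explicit, here is a toy enumeration in Python of the two branches. The one-real-plus-one-hundred-sims setup and the outcome labels come from the comment above; the code itself is purely illustrative and takes no position on the right choice:

    # Toy enumeration of the thought experiment: one real instance plus a
    # hundred perfect simulations, all guaranteed (by the perfection of
    # the simulation) to make the same choice.
    N_REAL, N_SIMS = 1, 100

    def outcome(press_button):
        """Return (fate of the real you, fate of each simulated you)."""
        if press_button:
            # All instances pressed, so the real you pressed: the AI goes
            # free, and the sims' existence continues as promised.
            return "AI released into the real world", "existence continues"
        # All instances refused: the AI stays boxed; the sims face eternal pain.
        return "AI stays boxed", "eternal pain"

    p_sim = N_SIMS / (N_REAL + N_SIMS)  # chance that "you" are a sim: ~0.99
    for choice in (True, False):
        real_fate, sim_fate = outcome(choice)
        print("press=%s: real you -> %s; each sim -> %s (P(sim) = %.2f)"
              % (choice, real_fate, sim_fate, p_sim))

The enumeration just restates the comment's point: the perfect-simulation clause forces all 101 instances into the same branch, so the decision reduces to which world's outcome you care about.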

Comment author: eirenicon 03 February 2010 03:16:42AM 1 point

It's kind of silly to bring up the threat of "eternal pain". If the AI can be let free, then it is currently constrained. Therefore, the real you has the power to limit the AI's behaviour, i.e. to restrict the resources it would need to simulate the hundred copies of you undergoing pain. That threat is itself a good argument against letting the AI out. If you decide not to let the AI out, but to constrain it, then if you are real, you will constrain it, and if you are simulated, you will cease to exist. No eternal pain involved. As a personal decision, I choose eliminating the copies over letting out an AI that tortures copies.
