
Comment author: Mitchell_Porter 06 August 2012 11:56:42AM 8 points [-]

When Will talks about hell, or anything that sounds like a religious concept, you should suppose that in his mind it also has a computational-transhumanist meaning. I hear that in Catholicism, Hell is separation from God, and for Will, God might be something like the universal moral attractor for all post-singularity intelligences in the multiverse, so he may be saying (in the great-grandparent comment) that if you are insufficiently attentive to the question of right and wrong, your personal algorithm may never be re-instantiated in a world remade by friendly AI. To round out this guide for the perplexed, one should not think that Will is just employing traditional language in order to express a very new concept; you need to entertain the idea that there really is significant referential overlap between what he's talking about and what people like Aquinas were talking about - that all that medieval talk about essences, and essences of essences, and all this contemporary talk about programs, and equivalence classes of programs, might actually be referring to the same thing. One could also say something about how Will feels when he writes like this - I'd say it sometimes comes from an advanced state of whimsical despair at ever being understood - but the idea that his religiosity is a double reverse metaphor for computational eschatology is the important one. IMHO.

Comment author: SusanBrennan 06 August 2012 12:29:06PM 1 point [-]

Thank you for the clarification, and my apologies to Will. I do have some questions, but writing a full post from the smartphone I am currently using would be tedious. I'll wait until I get to a proper computer.

In response to comment by [deleted] on The curse of identity
Comment author: Will_Newsome 06 August 2012 10:46:10AM *  -2 points [-]

I mean that, and an infinite number of questions more and less like that, categorically, in series and in parallel. (I don't know how to interpret "<gd&rVF!>", but I do know to interpret it that it was part of your point that it is difficult to interpret, or analogous to something that is difficult to interpret, perhaps self-similarly, or in a class of things that is analogous to something or a class of things that is difficult to interpret, perhaps self-similarly; also perhaps it has an infinite number of intended or normatively suggested interpretations more or less like those.)

(This comment also helps elucidate my previous comment, in case you had trouble understanding that comment. If you can't understand either of these comments then maybe you should read more of the Bible, or something, otherwise you stand a decent chance of ending up in hell. This applies to all readers of this comment, not just army1987. You of course have a decent chance of ending up in hell anyway, but I'm talking about marginals here, naturally.)

Comment author: SusanBrennan 06 August 2012 11:10:08AM *  0 points [-]

otherwise you stand a decent chance of ending up in hell.

Comments like this are better for creating atheists, as opposed to converting them.

Comment author: Shanya 30 July 2012 01:05:47PM 0 points [-]

A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no?

What does framster mean?

Comment author: SusanBrennan 30 July 2012 01:25:55PM 4 points [-]

That's the point.

Comment author: jacoblyles 18 July 2012 08:59:02PM 1 point [-]

I follow the virtue-ethics approach: I do actions that make me like the person that I want to be. The acquisition of any virtue requires practice, and holding open the door for old ladies is practice for being altruistic. If I weren't altruistic, then I wouldn't be making myself into the person I want to be.

It's a very different framework from util maximization, but I find it's much more satisfying and useful.

Comment author: SusanBrennan 18 July 2012 09:48:14PM *  1 point [-]

It's a very different framework from util maximization, but I find it's much more satisfying and useful

And if it weren't more satisfying and useful, would you still follow it?

Comment author: VincentYu 18 July 2012 02:03:50PM *  6 points [-]

Unfortunately I've forgotten a very famous social psychology experiment wherein one group (group A) was allowed to dictate their preferred wage difference between their group and another group (group B). They chose the option which gave them the least in an absolute sense because the option gave them more than group B by comparison. They were divided according to profession. It's a very famous experiment, so I'm sure someone here will know it.

In Irrationality, Sutherland cites Brown (1978, "Divided we fall: An analysis of relations between sections of a factory workforce") and states:

In real life, the rivalry between groups may be so irrational that each may try to do the other down even at its own expense. In an aircraft factory in Britain the toolroom workers received a weekly wage very slightly higher than that of the production workers. In wage negotiations the toolroom shop stewards tried to preserve this differential, even when by so doing they would receive a smaller wage themselves. They preferred a settlement that gave them £67.30 a week and the production workers a pound less, to one that gave them an extra two pounds (£69.30) but gave the production workers more (£70.30).

In a highly-cited review, Tajfel (1982) states:

An intriguing aspect of the early data on minimal categorization was the importance of the strategy maximizing the difference between the awards made to the ingroup and the outgroup even at the cost of giving thereby less to members of the ingroup. This finding was replicated in a field study (Brown 1978) in which shop stewards representing different trades unions in a large factory filled distribution matrices which specified their preferred structure of comparative wages for members of the unions involved. It was not, however, replicated in another field study in Britain (Bourhis & Hill 1982) in which similar matrices were completed by polytechnic and university teachers.

A brief look at recent studies seems to suggest a more nuanced relation, but I'm not familiar with the literature. See, e.g., Card et al. (2010).

Comment author: SusanBrennan 18 July 2012 02:09:24PM *  2 points [-]

Bang on! Brown ("Divided we fall") is exactly what I was looking for. Thank you. I regret having only one up-vote to give you.

Comment author: Kaj_Sotala 18 July 2012 11:20:33AM 6 points [-]

You were unfairly casting your political opponents not just as wrong, but as morally reprehensible.

Right - and the interesting thing is, I had no idea that I was doing it, and in fact was trying to do the opposite. I did my best to take extreme viewpoints like "eating meat is like committing genocide" and "everyone should be converted or they'll go to hell" and attempted to portray them as psychologically no different from any other belief. But although I think I did okay with that, an uncharitable and exaggerated strawman still managed to slip in earlier on.

For the most part, I think it's just about the general ingroup-outgroup tendency in humans, and the desire to look down on any outgroups. But as for that bias slipping into my writing, even when I was explicitly trying to avoid it - that seems to have more to do with the way that most of our thought and behavior is built on subconscious systems, with conscious thought only playing a small role. Or to use Jonathan Haidt's analogy, the conscious mind is the rider of an elephant:

I'm holding the reins in my hands, and by pulling one way or the other I can tell the elephant to turn, to stop, or to go. I can direct things, but only when the elephant doesn't have desires of his own. When the elephant really wants to do something, I'm no match for him.

...The controlled system [can be] seen as an advisor. It's a rider placed on the elephant's back to help the elephant make better choices. The rider can see farther into the future, and the rider can learn valuable information by talking to other riders or by reading maps, but the rider cannot order the elephant around against its will...

...The elephant, in contrast, is everything else. The elephant includes gut feelings, visceral reactions, emotions, and intuitions that comprise much of the automatic system. The elephant and the rider each have their own intelligence, and when they work together well they enable the unique brilliance of human beings. But they don't always work together well.

That elephant is very eager to pick up on all sorts of connotations and biases from its social environment, and if we spend a lot of time in an environment where a specific group (conservatives, say) frequently gets bashed, then we'll start to imitate that behavior ourselves - automatically and almost as a reflex, and sometimes even when we think that we're doing the exact opposite.

It is a pity that this kind of a bias hasn't really been discussed much on LW. Probably because the original sequences drew most heavily upon cognitive psychology and math, whereas this kind of bias has been mostly explored in social psychology and the humanities.

Comment author: SusanBrennan 18 July 2012 11:54:26AM 2 points [-]

I remember coming across this paper during my PhD, and it provides a somewhat game-theoretic analysis of in-group out-group bias, which is still fairly easy to follow. The paper is mainly about the implications for conflict resolution, as the authors are lecturers in business and law, so it should be of interest to those seeking to improve their rationality (particularly where keeping one's cool in arguments is involved), which is why we are here after all.

I've been thinking about doing my first mainspace post for LessWrong soon. Perhaps I could use it to address this. Unfortunately I've forgotten a very famous social psychology experiment wherein one group (group A) was allowed to dictate their preferred wage difference between their group and another group (group B). They chose the option which gave them the least in an absolute sense because the option gave them more than group B by comparison. They were divided according to profession. It's a very famous experiment, so I'm sure someone here will know it.

Comment author: pjeby 15 July 2012 07:28:09PM 12 points [-]

Somebody (possibly an LWer?) proposed showing up to the car dealership without any cash or credit cards, just a check made out for the agreed-upon amount; the dealer now has no choice but to either take the money or forget about the whole deal.

While I don't remember this specific example anywhere on LessWrong, I actually did this last February. I vaguely recall some of the inspiration being discussions of strategy on LW, specifically the one about removing your car's steering wheel in order to win at the game of "Chicken".

(The rest of the inspiration was that I didn't trust the dealer not to screw with something once I got there, and a strong lack of desire to get into any sort of argument about it.)

Comment author: SusanBrennan 15 July 2012 09:45:00PM *  3 points [-]

Is this the post you were thinking of?

EDIT: Never mind. I'm pretty sure Gwern got the right one.

Comment author: johnlawrenceaspden 15 July 2012 03:35:11PM 2 points [-]

Well, I hate to say this for obvious reasons, but if the magic sugar water cured my hayfever just once, I'd try it again, and if it worked again, I'd try it again. And once it had worked a few times, I'd probably keep trying it even if it occasionally failed.

If it worked reliably I'd start looking for better explanations. If no-one could offer one I'd probably start believing in magic.

I guess not believing in magic is something to do with not expecting this sort of thing to happen.

Comment author: SusanBrennan 15 July 2012 04:13:09PM 2 points [-]

The placebo effect strikes me as a decent enough explanation.

Comment author: Multiheaded 12 May 2012 07:55:32AM *  -1 points [-]

I don't care all that much about political democracy; what I meant is that Japan, India or, looking at the relative national conditions, even Turkey did NOT require some particular ruthlessness to modernize.

edit: derp

Comment author: SusanBrennan 12 May 2012 09:36:13PM 4 points [-]

even Turkey did NOT require some particular ruthlessness to modernize.

Could you explain the meaning of this sentence, please? I'm not sure I have grasped it correctly. To me it sounds like you are saying that there was no ruthlessness involved in Atatürk's modernizing reforms. I assume that's not the case, right?

In response to comment by TimS on A sense of logic
Comment author: [deleted] 12 May 2012 06:10:26PM 6 points [-]

Tell that person that feathers are light, what is light cannot be dark, therefore feathers cannot be dark.

In response to comment by [deleted] on A sense of logic
Comment author: SusanBrennan 12 May 2012 09:29:21PM 0 points [-]

This is my favorite response so far.
