Comment author: zaph 03 December 2012 08:36:10PM *  0 points [-]

I'm working on adding elements to a report at work that does data visualization on a large scale (the data set is about 1 million data points; it's really not all that impressive of a subject matter, but I can't be terribly specific). The report has all of the "easy" elements I need in it, but now I'm trying to add in the harder elements. My ultimate end goal would be to add in the more complicated data along with system parameters, so I can get a handle on how parameter changes affect the output. I'd love to see Bayes nets and the like make a triumphant entrance at some point. But near as I can tell, I'd be the local expert on all of that, and anything I know about that subject matter I mostly picked up from here.

Comment author: nigerweiss 03 December 2012 12:48:28AM *  6 points [-]

I would strongly disagree.

My interpretation of these experiments is that they make a lot of sense if you consider morality from a system-1 and system-2 perspective. If we actually sit down and think about it, humans tend to give somewhat convergent answers to moral dilemmas, and those answers lean utilitarian (in this case: don't shock the man). That's a system-2 response.

However, in the heat of the moment, faced with a novel situation, we resort to fast, cheap system-1 heuristics for our moral intuitions. Some of those heuristics are 'What is everyone else doing?', 'What is the authority figure telling us to do?', and 'What have I done in similar situations in the past?' Normally, these work pretty well. However, in certain corner cases, they produce behavior that system 2 would never condone: lynch mobs, authoritarian cruelty, and the unfortunate results of Milgram's experiments.

People didn't decide, rationally, that it was morally right to torture a man to death for the sake of an experiment they knew nothing about and were paid a few dollars to participate in, and this paper is silly to suggest otherwise. They did it because they were under stress, and the strongest influence in their head was the ancestral heuristic of 'keep your head down, do what you're told, they must know what they're doing.'

There are a number of other possible explanations for that detail. For example:

"The experiment requires that you continue" invokes the larger apparatus of Science. It gives the impression that something much larger than you is at foot, and that ALL of it is expecting you to shut up and do what you're told.

"You have no other choice, you must go on" - that rankles. Of course there's a choice. We pattern match it to a moral choice, and system 2 comes in and makes the right call.

The best lesson you can learn from these experiments, as depressing as they are, is that when you feel rushed, and there's life and death at stake, and you don't feel you have time to breathe, the best possible thing you can do is to stop, sit down on the floor, clear your head, and take a moment to really try to think about what you're doing.

Comment author: zaph 03 December 2012 02:04:21PM 2 points [-]

I believe the article the OP points to is actually more about how system 2 is being engaged in these experiments, and is therefore not about "blind obedience", i.e. a simple heuristic being engaged. From the conclusion:

On the other hand, it ignores the evidence that those who do heed authority in doing evil do so knowingly not blindly, actively not passively, creatively not automatically. They do so out of belief not by nature, out of choice not by necessity. In short, they should be seen—and judged—as engaged followers not as blind conformists

Equally, what is shocking about Milgram's experiments is that rather than being distressed by their actions, participants could be led to construe them as “service” in the cause of “goodness.”

At root, the fundamental point is that tyranny does not flourish because perpetrators are helpless and ignorant of their actions. It flourishes because they actively identify with those who promote vicious acts as virtuous [49]. It is this conviction that steels participants to do their dirty work and that makes them work energetically and creatively to ensure its success. Moreover, this work is something for which they actively wish to be held accountable—so long as it secures the approbation of those in power.

To put words into their mouth, I believe they are arguing that people's system 2's are overriding the "don't hurt people" heuristic of system 1, as opposed to system 2 analysis being overridden by a simple obedience heuristic.

Comment author: Viliam_Bur 03 December 2012 12:05:24PM *  7 points [-]

But there is always some metric to be gamed. There is always some causality chain which results in a specific distribution of money to organizations. Just because we close our eyes, it does not make the causality go away.

Instead of organizations wasting money and time on a dysfunctional official metric, there will be organizations wasting money and time on alternative ways (bribery, fraud, advertising, lobbying, propaganda...) of convincing government that it's they who should get a bite from the budget.

Comment author: zaph 03 December 2012 01:07:46PM 2 points [-]

I'm leery of organizations providing their own statistics on how effective they are; that may just be another form of lobbying and propaganda. I'd lean towards carving out from the budget a group that independently assesses the effectiveness of each of the organizations. It's admittedly imperfect, but it would be more impartial than what seems to be in place now, and agencies wouldn't be at a disadvantage for lacking their own internal measurement tools. That still leaves the problem of choosing the right metrics. Something simple like budget percentages and ratios would be a good place to start. There are a lot of hard-to-compare types of services out there; after-school programs aren't like Meals on Wheels programs. It's hard to come up with outcome-based metrics to say which service is better than another when there are so many different categories. Adopting something along the lines of the financial ratings at Charity Navigator could at least get everyone on the same page for controlling costs at their organizations.
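As a minimal sketch of the kind of simple budget-ratio metric suggested above, in the spirit of Charity Navigator's financial ratings (the organization names, field names, and figures here are all hypothetical):

```python
# Sketch of a simple budget-ratio metric, in the spirit of Charity Navigator's
# financial ratings. All names and figures below are hypothetical.
def program_expense_ratio(program_expenses, total_expenses):
    """Fraction of total spending that goes to programs rather than overhead."""
    return program_expenses / total_expenses

orgs = {
    "After-School Program": {"program": 820_000, "total": 1_000_000},
    "Meals on Wheels":      {"program": 610_000, "total": 700_000},
}

for name, budget in orgs.items():
    ratio = program_expense_ratio(budget["program"], budget["total"])
    print(f"{name}: {ratio:.0%} of spending on programs")
```

A ratio like this sidesteps the apples-to-oranges problem of outcome metrics: it says nothing about which service is better, only how much of each budget reaches the programs themselves.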

Comment author: bbleeker 30 November 2012 09:19:04AM 5 points [-]

Well, because they can see - despite my best attempts at hiding it - that it makes me feel very uncomfortable, and yet they go on doing it. (I'm writing 'me' here, but I bet I'm speaking for the vast majority of women here.) Reading further along, I see that you were thinking that maybe I was assuming bad intent about all men, but that wasn't what I meant at all. But those jerks who shout things about one's breasts or legs, or crude invitations - yes, I have a hard time believing they think it's fun for the woman it's directed at.

Comment author: zaph 30 November 2012 03:41:38PM 4 points [-]

Moreover, no woman is ever going to be drawn to that, at least that I've ever heard. So it doesn't make sense as a grossly misguided pick-up strategy. The more I think about it and read the thread, the more I think something along the lines of the dynamic in Berne's Games People Play is at work. It's the most charitable reading you can give the behavior, at least: the jerks taking part in this are getting some kind of attention from the woman they're targeting, even if it's negative attention. It's still extremely hurtful behavior, but I can believe (or at least kid myself into believing) that men can gain insight into the behavior, realize what's going on, and stop doing it.

One of the more humiliating moments of my adult life was when two guys were making lewd comments to a female friend of mine across a parking lot. I felt absolutely helpless (I'll be blunt, they were far away and it was obvious they would kick my a__), and I can only imagine what my friend went through. She weathered it, but I'm sure that came at some cost to her psyche, the kind of cost women spend too much time and effort bearing. I can only say it's in the best interests of men and women if this were all curtailed.

Comment author: RichardKennaway 28 November 2012 01:50:59PM 7 points [-]

None of the above.

It's too long since I read the book to recall all of the Games in detail, and the list on the book's home page (linked from the Wiki article) doesn't seem to have this game, but no matter: Berne did not claim to be presenting an exhaustive taxonomy and encouraged his readers to discover more Games.

I recommend the book. I think it's essential reading for anyone confused (as so many LWers profess to be, and there's a Game right there) about aspects of social life that are not usually explicitly described. (The reasons why people don't talk about them form yet more Games.) Its importance is not merely the individual Games, but the idea of what a Game is and why people Play them. Once you have this, what is going on with catcalling will be transparent.

The theoretical background of the book, Transactional Analysis, you can take or leave; it gives Berne a conceptual vocabulary to talk about Games, but one need not make any ontological commitment to TA to make use of the book.

Here's Kurt Vonnegut's review, from 1965.

Comment author: zaph 28 November 2012 02:44:15PM 6 points [-]

"Its importance is not merely the individual Games, but the idea of what a Game is and why people Play them."

From Berne: "Because there is so little opportunity for intimacy in daily life, and because some forms of intimacy (especially if intense) are psychologically impossible for most people, the bulk of the time in serious social life is taken up with playing games. Hence games are both necessary and desirable, and the only problem at issue is whether the games played by an individual offer the best yield for him."

So, you can debate the validity, but my take on the Berne-ian view would be that the game Catcall is the attempt to create a social boost for males by gaining a female's (albeit negative) attention.

Comment author: Kaj_Sotala 08 November 2012 03:21:03PM 5 points [-]

I think this explanation from Hanson is most likely

That post was authored by Robert Wiblin.

Comment author: zaph 08 November 2012 10:55:50PM 1 point [-]

Thanks, I corrected that.

Comment author: zaph 08 November 2012 01:36:45PM *  1 point [-]

I think this explanation from Wiblin* is most likely: "Nonetheless, I think this is more likely tha[t] a broad pool of Intrade participants [were] being enthusiastic about Romney against all the evidence, and [were] unaware that they could get better odds elsewhere." Is there enough evidence to investigate whether something more sinister is at work? I certainly don't know the details on Intrade and other similar markets, but perhaps there should be more stringent transparency rules to prevent potential manipulations.

Implications: What has always struck me as difficult for prediction markets is the fact that they aren't pricing an underlying "thing"; they're pricing uncertainty itself. Even in a futures or options market, there is an underlying right to purchase at a specific price, even if that right is useless because the current price of the commodity or stock is lower. While the options or futures contract has a potential value on a certain date, there isn't anything of value being bought and sold on a prediction market. It's just bets based on information that everyone will know on a specific date. So to me, all the pluses you get from other markets, where large groups grapple over how to price goods, don't apply to prediction markets, because there aren't any underlying assets to price.

To me, markets do a good job of rationally arriving at prices because of the constant negotiations and comparisons going on over the underlying assets. People make predictions on where market prices will go, but to me that isn't the same as these being prediction markets. The implication of all this is that it's not surprising to me to hear that there was such a large arbitrage potential between the prediction markets. I'm not such a believer in the efficient market hypothesis as to think there isn't arbitrage potential in asset markets, but I would predict that those opportunities are fewer and smaller than in prediction markets.

I don't see this as an unsolvable problem, but to me it shows what prediction markets are good for, which is to keep people honest in their premises and expectations. Take a global warming discussion with an AGW proponent on one side and an AGW skeptic on the other. Sans a prediction market, either person could make as dire or as rosy a claim as they like; once they start putting real money on the line, both will likely become more interested in accuracy.

If all prediction markets did was routinely get people to adhere to rational discussion (and thus to Aumann's agreement theorem), arenas such as public policy would improve immensely. So it doesn't matter that the Intrade bets didn't reflect the best odds; that just means one of the rational actors hadn't fully adjusted yet. Once their account was debited, one would assume they were in agreement with the party on the other end of the bet.

*Corrected
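To make the arbitrage point concrete, here is a minimal sketch with made-up prices (not actual 2012 Intrade or Betfair quotes) of how mismatched prices for a binary contract across two markets create a riskless profit:

```python
# Hypothetical prices for a binary contract that pays $1 if the event occurs.
# (Illustrative numbers only; not actual market quotes.)
yes_price_market_a = 0.60   # cost of a "yes" share on market A
no_price_market_b = 0.30    # cost of a "no" share on market B

total_cost = yes_price_market_a + no_price_market_b

# Exactly one of the two shares pays out $1, whichever way the event resolves,
# so if the combined cost is under $1 the difference is a riskless profit.
guaranteed_payout = 1.00
profit = guaranteed_payout - total_cost

print(f"cost: {total_cost:.2f}, riskless profit per contract pair: {profit:.2f}")
```

In an efficient market, traders buying both legs would push the two prices until they summed to roughly $1 (minus fees); the persistence of a gap this wide is what the arbitrage observation is about.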

Comment author: zaph 05 November 2012 06:09:55PM 0 points [-]

I've been enjoying this series so far, and I found this article particularly helpful. I did have a minor suggestion. The turnstile and logical-negation symbols were called out, and I thought it might be useful to explicitly break down the probability distribution equation as well. The current Less Wrong audience had little problem with it, certainly, but someone seeing it for the first time might not be acquainted with the notation. I was thinking of something along the lines of this, from stattrek:

"Generally, statisticians use a capital letter to represent a random variable and a lower-case letter, to represent one of its values. For example,

X represents the random variable X.
P(X) represents the probability of X.
P(X = x) refers to the probability that the random variable X is equal to a particular value, denoted by x. As an example, P(X = 1) refers to the probability that the random variable X is equal to 1."

(from http://stattrek.com/probability-distributions/probability-distribution.aspx)
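As an illustrative sketch (my own, not from the stattrek page), the P(X = x) notation maps directly onto a discrete distribution in code; here for a hypothetical fair six-sided die:

```python
# Sketch of the P(X = x) notation for a discrete random variable:
# a hypothetical fair six-sided die, where each face has probability 1/6.
from fractions import Fraction

# The distribution of X maps each possible value x to P(X = x).
dist = {x: Fraction(1, 6) for x in range(1, 7)}

def prob(x):
    """P(X = x): the probability that the random variable X equals x."""
    return dist.get(x, Fraction(0))

print(prob(1))             # P(X = 1) = 1/6
print(sum(dist.values()))  # a distribution's probabilities sum to 1
```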

Also, page 2 of A Student's Guide to Maxwell's Equations does a great job of diagramming Gauss's law for electric fields, and a similar diagram would be helpful for breaking down the right half of the equation for a beginning reader.

http://books.google.com/books?id=I-x1MLny6y8C&printsec=frontcover#v=onepage&q&f=false

If all this were set aside in a footnote, the overall continuity of the article wouldn't be affected, and someone who might at first be intimidated by equations could see that these aren't so bad. With just a bit of exposition, more readers might be able to follow the entire argument, which I think could be introduced to someone with very little background.

Comment author: jhuffman 19 October 2011 05:40:10PM *  0 points [-]

People don't buy cryonics plans for the same reason they don't buy life insurance in the first place or even write a will

I'm not so sure about that. I mean, a LOT more people have an insurance plan than have a cryonics plan. I would agree that the truth of death is so terrible that people develop a complex set of thoughts and behaviors to insulate themselves from this terror. Naturally, they do not want to revisit that decision over and over and suffer that pain, so they may even be resentful of the idea that someone thinks they are not going to die. I think this is part of what's happening with cryonics. But to a larger extent I think people are skeptical of cryonics working, on a number of different levels.

Comment author: zaph 26 October 2011 02:39:40PM 0 points [-]

That's true, my original statement is too broad. To your point, that people are skeptical of cryonics in general, I am in complete agreement, and that's what I was trying to get at in my final point.

Comment author: zaph 14 October 2011 03:07:55PM 2 points [-]

I would disagree with the proposition that people are in any way actually OK with death. I don't think the problem is that people are too much at peace with death; instead, it's that people are so afraid of death that they don't talk or even think realistically about it at all. The quotes you listed above sound like people speaking hypothetically; if they were in an actual situation where their life was medically threatened but an intervention would likely save them, I'm sure they would take the intervention without much consideration (barring depression). Instead, I believe the fear of death is so great that rational thinking about it is pushed aside. It's worse than Stockholm Syndrome; it's learned helplessness. People don't buy cryonics plans for the same reason they don't buy life insurance in the first place, or even write a will: they're procrastinating in putting together concrete plans for something that makes them feel extremely uncomfortable.

The cryonics specific sales pitch doesn't have any happy early adopters running around saying "Thank Hanson I signed up! A cure was right around the corner, just 3 years away from being on the open market. If I hadn't enrolled I'd be...DEAD!" We're obviously not at that point yet, but if a Peter Thiel (to pull a name out of the air) were to use cryonics and successfully return to the land of the thawed, a hypothetical would become an actual, and people would become much more interested.
