An Anthropic Principle Fairy Tale
A robot is going on a one-shot mission to a distant world to collect important data needed to research a cure for a plague that is devastating the Earth. When the robot enters hyperspace, it notices some anomalies in the engine's output, but it is too late to get the engine fixed. The anomalies are of a sort that, when similar anomalies have been observed in other engines, 25% of the time they indicated a fatal problem, such that the engine would explode on virtually every jump. 25% of the time they were a false positive, and the engine exploded only at its normal negligible rate. 50% of the time they indicated a serious problem, such that each jump was about a 50/50 chance of exploding.

Anyway, the robot goes through the ten jumps to reach the distant world, and the engine does not explode. Unfortunately, the jump coordinates for the mission were a little off, and the robot is in a bad data-collecting position. It could try another jump - if the engine doesn't explode, the extra data it collects could save lives. If the engine does explode, however, Earth will get no data from the distant world at all. (The FTL radio is only good for one use, so the robot can't collect data and then jump.)

So how did you program your robot? Did you program your robot to believe that since the engine worked 10 times, the anomaly was probably a false positive, and so it should make the jump? Or did you program your robot to follow the "Androidic Principle" and disregard the so-called "evidence" of the ten jumps, since it could not have observed any other outcome? People's lives are in the balance here. A little girl is too sick to leave her bed, she doesn't have much time left, you can hear the fluid in her lungs as she asks you "are you aware of the anthropic principle?" Well? Are you?
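Setting the anthropic worry aside for a moment, the straight Bayesian update on the story's numbers can be sketched in a few lines of Python. The per-jump survival probabilities under each hypothesis are my reading of the story, not given exactly in it: ~0 for "fatal" ("explodes virtually every time"), ~1 for "false positive" ("negligible rate"), and 0.5 for "serious":

```python
# Priors over the three hypotheses about the anomaly, from the story.
priors = {"fatal": 0.25, "false_positive": 0.25, "serious": 0.50}

# Assumed per-jump survival probability under each hypothesis
# (idealized readings of "virtually every time", "negligible", "50/50").
p_survive = {"fatal": 0.0, "false_positive": 1.0, "serious": 0.5}

jumps = 10

# Likelihood of surviving all ten jumps under each hypothesis.
likelihood = {h: p_survive[h] ** jumps for h in priors}

# Bayes' rule: posterior proportional to prior * likelihood.
evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

# Probability that an eleventh jump succeeds, averaged over hypotheses.
p_next = sum(posterior[h] * p_survive[h] for h in posterior)

print(posterior)  # false_positive dominates at ~0.998
print(p_next)     # ~0.999
```

A robot that updates on the ten jumps concludes the anomaly was almost certainly a false positive and the eleventh jump is ~99.9% safe; a robot that disregards the "evidence" is stuck with the prior average, under which a jump is only a coin flip (0.25·0 + 0.25·1 + 0.5·0.5 = 0.5).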
Neil Armstrong died before we could defeat death
The sad news broke tonight: Neil Armstrong, the first human ever to walk on another world, died today. We lost him forever. He died before we could defeat death.
Once again the horror of death strikes. This time, in addition to taking from us forever a hero of humanity, it erased an experience that can never exist again. Never again will a human being be able to experience being the first to walk on another world. That beautiful experience is lost forever too, along with all the memories, dreams, desires and wishes that made up Neil Armstrong.
But thanks to him, humanity made a giant leap. We'll fill the stars and conquer death. The spark of intelligence and sentience will not be extinguished. That's the best we can do to honour him.
Source : http://www.reuters.com/article/2012/08/25/us-usa-neilarmstrong-idUSBRE87O0B020120825
[Link] The perils of “reason”
Post by fellow LW reader Razib Khan, who many here probably know from the gnxp site or perhaps from his debate with Eliezer. Somewhat related to a post we also seem to have discussed.
In my post below in regards to Sam Harris’ recent interactions on the web I reasserted my suspicion of reason. This naturally elicited curiosity, or hostility, from some. I’ve talked about this before, but the illustration to the left gets at my primary issue. When individuals are reasoning alone they often have a high degree of uncertainty as to their conclusions. But when individuals are reasoning together they seem to converge very rapidly and with great confidence upon a particular position. What’s going on here? In the second case it isn’t reason at all, but our natural human predisposition toward group conformity. There’s a huge psychological literature on this, so I won’t belabor the point. When people brandish “reason” and “rationality” explicitly I’m somewhat skeptical. If rational conclusions are so plain and self-evident why are we even asserting the primacy of reason? If something really is so clearly reasonable you usually don’t go around trumpeting how reasonable it is.
Another pitfall of reason is that it lulls us into the delusion that we have a transparent understanding of our own motivations and logic, as well as the motivations and logic of others. In my post below I explicitly stated that I disagreed with Harris on the substance of much of what he asserted and assumed in the first paragraph, but multiple people simply imputed to me his views as if they were mine! Even though I declaimed this position very early on, they simply could not generate a coherent framework where I did not agree with either them or Harris. There were only two options conceivable for them which the “reason” engine could operate upon. As I clearly did not agree with them (or so they thought), they simply injected the axioms which would be appropriate for Sam Harris into my own box, and then began firing the appropriate propositions.
Here we have the problem that reasonable arguments and the self-evident truth of rationality are often only clear among people who already agree on everything of substance. People who agree can confidently assert the rationality and reasonableness of their arguments to those who have exactly the same perspective. So, for example, you have educated people like William F. Buckley, Jr. explaining that there is more evidence of the resurrection of Jesus Christ than that Abraham Lincoln gave the Emancipation Proclamation. This was eminently reasonable to the circles which Buckley moved in. After all, Christ did rise from the dead, everyone knows that! Well, not really. Buckley’s son, Christopher, who is not a believer, has explained that his father had a genuinely difficult time imagining the perspective of those who did not share his beliefs on this matter.
This is not to say that reason and rationality are without utility. These are humanity’s great cognitive jewels. But great tools can be used to various ends, and true reason and rationality are very difficult. Mathematics for example is undoubtedly true rationality, with crisp and precise inferences being derivable. But most other intellectual structures are not so clearly self-evident as mathematics. Verbal logic and reasoning are riddled with the pitfalls of cognitive bias. Because most people share the same systematic biases it is very difficult for groups of individuals engaging in self-reinforcing masturbatory ‘rationality’ discourses to step back and wonder about their motivated reasoning. Unfortunately it may be that reason emerged as a human faculty to win arguments, not resolve truth. If this is true we are much more lawyers than mathematicians in our discourse. Does this seem plausible to you? Unfortunately it does seem plausible to me.
Where does this leave us? I think we need to be skeptical of reasoned arguments. This doesn’t lead me down the path of intellectual nihilism. Reason which leads us to truth is possible. But it may be that this is a very specialized usage of reason, which requires special conditions. ’tis far easier to seem clever than be correct.
Edit: I linked to the wrong article! (~_~;) Fixed!
What are your questions about making a difference?
How can you best use your time to make a difference?
80,000 Hours now has several people working full time on research, and they would like your questions!
We’re happy to consider any questions about how to effectively make a difference, in whatever sphere of your life – volunteering, career or donations. These questions could be at the conceptual or ethical level, or they could concern nitty-gritty practicalities.
We’re particularly interested in questions that are not already well addressed by other groups, and where there's significant opportunity for progress.
The most popular questions will receive the attention of our research team, and their findings will feature in our new careers guide.
Either post your questions below, or send them to careers@80000hours.org
[Link] Admitting to Bias
Summary: Current social psychology research is probably, on average, compromised by leftward political bias. Conservative researchers likely face discrimination in at least this field. More importantly, papers and research that do not fit a liberal perspective face greater barriers and burdens.
An article in the online publication Inside Higher Ed reports on a survey of anti-conservative bias among social psychologists.
Numerous surveys have found that professors, especially those in some disciplines, are to the left of the general public. But those same -- and other -- surveys have rarely found evidence that left-leaning academics discriminate on the basis of politics. So to many academics, the question of ideological bias is not a big deal. Investment bankers may lean to the right, but that doesn't mean they don't provide good service (or as best the economy will permit) to clients of all political stripes, the argument goes.
And professors should be assumed to have the same professionalism.

A new study, however, challenges that assumption -- at least in the field of social psychology. The study isn't due to be published until next month (in Perspectives on Psychological Science), and the authors and others are noting limitations to the study. But its findings of bias by social psychologists (even if just a decent-sized minority of them) are already getting considerable buzz in conservative circles. Just over 37 percent of those surveyed said that, given equally qualified candidates for a job, they would support the hiring of a liberal candidate over a conservative candidate. Smaller percentages agreed that a "conservative perspective" would negatively influence their odds of supporting a paper for inclusion in a journal or a proposal for a grant. (The final version of the paper is not yet available, but an early version may be found on the website of the Social Science Research Network.)
To some on the right, such findings are hardly surprising. But to the authors, who expected to find lopsided political leanings, but not bias, the results were not what they expected.
"The questions were pretty blatant. We didn't expect people would give those answers," said Yoel Inbar, a co-author, who is a visiting assistant professor at the Wharton School of the University of Pennsylvania, and an assistant professor of social psychology at Tilburg University, in the Netherlands.
He said that the findings should concern academics. Of the bias he and a co-author found, he said, "I don't think it's O.K."
Discussion of faculty politics extends well beyond social psychology, and humanities professors are frequently accused of being "tenured radicals" (a label some wear with pride). But social psychology has had an intense debate over the issue in the last year.
At the 2011 meeting of the Society for Personality and Social Psychology, Jonathan Haidt of the University of Virginia polled the audience of some 1,000 in a convention center ballroom to ask how many were liberals (the vast majority of hands went up), how many were centrists or libertarians (he counted a couple dozen or so), and how many were conservatives (three hands went up). In his talk, he said that the conference reflected "a statistically impossible lack of diversity,” in a country where 40 percent of Americans are conservative and only 20 percent are liberal. He said he worried about the discipline becoming a "tribal-moral community" in ways that hurt the field's credibility.
The link above is worth following. The problems that arise remind me of the situation with academic ethics, and our own, in light of this paper.
That speech prompted the research that is about to be published. Members of a social psychologists' e-mail list were surveyed twice. (The group is not limited to American social scientists or faculty members, but about 90 percent are academics, including grad students, and more than 80 percent are Americans.) Not surprisingly, the overwhelming majority of those surveyed identified as liberal on social, foreign and economic policy, with the strongest conservative presence on economic policy. Only 6 percent described themselves as conservative over all.
The questions on willingness to discriminate against conservatives were asked in two ways: what the respondents thought they would do, and what they thought their colleagues would do. The pool included conservatives (who presumably aren't discriminating against conservatives) so the liberal response rates may be a bit higher, Inbar said.
The percentages below reflect those who gave a score of 4 or higher on a 7-point scale on how likely they would be to do something (with 4 being "somewhat" likely).
Percentages of Social Psychologists Who Would Be Biased in Various Ways
(Self / Colleagues)
- A "politically conservative perspective" by author would have a negative influence on evaluation of a paper: 18.6% / 34.2%
- A "politically conservative perspective" by author would have a negative influence on evaluation of a grant proposal: 23.8% / 36.9%
- Would be reluctant to extend symposium invitation to a colleague who is "politically quite conservative": 14.0% / 29.6%
- Would vote for liberal over conservative job candidate if they were equally qualified: 37.5% / 44.1%
I can't help but think that the self-assessments are probably too generous. For predicting how an individual behaves when the behaviour in question is undesirable, I'm more inclined to trust their estimate of how "colleagues" behave than their estimate of how they personally do.
The more liberal the survey respondents identified as being, the more likely they were to say that they would discriminate.
The paper notes surveys and statements by conservatives in the field saying that they are reluctant to speak out and says that "they are right to do so," given the numbers of individuals who indicate they might be biased or that their colleagues might be biased in various ways.
Inbar said that he has no idea if other fields would have similar results. And he stressed that the questions were hypothetical; the survey did not ask participants if they had actually done these things.
He said that the study also collected free responses from participants, and that conservative responses were consistent with the idea that there is bias out there. "The responses included really egregious stuff, people being belittled by their advisers publicly for voting Republican."
This shouldn't be surprising to hear, since, to quote CharlieSheen: "we even have LW posters who have in academia personally experienced discrimination and harassment because of their right wing politics."
Neil Gross, a professor of sociology at the University of British Columbia, urged caution about the results. Gross has written extensively on faculty political issues. He is the co-author of a 2007 report that found that while professors may lean left, they do so less than is imagined and less uniformly across institution type than is imagined.
Gross said it was important to remember that the percentages saying they would discriminate in various ways are answering yes to a relatively low bar of "somewhat." He also said that the numbers would have been "more meaningful" if they had asked about actual behavior by respondents in the last year, not the more general question of whether they might do these things.
At the same time, he said that the numbers "are higher than I would have expected." One theory Gross has is that the questions are "picking up general political animosity as much as anything else."
If you are wondering about the political leanings of the social psychologists who conducted the study, they are on the left. Inbar said he describes himself as "a pretty doctrinaire liberal," who volunteered for the Obama campaign in 2008 and who votes Democrat. His co-author, Joris Lammers of Tilburg, is to Inbar's left, he said.
What most impressed him about the issues raised by the study, Inbar said, is the need to think about "basic fairness."
While I can see Lammers' point that this is disturbing from a fairness perspective for people grinding their way through academia, and should serve as a warning for right-wing LessWrong readers working through the system, I find it much more concerning that our heavy reliance on academia for our map of reality might lead us to inherit such distortions. Overall, in light of this, if a widely accepted conclusion from social psychology favours a "right wing" perspective, it is more likely to be correct than it would be if no such biases against such perspectives existed. Conclusions that favour a "left wing" perspective are also somewhat less likely to be true than if no such biases existed. We should update accordingly.
I also think there are reasons to think we may have similar problems on this site.
[Link] You Have No Idea How Wrong You Are
https://www.youtube.com/watch?v=E8V8rtdXnLA&feature
Be sure to make it to the last 5 minutes of the lecture as the tone shifts significantly.
In Defense of Tone Arguments
Suppose, for a moment, you're a strong proponent of Glim, a fantastic new philosophy of ethics that will maximize truth, happiness, and all things good, just as soon as 51% of the population accepts it as the true way; once it has achieved majority status, careful models in game theory show that Glim proponents will be significantly more prosperous and happy than non-proponents (although everybody will benefit on average, according to its models), and it will take over.
Glim has stalled, however; it's stuck at 49% belief, and a new countermovement, antiGlim, has arisen, claiming that Glim is a corrupt moral system with fatal flaws which will destroy the country if it has its way. Belief is starting to creep down, and those who accepted the ideas as plausible but weren't ready to commit are starting to turn away from the movement.
In response, a senior researcher of Glim ethics has written a scathing condemnation of antiGlim as unpatriotic, evil, and determined to keep the populace in a state of perpetual misery to support its own hegemony. He vehemently denies that there are any flaws in the moral system, and refuses to entertain antiGlim in a public debate.
In response to this, belief creeps slightly up, but acceptance goes into a freefall.
You immediately ascertain that the negativity was worse for the movement than the criticisms; you write a response, and are accused of attacking the tone and ignoring the substance of the arguments. Glim and antiGlim leadership proceed into protracted and nasty arguments, until both are highly marginalized and ignored by the general public. Belief in Glim continues, but when the leaders of antiGlim and Glim finally arrive at a bitterly agreed-upon conclusion - the arguments having centered on an actual error in the original formulation of Glim philosophy - they're unable either to get their remaining supporters to cooperate, or to get any of the public to listen. Truth, happiness, and all things good never arise, and things get slightly worse, as a result of the error.
Tone arguments are not necessarily logical errors; they may be invoked by those who agree with the substance of an argument who nevertheless may feel that the argument, as posed, is counterproductive to its intended purpose.
I have stopped recommending Dawkins's work to people who are on the fence about religion. The God Delusion utterly destroyed his effectiveness at convincing people against religion. (In a world in which they couldn't do an internet search on his name, it might not matter; we don't live in that world, and I assume other people are as likely to investigate somebody as I am.) It doesn't even matter whether his facts are right or not; the way he presents them will put most people on the intellectual defensive.
If your purpose is to convince people, it's not enough to have good arguments, or good facts; these things can only work if people are receptive to those arguments and those facts. Your first move is your most important - you must try to make that person receptive. And if somebody levels a tone argument at you, your first consideration should not be "Oh! That's DH2, it's a fallacy, I can disregard what this person has to say!" It should be - why are they leveling a tone argument at you to begin with? Are they disagreeing with you on the basis of your tone, or disagreeing with the tone itself?
Or, in short, the categorical assessment of "Responding to Tone" as either a logical fallacy or a poor argument is incorrect, as it starts from an unfounded assumption that the purpose of a tone response is, in fact, to refute the argument. In the few cases I have seen responses to tone which were utilized against an argument, they were in fact ad-hominems, of the formulation "This person clearly hates [x], and thus can't be expected to have an unbiased perspective." Note that this is a particularly persuasive ad-hominem, particularly for somebody who is looking to rationalize their beliefs against an argument - and that this inoculation against argument is precisely the reason you should, in fact, moderate your tone.
Challenge: change someone's mind
Pick one (or several) of the following. I've used specific examples, but anything similar still counts.
1. You have a friendly new acquaintance who is pretty much an average person. He is a theist and doesn't believe in evolution; you have already had a polite debate about that. Convince him to believe the truth*.
2. One of your friends is very deeply religious - he has already invested a lot of his life in religion. Unexpectedly, he is also highly rational (as a personality) and very intelligent: he is studying for a technical degree (and enjoys it), he has read books about critical thinking (he even knows a little about biases), and he says that he will stop believing in religion if you disprove it. Debating with him so far hasn't helped (he also isn't too well-versed - he isn't aware of expected value and similar ideas). For his own good, convince him to change his mind in the direction of the truth. He is wasting huge potential, and that's bad not only for him, but also for humanity. Also, he will feel more comfortable in his new, more sensible beliefs.
3. Your brother dislikes you because of an impression of you that was formed several years ago and hasn't been updated to reflect the changes in your personality. You easily make impressions on other people that are vastly different from his impression of you. Change his impression, so that he sees you truthfully.
[I have removed 4., because it wasn't about changing the mind of someone who isn't a rationalist, but about coming up with a good psychological mechanism - it deserves an entirely new thread. I suspect that 3 might be too different from 1 and 2, but it's too late to make so big a change to the thread.]
I know at least one person for each category, and I haven't been able to change anybody's mind. Have you succeeded in a similar situation? Regardless of whether you have, what strategies do you think would be winning in these situations? If some of them sound good, I might even try them out and share the results. I'm especially curious about how to approach #3, because if there is a way, it would come from low-level psychology, which is something I adore.
So, the aim of this thread is for the participants to try and change someone's mind and then tell the story.
(Also, I'm willing to accept ideas for other templates of classic situations similar to these; in fact I think I had one or two more ideas, but I can't seem to recall them.)
*Needless to say, if at any point anyone proves to you that his direction is in fact the truth, it would be better to change yourself in that direction instead, but that's outside the scope of the thread.
Confused about Solomonoff induction
Why wouldn't the probability of two algorithms of different lengths appearing approach the same value as longer strings of bits are searched?
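One way to see why not (a sketch of my reading of the question, not an authoritative answer): under the standard 2^-length prior, a program of length k "owns" every bit string that begins with its k-bit code, so its measure among uniformly random strings is 2^-k no matter how long the strings being searched are. The longer program never catches up. A toy Monte Carlo in Python:

```python
import random

random.seed(0)

def prefix_frequency(program, total_length, trials=200_000):
    """Estimate how often a uniformly random bit string of the given
    total length begins with `program`. Only the first len(program)
    bits matter, so the frequency depends on len(program) alone."""
    hits = 0
    for _ in range(trials):
        s = tuple(random.randint(0, 1) for _ in range(total_length))
        if s[: len(program)] == tuple(program):
            hits += 1
    return hits / trials

short = (1, 0, 1)           # 3-bit "program": measure 2^-3 = 0.125
long_ = (1, 0, 1, 1, 0, 0)  # 6-bit "program": measure 2^-6 ~ 0.0156

# Searching longer strings doesn't change either frequency:
for n in (10, 20, 40):
    print(n, prefix_frequency(short, n), prefix_frequency(long_, n))
```

The ratio of the two programs' weights stays 2^(6-3) = 8 at every string length: the prior is a measure over infinite sequences, and a shorter program simply claims a larger cylinder set of continuations.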
Moderate alcohol consumption inversely correlated with all-cause mortality
My roommate recently sent me a review article that LW might find interesting:
Conclusions: Low levels of alcohol intake (1-2 drinks per day for women and 2-4 drinks per day for men) are inversely associated with total mortality in both men and women. Our findings, while confirming the hazards of excess drinking, indicate potential windows of alcohol intake that may confer a net beneficial effect of moderate drinking, at least in terms of survival.
Personal observation says that LWers tend not to drink very much or often. Perhaps that should change, to the degree suggested by the article?
Full article here.