
Comment author: [deleted] 27 December 2010 02:05:34AM -1 points [-]

Try to be objective and consider: is a donation to the Singularity Institute really the most efficient charitable "investment"? Here's a simple argument that it's most unlikely. What's the probability that posters would stumble on the very most efficient investment? Finding it requires research. Rationalists don't accede this way to the availability heuristic, which leads the donor to choose whichever recipient is most readily accessible to consciousness.

Relying on heuristics where doing so is irrational, however, isn't the main reason the Singularity Institute is an attractive recipient for posters to Less Wrong. The first clue is the celebration of those who have made donations, and the eagerness of the celebrated to disclose their contributions.

Donations are almost entirely signaling. The donations disclosed in comments here signal your values, or more precisely, what you want others to believe your values are. The Singularity Institute is hailed here; donations signal devotion to a common cause. Yes, even donating on efficiency criteria is signaling, much like other donations: it signals that the donor is devoted to rationality.

The inconsistent over-valuation of the Singularity Institute might be part of the explanation for why rationality sometimes seems not to pay off: the "rational" analyze everyone's behavior but their own. When it comes to their own foibles, rationalists abdicate rationality in evaluating their own altruism.

Comment author: AlexU 27 December 2010 11:31:28PM *  3 points [-]

Why has this comment been downvoted so much? It's well-written and makes some good points. I find it really disheartening to come on here and see that a community of "rationalists" is so quick to muffle anyone who disagrees with the LW collective opinion.

In response to Were atoms real?
Comment author: AlexU 14 December 2010 02:59:11PM *  10 points [-]

Half the more "philosophical" posts on here seem like they're trying to reinvent the wheel. This issue has been discussed at length by philosophers, and there's already an extensive literature on it. Check out http://plato.stanford.edu/entries/scientific-realism/ for starters. Nothing wrong with talking about things that have already been talked about, of course, but it would probably be good at least to acknowledge that this is a well-established area of thought, with a known name and a lot of sophisticated thinking already done, rather than having the mindset that Less Wrong is single-handedly reinventing Western philosophy from scratch.

In response to Pain
Comment author: AlexU 03 August 2009 02:11:36PM *  0 points [-]

The difficulty of answering this question suggests one possibility: pain might very well be the only intrinsically bad thing there is. Pain is bad simply because it is bad, in a way that nothing else is. It could be argued that the "goodness" or "badness" of everything else is reducible to how much pain qualia it causes or prevents.

Comment author: AlexU 24 July 2009 02:31:16PM *  1 point [-]

Lots of great stuff in this post. Don't have time to comment on anything in particular, but just wanted to say: this is the best-written piece I've ever seen on lesswrong. Keep writing.

Comment author: prase 01 July 2009 08:52:13AM 0 points [-]

I have never understood the difference between weak and strong atheism. Either I think God probably exists or I think he probably doesn't, but what's the difference between lack of belief in a proposition and belief in its negation? Is it that, say, someone who thinks God doesn't exist with p=0.8 is a weak atheist, while with p=1-10^(-9) he would be a strong one? Or is a weak atheist only someone who has suspended judgement (and what's the difference from an agnostic, then)?

Comment author: AlexU 01 July 2009 01:54:37PM 3 points [-]

Quick: is there an 85-year-old former plumber by the name of Saul Morgan eating a breakfast of steak and eggs in a diner on the North Side of Chicago right now? Who knows, right? You certainly don't have an affirmative belief that there is, but it's also true that, perhaps up until this moment, you didn't affirmatively believe that there wasn't such a man either. Lacking a belief in something is not the same as believing its negation. To affirmatively believe in the non-existence of every conceivable entity, or the falsity of every proposition, would require an infinite number of beliefs.

Comment author: AlexU 03 May 2009 07:59:47PM 0 points [-]

I eat anything. I make a conscious choice to eat healthy stuff and avoid junk food and simple carbs when convenient. My preferred eating pattern is basically to graze all day long. That, along with a general indifference toward food (I find eating to be a bit of an irritating necessity, and never have cravings for anything), is enough to keep me trim. Probably worth noting that I wasn't always this way; up through college, I loved eating crap food, sweets, carbs, soda, etc. Permanent preference changes take time, but they can happen.

Most vegetarians/vegans strike me as sanctimonious twits, who are more often than not no healthier than anyone else.

Comment author: jimrandomh 01 May 2009 06:19:49PM 2 points [-]

Suppose that I live on a holodeck but don't know it, such that anything I look at closely follows reductionist laws, but things farther away only follow high-level approximations, with some sort of intelligence checking the approximations to make sure I never notice an inconsistency. Call this the holodeck hypothesis. Suppose I assign this hypothesis probability 10^-4.

Now suppose I buy one lottery ticket, for the first time in my life, costing $1 with a potential payoff of $10^7 with probability 10^-8. If the holodeck hypothesis is false, then the expected value of this is $10^7 × 10^-8 - $1 = -$0.90. However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3. (This only applies to the first ticket, since someone who would rig the lottery in this way would be most likely to do so on their first chance, not a later chance.) In that case, the expected payoff is $10^7 × 10^-3 - $1 ≈ $10^4. Combining these two cases, the expected payoff for buying a lottery ticket is +$0.10.
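As a quick sanity check of that arithmetic, here is a minimal Python sketch; all the numbers are just the assumptions stated above (10^-4 prior on the holodeck, 10^-8 and 10^-3 win probabilities), not real lottery odds:

```python
# Expected-value check for the one-ticket argument above.
# All figures are the assumptions stated in this comment, not real lottery odds.
p_holodeck = 1e-4       # assumed prior that the holodeck hypothesis is true
ticket_cost = 1.0       # dollars
payoff = 1e7            # dollars

p_win_normal = 1e-8     # ordinary odds of winning
p_win_holodeck = 1e-3   # assumed odds if someone outside the simulation rigs it

ev_normal = payoff * p_win_normal - ticket_cost       # -0.90
ev_holodeck = payoff * p_win_holodeck - ticket_cost   # 9999.0, i.e. ~10^4

ev_combined = (1 - p_holodeck) * ev_normal + p_holodeck * ev_holodeck
print(ev_normal, ev_holodeck, ev_combined)            # -0.9, 9999.0, ~ +0.10
```

The positive total comes almost entirely from the holodeck branch, which contributes about 10^-4 × 10^4 = $1 of expected value against the roughly -$0.90 from the ordinary case.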

At some point in the future, if there is a singularity, it seems likely that people will be born for whom the holodeck hypothesis is true. If that happens, then the probability estimate will go way up, and so the expected payoff from buying lottery tickets will go up, too. This seems like a strong argument for buying exactly one lottery ticket in your lifetime.

Comment author: AlexU 02 May 2009 03:37:51AM -1 points [-]

However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3.

Um, what?

Comment author: AlexU 30 April 2009 01:53:23PM *  6 points [-]

Isn't there an equally well-known bias toward thinking we'll react differently to future events (or behave differently) than most people? That is, we observe that most people don't become happier when they become rich, but we convince ourselves that we're "different" enough that we nonetheless will. I think Dan Gilbert wrote pretty extensively on this in one of those recent "happiness studies" books. Anyway, it seems like there's an obvious tension between the two tendencies.

Comment author: Vladimir_Nesov 29 April 2009 03:26:14PM *  3 points [-]

The historical causes of the different kinds of worldviews held by different people may be similar, but that doesn't make the different worldviews themselves similar. Evolution was implemented on the same kind of physics that fires up the stars, yet a snail is nothing like a giant ball of plasma. The answer to "2+2=" doesn't depend on where you place your faith. Even if you zealously believe that the answer is 78, even if that's what you were taught in school, just like the other kids who were taught different answers, the answer is still 4.

And there is a rational reason to believe the global scientific community, once you grow strong enough to pose the question: they are often right, and they self-check their correctness.

Comment author: AlexU 29 April 2009 03:43:28PM 1 point [-]

Of course, different worldviews may be qualitatively very different, but the point I'm making is that our personal reasons for adopting one over the other aren't all that different. My reasons for believing various scientific findings have much more to do with the sociology of my upbringing and current environment than with the actual truth or falsity of those findings. I did some lab experiments in high school and college, but to extrapolate from those personal verifications to the truth of all scientific findings is to make quite an inductive leap.

Comment author: LongInTheTooth 29 April 2009 01:32:01PM 0 points [-]

Yes, this is the crux of the difference between the two scenarios. We accept many things from authority figures at face value, but they fall into two categories, testable and untestable, and we can easily figure out which is which.

Comment author: AlexU 29 April 2009 03:34:22PM *  1 point [-]

I'm not sure those categories are as meaningful as you think. How many scientific findings are you capable of verifying personally, right now? And believing you're capable of verifying them "in principle" is a different thing altogether...
