
Comment author: [deleted] 27 December 2010 02:05:34AM -1 points

Try to be objective and consider whether a donation to the Singularity Institute is really the most efficient charitable "investment." Here's a simple argument that it's most unlikely: what's the probability that posters would simply stumble on the very most efficient investment? Finding it requires research. Rationalists shouldn't accede this way to the availability heuristic, which leads the donor to choose whichever recipient is most readily accessible to consciousness.

Relying on heuristics where their use is irrational isn't, however, the main reason the Singularity Institute is an attractive recipient for posters to Less Wrong. The first clue is the celebration of those who have made donations and the eagerness of the celebrated to disclose their contributions.

Donations are almost entirely signaling. The donations disclosed in comments here signal your values, or more precisely, what you want others to believe are your values. The Singularity Institute is hailed here; donations signal devotion to a common cause. Yes, even donating based on efficiency criteria is signaling, much as other donations are: it signals that the donor is devoted to rationality.

The inconsistent over-valuation of the Singularity Institute might be part of the explanation for why rationality sometimes seems not to pay off: the "rational" analyze everyone's behavior but their own. When it comes to their own foibles, rationalists abdicate rationality in evaluating their own altruism.

Comment author: AlexU 27 December 2010 11:31:28PM 3 points

Why has this comment been downvoted so much? It's well-written and makes some good points. I find it really disheartening to come on here and find that a community of "rationalists" is so quick to muffle anyone who disagrees with the LW collective opinion.

In response to Were atoms real?
Comment author: AlexU 14 December 2010 02:59:11PM 10 points

Half the more "philosophical" posts on here seem like they're trying to reinvent the wheel. This issue has been discussed a lot by philosophers and there's already an extensive literature on it. Check out http://plato.stanford.edu/entries/scientific-realism/ for starters. There's nothing wrong with talking about things that have already been discussed, of course, but it would probably be good at least to acknowledge that this is a well-established area of thought, with a known name and a lot of sophisticated thinking already underway, rather than writing with the mindset that Less Wrong is single-handedly inventing Western philosophy from scratch.

In response to Pain
Comment author: AlexU 03 August 2009 02:11:36PM 0 points

The difficulty of answering this question suggests one possibility: pain might very well be the only intrinsically bad thing there is. Pain is bad simply because it is bad, in a way that nothing else is. It could be argued that the "goodness" or "badness" of everything else is reducible to how much pain qualia it causes or prevents.

Comment author: AlexU 24 July 2009 02:31:16PM 1 point

Lots of great stuff in this post. Don't have time to comment on anything in particular, but just wanted to say: this is the best-written piece I've ever seen on lesswrong. Keep writing.

Comment author: prase 01 July 2009 08:52:13AM 0 points

I have never understood the difference between weak and strong atheism. Either I think God probably exists or that he probably doesn't, but what's the difference between lack of belief in a proposition and belief in its negation? Is it that, say, someone who thinks God doesn't exist with p=0.8 is a weak atheist, while with p=1-10^(-9) he would be a strong one? Or is a weak atheist only someone who has suspended judgement (and what's the difference from an agnostic, then)?

Comment author: AlexU 01 July 2009 01:54:37PM 3 points

Quick: is there an 85-year-old former plumber by the name of Saul Morgan eating a breakfast of steak and eggs in a diner on the North Side of Chicago right now? Who knows, right? You certainly don't have an affirmative belief that there is, but it's also true that, perhaps up until this moment, you didn't affirmatively believe that there wasn't such a man either. Lacking a belief in something is not the same as believing its negation. To affirmatively believe in the non-existence of every conceivable entity, or the falsity of every proposition, would require an infinite number of beliefs.

Comment author: AlexU 03 May 2009 07:59:47PM 0 points

I eat anything. I make a conscious choice to eat healthy stuff and avoid junk food and simple carbs when convenient. My preferred eating pattern is basically to graze all day long. That, along with a general indifference toward food (I find eating to be a bit of an irritating necessity, and never have cravings for anything), is enough to keep me trim. It's probably worth noting that I wasn't always this way; up through college, I loved eating crap foods, sweets, carbs, soda, etc. Permanent preference changes take time, but they can happen.

Most vegetarians/vegans strike me as sanctimonious twits, who are more often than not no healthier than anyone else.

Comment author: jimrandomh 01 May 2009 06:19:49PM 2 points

Suppose that I live on a holodeck but don't know it, such that anything I look at closely follows reductionist laws, but things farther away only follow high-level approximations, with some sort of intelligence checking the approximations to make sure I never notice an inconsistency. Call this the holodeck hypothesis. Suppose I assign this hypothesis probability 10^-4.

Now suppose I buy one lottery ticket, for the first time in my life, costing $1 with a potential payoff of $10^7 with probability 10^-8. If the holodeck hypothesis is false, then the expected value of this is $10^7 × 10^-8 - $1 = -$0.90. However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3. (This only applies to the first ticket, since someone who would rig the lottery in this way would be most likely to do so on their first chance, not a later chance.) In that case, the expected payoff is $10^7 × 10^-3 - $1 ≈ $10^4. Combining these two cases, weighting the holodeck case by its 10^-4 probability, the expected payoff for buying a lottery ticket is about +$0.10.
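A minimal sketch of the arithmetic above in Python, assuming only the numbers as stated: a $1 ticket, a $10^7 jackpot, win probabilities of 10^-8 and 10^-3 under the two hypotheses, and the 10^-4 prior on the holodeck hypothesis (the variable names are illustrative, not from the comment):

```python
ticket_cost = 1.0        # $1 per ticket
jackpot = 1e7            # $10^7 potential payoff
p_win_normal = 1e-8      # win probability if the holodeck hypothesis is false
p_win_holodeck = 1e-3    # win probability if someone outside the simulation rigs it
p_holodeck = 1e-4        # prior probability assigned to the holodeck hypothesis

ev_normal = jackpot * p_win_normal - ticket_cost       # about -$0.90
ev_holodeck = jackpot * p_win_holodeck - ticket_cost   # $9,999, roughly $10^4
ev_combined = p_holodeck * ev_holodeck + (1 - p_holodeck) * ev_normal

print(ev_normal, ev_holodeck, ev_combined)             # -0.9, 9999.0, ~ +0.10
```

The combined value comes out slightly positive only because the small 10^-4 prior multiplies a very large conditional payoff; with a lower prior or a lower rigged-win probability, the ticket goes back to being a losing bet.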

At some point in the future, if there is a singularity, it seems likely that people will be born for whom the holodeck hypothesis is true. If that happens, then the probability estimate will go way up, and so the expected payoff from buying lottery tickets will go up, too. This seems like a strong argument for buying exactly one lottery ticket in your lifetime.

Comment author: AlexU 02 May 2009 03:37:51AM -1 points

However, if the holodeck hypothesis is true, then someone outside the simulation might decide to be nice to me, so the probability that it will win is more like 10^-3.

Um, what?

Comment author: Roko 01 May 2009 05:16:51PM 0 points

If you are religious (in the theistic sense, which is really what we're likely to encounter and what I'm talking about), you believe that there is a divine agent watching over us. This has obvious false implications concerning the singularity.

Suppose you tell a theist that there's a serious risk that smarter-than-human AI could wipe out the whole human race. They'll be thinking "this couldn't happen, God would prevent it" or "oh, it's OK, I'll go to heaven if this happens". Wherever the argument goes next, you are talking to someone whose background assumptions are so radically different from yours that you won't get anything useful out of them.

The reason this differs from most other subjects is that the religious conception of divine intervention is tailored to be consistent with our everyday observations. Thus any religious person who is vaguely sane will have some argument as to why God doesn't prevent earthquakes from killing random people. So God allows small injustices and crimes, but the main point is that everything will be OK in the end, i.e. the ultimate fate of our world is not in question.

The debate concerning the Singularity is directly about this question.

Comment author: AlexU 01 May 2009 06:39:46PM 0 points

Your conception of "theism" -- a tremendously broad concept -- is laughably caricatured and narrow, and it pollutes whatever argument you're trying to make: absolutely none of the logic in the above post follows in the way you think it does.

Comment author: Roko 01 May 2009 04:50:17PM 0 points

The fact that a believer in a loving and all-powerful god can't really be taken seriously on the singularity is not a claim about their character, and thus doesn't qualify as ad hominem. It is a claim about the arguments they are going to put forward: in the presence of the background assumption that there's a loving god watching over us, you can't make sensible decisions about the singularity.

Comment author: AlexU 01 May 2009 06:36:01PM 2 points

Discounting an argument because of the person making it is pretty much the textbook definition of the ad hominem fallacy.

Also, it should go without saying that being a theist doesn't automatically mean one believes in a loving and all-powerful god watching over us. And anyway, I still don't follow the logic that being a theist means one can't make sensible decisions about the Singularity (insofar as one can say there are "sensible decisions" to be made about something that's basically a sci-fi construct at this point).

Comment author: Roko 01 May 2009 07:47:02AM -3 points

If they want to come here and talk about prisoners dilemmas or the Singularity or something, then of course we should welcome their opinions.

Also disagreeing here. I don't value a religious person's arguments relating to the singularity at all, and whilst I think we should tolerate them in the interest of free speech, this should be done grudgingly and with disclaimers like "this person cannot have a sensible view on the singularity; treat their output on the subject as noise".

This is because, if you are religious (in the theistic sense, which is really what we're likely to encounter and what I'm talking about), you believe that there is a divine agent watching over us. This has obvious false implications concerning the singularity.

Suppose you tell a theist that there's a serious risk that smarter-than-human AI could wipe out the whole human race. They'll be thinking "this couldn't happen, God would prevent it" or "oh, it's OK, I'll go to heaven if this happens". Wherever the argument goes next, you are talking to someone whose background assumptions are so radically different from yours that you won't get anything useful out of them.

The reason this differs from most other subjects is that the religious conception of divine intervention is tailored to be consistent with our everyday observations. Thus any religious person who is vaguely sane will have some argument as to why God doesn't prevent earthquakes from killing random people. So God allows small injustices and crimes, but the main point is that everything will be OK in the end, i.e. the ultimate fate of our world is not in question. The debate concerning the Singularity is directly about this question.

There are other failure modes which theists will have disproportionately over atheists, of course. To me it seems that an unerring and (essentially) non-evidence-based belief that everything will turn out OK is indictment enough.

Amongst the other failure modes: belief in the existence of souls and in the divine place of human intelligence is likely to produce skewed beliefs about the possibility of synthetic intelligence. Various results of dark-side epistemology, such as disbelief in evolution, belief in "free will", belief in original sin, and belief in moral realism ("god-given morality"), could prevent something like CEV. I've heard the following fallacious argument against the transhumanist project from a lot of theists: humans are imperfect, so the only way to improve ourselves is to take advice from a perfect being; imperfection cannot lead to less-imperfection.

Comment author: AlexU 01 May 2009 04:25:33PM 1 point

You've never heard of the ad hominem fallacy, I take it?
