Comment author: KPier 01 August 2012 11:21:36PM 6 points [-]

I was accepted to Stanford this spring. At the welcome weekend, we talked a lot with the admissions representatives about what they're looking for - I'd be happy to share tips and my own essays. PM me.

Comment author: KPier 17 July 2012 03:24:50AM 7 points [-]

The July matching drive was news to me; I wonder how many other readers hadn't even heard about it.

Is there a reason this hasn't been announced on LessWrong, e.g. with the usual public-commitment thread?

Also, if a donation is earmarked for CFAR, does the "matching" donation also go to CFAR?

In response to comment by KPier on Rational Ethics
Comment author: OrphanWilde 12 July 2012 08:20:27PM *  0 points [-]

"And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or ""optimized" or "thought-through"."

I think this encapsulates our disagreement.

First, I challenge you to define rationality while excluding those mechanisms. No, I don't really; just consider how you would do it.

Can we define rationality as "a good decision-making process"? (Borrowing from http://lesswrong.com/lw/20p/what_is_rationality/ )

I think the disconnect is in whether we treat the problem as one decision or as two discrete decisions. "A witch did it" is not a rational explanation for something, I hope we can agree, and I hope I established that one can rationally choose to believe it even though it is an irrational belief.

The first decision is about what decision-making process to use. "Blame the witch" is not a good process - it's not a process at all. But when the decision is unimportant, it may be better to use a bad decision-making process than a good one.

Given two decisions, the first about what decision-making process to use and the second the actual decision, you can in fact use a good decision-making process (rationally conclude) that a bad decision-making process (an irrational one) is sufficient for a particular task.

For your examples, picking one to address specifically, I'd suggest that it is ultimately unimportant on an individual basis to most people whether or not to support universal health care; their individual support or lack thereof has almost no effect on whether or not it is implemented. Similarly with abortion and gay marriage.

For effective charities, this decision-making process can be outsourced pretty effectively to somebody who shares your values; most people are religious, and their preacher may make recommendations, for example.

I'm not certain I would consider career choice an ethical decision, per se; I regard it as a case where rationality has a high payoff in almost any circumstances, however, and so agree with you there, even if I disagree with its usefulness as an opposing example for the purposes of this debate.

Comment author: KPier 13 July 2012 02:43:58AM 0 points [-]

Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying "thinking rationally about metaethics is not rational" is using the word in two different ways, and is the reason your post is so confusing to me.

On your example of a witch, I don't actually see why believing that would be rational. But if you take a more straightforward example, say, "Not knowing that your boss is engaging in insider trading, and not looking, could be rational," then I agree. You might rationally choose not to check whether a belief is false.

Why is it necessary to muddy the waters by saying "You might rationally have an irrational belief?"

you can in fact use a good decision-making process (rationally conclude) that a bad decision-making process (an irrational one) is sufficient for a particular task.

Of course. You can decide that learning something has negative expected consequences, and choose not to learn it. Or decide that learning it would have positive expected consequences, but that the value of information is low. Why use the "rational" and "irrational" labels?
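To make the value-of-information point concrete, here is a minimal sketch in Python; every number in it (the chance of being wrong, the costs) is made up purely for illustration:

```python
# Minimal value-of-information sketch with made-up numbers.
# Question: is it worth checking a belief before acting on it?

p_wrong = 0.10           # chance the current belief is mistaken
loss_if_wrong = 50.0     # cost of acting on a mistaken belief
cost_of_checking = 20.0  # time/effort spent verifying the belief

# Assuming checking perfectly reveals the truth, it saves the loss
# exactly when the belief would have been wrong, so:
expected_value_of_info = p_wrong * loss_if_wrong  # = 5.0

if expected_value_of_info > cost_of_checking:
    print("Worth checking before acting.")
else:
    print("Information has positive value, but not enough to justify the cost.")
```

With these numbers the information is worth 5 units but costs 20 to acquire, which is exactly the "positive expected consequences, but low value of information" case.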

Something like half of women will consider an abortion; their support or lack thereof has an enormous impact on whether that particular abortion happens. And if you're proposing this as a general policy, the relevant question is whether it's good for people overall to adopt your heuristic, which makes the question of whether any given one of them can impact politics less relevant. If lots of people adopt your heuristic, it matters.

For effective charities, everyone who gives to the religious organization selected by their church is orders of magnitude less effective than they could be. Thinking for themselves would allow them to save hundreds of lives over their lifetime.

In response to comment by KPier on Rational Ethics
Comment author: OrphanWilde 12 July 2012 04:12:10PM 0 points [-]

You have a good encapsulation of what I'm trying to say, yes.

I'm not arguing against "all moral reasoning from scratch," however, which I would regard as a strawman representation of rational ethics. (It was difficult to wholly avoid an apparent argument against morality from scratch while establishing that rationality is not always rational, and trying to establish this in ethics as well, so I suspect I failed to some extent there, in particular in the bit about the reasons for adopting rational ethics.)

My focus, although it might not have been plain, was primarily on day-to-day decisions; most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.

For most people, a rational ethics system costs far more than it provides in benefits. For a few people, it doesn't; either because they (like me) enjoy the act of calculation itself, or because they (say, a priest, or a counselor) are in a position such that they regularly encounter such Moral Questions, and must be capable of answering them sufficiently. We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

Comment author: KPier 12 July 2012 07:49:37PM 0 points [-]

most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.

Agree.

For most people, a rational ethics system costs far more than it provides in benefits.

I don't think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious questions do arise is definitely worth it, and I think they arise more often than you realize (donating to effective/efficient charity, choosing a career, supporting/opposing gay marriage or abortion or universal health care).

We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

So are you saying that you agree people ought to spend time considering arguments for various moral systems, but that they shouldn't all bother with metaethics? Agreed. Or are you saying they shouldn't bother with thinking about "morality" at all, and should just consider the arguments for and against (for example) abortion independent of a bigger system?

And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or "optimized" or "thought-through".

I think you could improve the post - and make your point clearer - by replacing "rational" with one of these words.

In response to Rational Ethics
Comment author: KPier 12 July 2012 03:51:52PM 2 points [-]

I think what you're trying to say is:

"Morally as computation" is expensive, and you get pretty much the same results from "morality as doing what everyone else is doing." So it's not really rational to try to arrive at a moral system through precise logical reasoning, for the same reasons it's not a good idea to spend an hour evaluating which brand of chips to buy. Yeah, you might get a slightly better result - but the costs are too high.

If that's right, here are my thoughts:

Obviously you don't need to do all moral reasoning from scratch. There aren't many people (on LessWrong or off) who think that you should. The whole point of Created Already in Motion is that you can't do all moral reasoning from scratch. Or, as Yvain put it in his Consequentialism FAQ, you don't need a complete theory of ballistics to avoid shooting yourself in the foot.

That said, "rely on society" is a flawed enough heuristic that almost everyone ought to do some moral reasoning for themselves. The majority of people tend to reject consequentialism in surveys, but there are compelling logical reasons to accept it. Death is widely consideed to be good, and seeking immortality to be immoral, but doing a bit of ethical reasoning tends to turn up different answers.

Moral questions have far greater consequences than day-to-day decisions; they're probably worth a little more of our attention.

(My main goal here is identifying points of disagreement, if any. Let me know if I've interpreted your post correctly.)

Comment author: ScottMessick 06 July 2012 08:34:16PM 6 points [-]

But this elegant simplicity was, like so many other things, ruined by the Machiguenga Indians of eastern Peru.

Wait, is this a joke, or have the Machiguenga really provided counterexamples to lots of social science hypotheses?

Comment author: KPier 07 July 2012 04:14:18AM 3 points [-]

He also says:

As in so many other areas, our most important information comes from reality television.

I'm guessing both are a joke.

Comment author: James_Miller 02 June 2012 06:17:49AM 13 points [-]

If you will need to convince a professor to someday give you a passing grade on this work I hope you are taking into account that most professors would consider what you are doing to be evil. Never, ever describe this kind of work on any type of graduate school application. Trust me, I know a lot about this kind of thing.

Comment author: KPier 02 June 2012 09:00:12PM 1 point [-]

Your article describes the consequences of being perceived as "right-wing" on American campuses. Is pick-up considered "right-wing"? Or is your point more generally that students do not have as much freedom of speech on campus as they think?

I'm specifically curious about the claim that most professors would consider what you are doing to be evil. Is that based on personal experience with this issue?

Comment author: lucent 18 March 2012 05:29:31PM 3 points [-]

Hi. Long time reader, first time poster (under a new name). I posted once before, then quit because I am not good at math and this website doesn't offer many worked-out examples of Bayes' theorem problems.

I have looked for a book or website that gives algebraic examples of basic Bayesian updates. While there are many books that cover Bayes' theorem, they all require calculus, which I have not taken.

In a new article by Kaj_Sotala, fallacies are interpreted in the light of Bayes' theorem. I would like to participate in debates and discussions where I can identify common fallacies and try to quantify them using Bayesian methods, which should require not calculus but only simple algebra and basic probability.

However, if someone could simply create an article with a few worked examples of Bayesian updating, that would still be very helpful. I have read the explanations, but I am just not very good at math. I passed college trig, algebra, and precalculus with A's and B's, but I flunked out of calculus. Maybe in the future, when I am more financially secure, I can spend the time to really understand more complicated Bayesian updates.

Right now, I feel like there is a real need for some basic worked-out examples. Not long explanations, just problems with the math worked out. Preferably non-calculus-based problems.

Comment author: KPier 19 March 2012 04:04:48AM 1 point [-]

My favorite explanation of Bayes' Theorem barely requires algebra. (If you don't need the extended explanation, just scroll to the bottom, where the problem is solved.)
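Since worked-out examples seem to be exactly what's wanted, here is one spelled out in Python: the classic (hypothetical) mammography-screening problem, which needs nothing beyond arithmetic. The specific percentages are the standard illustrative ones, not real clinical data:

```python
# Worked Bayesian update: the classic (hypothetical) mammography example.
# Suppose 1% of women screened have breast cancer; the test detects 80% of
# cancers and gives false positives 9.6% of the time.

prior = 0.01                 # P(cancer)
p_pos_given_cancer = 0.80    # P(positive | cancer)
p_pos_given_healthy = 0.096  # P(positive | no cancer)

# Law of total probability: P(positive)
p_pos = p_pos_given_cancer * prior + p_pos_given_healthy * (1 - prior)

# Bayes' theorem: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
posterior = p_pos_given_cancer * prior / p_pos

print(f"P(cancer | positive test) = {posterior:.3f}")  # ~= 0.078
```

So a positive test moves the probability from 1% up to only about 7.8%, because true positives (0.8 x 0.01) are swamped by false positives (0.096 x 0.99). That one computation is the whole update: multiply prior by likelihood, divide by the total probability of the evidence.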

Comment author: KPier 16 March 2012 03:32:43AM 7 points [-]

Chapter 79:

I think we're supposed to be able to figure this one out. My mental model of Eliezer says he thinks he's given us more than enough hints, and we have a week to wait despite it being a short, high-tension chapter. He makes a big deal out of how Harry only has thirty hours, which isn't enough; he gives us a week, and a lot of information Harry doesn't have.

Who benefits from isolating Harry from both of his friends, and/or making him do something stupid to protect Hermione in front of the most powerful people in the Wizarding World?

Evidence against Quirrell as Hat-and-Cloak: Apart from everything that's already been discussed, he's been trying to strengthen Harry. He chose Draco and Hermione for the armies knowing that the likely outcome would be them getting closer (especially when he set them up against Harry).

Evidence for Quirrell as Hat-and-Cloak: Apart from what has already been discussed, he seemed very interested when Harry mentioned Lucius's threat to set aside everything to protect Draco. And there's that line in the most recent author's note:

anything you think won’t confuse the readers, will.

Which implies we're overthinking this and the obvious answer is the right one.

Quirrell conveniently rescuing Draco after seven hours makes sense if we assume he's also the one who almost killed him.

Evidence I can't sort: Quirrell's admission during interrogation can't have been an accident, and doesn't seem to serve his interests whether he's Hat-and-Cloak or not. If he is, he presumably wants to isolate Harry so he can talk him into stage 2 of the plan - but for that, he needs to be at Hogwarts or otherwise have access to Harry. If he's not Hat-and-Cloak, there's not much reason for him to tie himself up in the Ministry.

Unless he doesn't want Harry to be able to contact him and he wants to have a plausible reason for being unreachable?

I think this makes me update more toward "Quirrell is Hat-and-Cloak," but I'm not convinced.

Comment author: mstevens 12 March 2012 01:36:45PM 7 points [-]

Possible reference for the Chapter 78 title:

http://faculty.bschool.washington.edu/ryalch/M581/Postmodern/McGraw-Tetlock.pdf

"Taboo Trade-Offs, Relational Framing, and the Acceptability of Exchanges" - A. Peter McGraw (University of Colorado, Boulder) and Philip E. Tetlock (University of California, Berkeley)

Comment author: KPier 13 March 2012 12:32:51AM *  8 points [-]

It's also mentioned in Circular Altruism.

This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).

My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.

Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful. To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...

I'm sure there's a hint in there, but I don't know what it is.
