I suspect that clearly defining open rationality problems would act as a focusing lens for action, not a demotivator. Please do publish your list of open rationality problems. Do for us what Hilbert did for mathematicians. But you don't have to talk about 'drowning.' :)
Second the need for a list of the most important problems.
How do you record your findings for future use, and how do you make sure you don't forget the important parts?
This essay by David Friedman is probably the best treatment of the subject of Schelling points in human relations:
http://www.daviddfriedman.com/Academic/Property/Property.html
Applying these insights to the fat man/trolley problem, we see that the horrible thing about pushing the man is that it transgresses the gravest and most terrible Schelling point of all: the one that defines unprovoked deadly assault, whose violation is understood to give the other party the license to kill the violator in self-defense. Normally, humans see such crucial Schelling points as sacrosanct. They are considered violable, if at all, only if the consequentialist scales are loaded to a far more extreme degree than in the common trolley problem formulations. Even in the latter case, the act will likely cause serious psychological damage, probably an artifact of an additional layer of commitment not to violate them, which may also serve as a safeguard against rationalizations.
Now, the utilitarian may reply that this is just human bias, an unfortunate artifact of evolutionary psychology, and that we'd all be better off if people instead made decisions according to pure utilitarian calculus. However, even ignoring all the other fatal problems of utilitarianism, this view is utterly myopic. Humans are able to coordinate and cooperate because we respect the Schelling points (almost) no matter what, and we can trust that others will do likewise. If this were not so, you would have to be constantly alert that anyone might rob, kill, cheat, or injure you at any moment because their cost-benefit calculations favored doing so, even if those calculations were made in terms of the most idealistic altruistic utilitarianism. Clearly, no organized society could exist in that case: even if, with unlimited computational power and perfect strategic insight, agents could compute their way to cooperation, doing so is plainly impractical for humans.
It is, however, possible in practice for humans to evaluate each other's personalities and figure out whether others' decision algorithms, so to speak, respect these constraints. Think of how people react when they realize that someone has a criminal history or sociopathic tendencies. This person is immediately perceived as creepy and dangerous, and with good reason: people realize that his decision algorithm lacks respect for the conventional Schelling points, so that normal trust and relaxed cooperation with him is impossible, and one must be on the lookout for nasty surprises. Similarly, imagine meeting someone who was in the fat man/trolley situation and who mechanically made the utilitarian decision and pushed the man without a twinge of guilt. Even the most zealous utilitarian will in practice be creeped out by such a person, even though he should theoretically perceive him as an admirable hero. (As always when it comes to ideology, people may be big on words but usually know better when their own welfare is at stake.)
(This comment is also cursory and simplified, and an alert reader will likely catch multiple imprecisions and oversimplifications. This is unfortunately unavoidable because of the complexity of the topic. However, the main point stands regardless. In particular, I haven’t addressed the all too common cases where cooperation between people breaks down and all sorts of conflict ensue. But this analysis would just reinforce the main point that cooperation critically depends on mutual recognition of near-unconditional respect for Schelling points.)
Can you explain why this analysis renders directing away from the five and toward the one permissible?
What, you mean in mainstream philosophy? I don't think mainstream philosophers think that way, even Quineans. The best ones would say gravely, "Yes, goals are important" and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.
I actually don't think this is quite right. Last time I asked a philosopher about this, they pointed to an article by someone (I.J. Good, I think) about how to choose the most valuable experiment, given your goals, using decision theory.
If GiveWell really does influence a substantial amount of philanthropy, then I would consider it as a public good charity with the multiplier that implies. Is there data on its influence and projected influence?
I recall a while back that Vassar was talking with GiveWell about rating SIAI. Has anything come of that?
Here's another one: what I call the layshadow heuristic: could an intelligent layperson produce passable, publishable work [1] in that field after a few days of self-study? It's named after the phenomenon in which someone with virtually no knowledge of a field ghostwrites papers for clients who don't want to do the work themselves, is never discovered, and sees those clients granted degrees.
The heuristic works because passing it implies very low inferential distance and therefore very little knowledge accumulation.
[1] specifically, work that unsuspecting "experts" in the field cannot distinguish from that produced by "serious" researchers with real "experience" and "education" in that field.
For how many fields do you think this is possible?
Epic.
Agreed. And I'm skeptical of both. You?
Hard to be confident about these things, but I don't see the problem with external reasons/oughts. Some people seem to have some kind of metaphysical worry...harder to reduce or something. I don't see it.
Categorical oughts and reasons have always confused me. What do you see as the difference, and which type of each are you thinking of? The types of categorical reasons with which I'm most familiar are Kant's and Korsgaard's.
R is a categorical reason for S to do A iff R counts in favor of S doing A, and would so count for other agents in a similar situation, regardless of their preferences. If it were true that we always have reasons to benefit others, regardless of what we care about, that would be a categorical reason. I don't use the term "categorical reason" any differently than "external reason".
S categorically ought to do A just when S ought to do A regardless of what S cares about, and it would still be true that S ought to do A in similar situations, regardless of what S cares about. The rule "always maximize happiness" would, if true, ground a categorical ought.
I see very little reason to be more skeptical of categorical reasons than of categorical oughts, or vice versa.
IAWYC, but would like to hear more about why you think the last sentence is supported by the previous sentence. I don't see an easy argument from "X is a terminal value for many people" to "X should be promoted by the FAI." Are you supposing a sort of idealized desire fulfilment view about value? That's fine--it's a sensible enough view. I just wouldn't have thought it so obvious that it would be a good idea to go around invisibly assuming it.