Comment author: jkaufman 04 December 2013 10:06:31PM 12 points

What honest arguments could the blind person use?

This sounds like motivated cognition. "How can I use EA to justify what I already want to do?"

Comment author: atucker 05 December 2013 07:57:49AM 1 point

It seems that "donate to a guide dog charity" and "buy me a guide dog" are pretty different with respect to how much motivated cognition is involved. EAs are still allowed to do expensive things for themselves, or even ask for support in doing so.

Comment author: satt 03 December 2013 02:58:41AM 1 point

investing in asteroid detection may not be the most important long-term thing, but it's at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.

I'm inclined to agree. A possible counterargument does come to mind, but I don't know how seriously to take it:

  1. Global pandemics are an existential risk. (Even if they don't kill everyone, they might serve as civilizational defeaters that prevent us from escaping Earth or the solar system before something terminal obliterates humanity.)

  2. Such a pandemic is much more likely to emerge and become a threat in less developed countries, because of worse general health and other conditions more conducive to disease transmission.

  3. Funding health improvements in less developed countries would improve their level of general health and impede disease transmission.

  4. From the above, investing in the health of less developed countries may well be related to x-risk.

  5. Optional: asteroid detection, meanwhile, is mostly a solved problem.

Point 4 seems to follow from points 1-3. To me point 2 seems plausible; point 3 seems qualitatively correct, but I don't know whether it's quantitatively strong enough for the argument's conclusion to follow; and point 1 feels a bit strained. (I don't care so much about point 5 because you were just using asteroids as an easy example.)

Comment author: atucker 03 December 2013 05:35:05AM 2 points

Though I can come up with a pretty convincing argument for the opposite.

Diseases become drug-resistant through natural selection, which happens only in environments where drugs that target the disease are in use.

Third-world countries have trouble distributing drugs and treatments to everyone in a society, so diseases there are unlikely to be completely eradicated; instead they persist in an environment where drugs are in use. Even within individuals there are problems with treating the disease consistently, so treatment is likely to pressure the disease without curing it.

On the other hand, diseases rarely become drug-resistant when they're not exposed to the drugs.

Therefore, treating people in third-world countries increases the probability of producing drug-resistant strains of existing diseases.
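
To make the selection dynamic concrete, here is a deliberately crude toy model (an editorial sketch, not part of the original comment; every parameter is made up for illustration). Two strains of one disease circulate; treatment cures the drug-sensitive strain in whatever fraction of cases it reaches and does nothing to the resistant strain, while logistic growth caps the total disease burden:

```python
def simulate(coverage, generations=60):
    """Deterministic two-strain toy model. `coverage` is the fraction
    of cases that treatment reaches each generation."""
    sensitive, resistant = 10_000.0, 1.0   # resistance starts as a trace
    r, capacity = 0.5, 20_000.0            # growth rate, carrying capacity
    for _ in range(generations):
        sensitive *= 1.0 - coverage        # treatment cures reached sensitive cases
        total = sensitive + resistant
        growth = 1.0 + r * max(0.0, 1.0 - total / capacity)
        sensitive *= growth                # both strains spread identically
        resistant *= growth                # (no fitness cost to resistance here)
    return sensitive, resistant

for coverage in (0.0, 0.3, 0.6):
    s, res = simulate(coverage)
    print(f"coverage {coverage:.0%}: burden {s + res:,.0f}, "
          f"resistant share {res / (s + res):.1%}")
```

With zero coverage the resistant strain stays a trace; with partial coverage the total burden barely moves while the surviving population becomes almost entirely resistant -- exactly the "pressure without curing" outcome described above. (The model deliberately ignores stochastic extinction, so it cannot show complete, consistent treatment eradicating the disease.)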

Comment author: maia 02 December 2013 06:39:17PM 2 points

This is my main problem with the idea that we should have a far-future focus. I have no idea how to get a grip on far-future predictions, so it seems absurdly unlikely that my predictions will be correct, and therefore absurdly unlikely that I (or most people) will be able to make a difference, except in a very few cases by pure luck.

Comment author: atucker 02 December 2013 07:44:26PM 3 points

It seems easier to evaluate "is trying to be relevant" than "has XYZ important long-term consequence". For instance, investing in asteroid detection may not be the most important long-term thing, but it's at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.

Even if third-world health is important to x-risk through secondary effects, it still seems that any effect it has on x-risk will necessarily be mediated through some object-level x-risk intervention. It doesn't matter what starts the chain of events that leads to decreased asteroid risk; it still has to go through some relatively small family of interventions that deal with the risk on an object level.

Insofar as current society isn't involved in object-level x-risk interventions, it seems weird to think that bringing third-world living standards closer to our own would lead to more involvement in x-risk intervention without object-level x-risk interventions first becoming more widely available.

(Not that I care particularly much about asteroids, but it's a particularly easy example to think about.)

Comment author: benkuhn 02 December 2013 03:28:48AM 1 point

That deflates that criticism. As for the object-level social dynamics problem, I think people will not actually care about those problems unless they are incentivized to, and it's not clear to me that that is possible.

Is epistemology the real failing here? This may just be the communism analogy talking, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to get things done. Do you have a good model of the incentive structure of EA?

I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue). Fundamentally we rely on people deciding to do EA on their own, and thus having at least some sort of motivation (or, like, coherent extrapolated motivation) to actually try. (Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.)

The problem is more that this motivation gets co-opted by social-reward-seeking systems and we aren't aware of that when it happens. One way to fix this is to fix incentives, it's true, but another way is to fix the underlying problem of responding to social incentives when you intended to actually implement utilitarianism. Since the reason EA started was to fix the latter problem (e.g. people responding to social incentives by donating to the Charity for Rare Diseases in Cute Puppies), I think that that route is likely to be a better solution, and involve fewer epicycles (of the form where we have to consciously fix incentives again whenever we discover other problems).

I'm also not entirely sure this makes sense, though, because as I mentioned, social dynamics isn't a comparative advantage of mine :P

(Responding to the meta-point separately because yay threading.)

Comment author: atucker 02 December 2013 06:22:04AM 2 points

Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.

Insofar as utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome that. As the community gets bigger, it becomes less weird and there is more positive support, so the social-feedback hit shrinks.

This is partially good, because it makes it easier to "get into" trying to implement utilitarianism, but it's also bad because it means that newer EAs need to care about utilitarianism relatively less.

Saying that incentives don't matter as long as you remove social-approval-seeking ignores the question of why the remaining incentives would push people toward actually trying.

It's also unclear what's left of the incentives holding the community together after you remove the social incentives. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive; otherwise there wouldn't be effectiveness problems to begin with. If it were, EAs would simply be incentivized to pursue utilitarian goals effectively.

Comment author: atucker 02 November 2013 02:25:41PM -1 points

My guess is just that the original reason was that societal hierarchies existed pretty much everywhere in the past, and rulers wanted some way for nobles/high-status people to join the army while being obviously distinguished from the general population, and to make it impossible for them to be demoted far enough to end up on the same level. Armies without the officer/non-officer distinction just didn't get buy-in from the ruling class, and so they didn't exist.

I think there's also a pretty large difference in training -- becoming an officer isn't just about skills in war, but also involves socialization into officer culture, through the different War Colleges and whatnot.

Comment author: RomeoStevens 27 September 2013 03:36:24AM 7 points

"everything is bad" is only a crappy thinking mode when unaccompanied by the obvious next step of "optimize all the things."

Comment author: atucker 29 September 2013 02:35:04PM -1 points

You would want your noticing that something is bad to indicate, in some way, how to make the thing better. You want to know what in particular is bad and can be fixed, rather than the less informative "everything". If your classifier triggers on everything, it tells you less on average about any given thing.
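
To put a number on that last point (an editorial illustration, not part of either commenter's argument): measured as mutual information, a detector that fires on everything carries zero bits about any particular thing, while a selective one can carry quite a lot:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution {(x, y): probability},
    where x is the classifier's verdict and y is whether the thing
    is actually bad."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Suppose 25% of things are genuinely bad (y=1).
always_fires = {(1, 1): 0.25, (1, 0): 0.75}  # says "bad" about everything
selective    = {(1, 1): 0.25, (0, 0): 0.75}  # says "bad" only when true

print(mutual_information(always_fires))  # 0.0 bits -- tells you nothing
print(mutual_information(selective))     # ~0.81 bits -- tells you a lot
```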

Comment author: atucker 08 September 2013 11:22:34PM 0 points

My personal experience (going to Harvard, talking to students and admissions counselors) suggests that at least one of the following is true:

  1. Teacher recommendations and the essays you submit to colleges are also important in admissions, and are the main channel for signaling human capital and personal development not particularly captured by grades.

  2. There are particular known-to-be-good schools from which colleges disproportionately admit students, and they admit those students for slightly different reasons than they admit students from other schools.

I basically completely ignored signaling while in high school: I often prioritized taking more interesting non-AP classes over AP classes, and focused on a couple of extracurricular involvements rather than diversifying and taking on many. My grades and standardized test scores also suffered as a result of my investment in my robotics team.

Comment author: Lukas_Gloor 28 July 2013 08:50:09PM 12 points

Your view seems consistent. All I can say is that I don't understand why intelligence is relevant for whether you care about suffering. (I'm assuming that you think human infants can suffer, or at least don't rule it out completely; otherwise we would only have an empirical disagreement.)

I would. Similarly, if I were going to undergo torture, I would be very glad if my capacity to form long-term memories were temporarily disabled.

Me too. But we can control for memories by comparing the scenario I outlined with a scenario where you are first tortured (in your normal mental state) and then have the memory erased.

Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally?

You're right, it's not a big deal once you point it out. The interesting thing is that even a lot of secular people will at first (and sometimes even afterwards) bring arguments against the view that animals matter that don't survive the argument from species overlap. It seems like they simply aren't thinking through all the implications of what they're saying, as if it isn't their true rejection. Having said that, there is always the option of biting the bullet, but many people who argue against caring about nonhumans don't actually want to do that.

Comment author: atucker 29 July 2013 03:41:50AM 3 points

All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.

Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.

As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge -- the difference in potential impact on the future between a suffering human vs a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.

Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who is too incapacitated to, say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front.

So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".

Comment author: Qiaochu_Yuan 14 June 2013 08:58:35PM 8 points

I asked this question before in the politics thread and didn't get any answers: what does political instrumental rationality look like? What kind of political actions is it feasible to take, and how do I evaluate which one to take in a given political situation? Most political discussion among LW types seems to be about political epistemic rationality (figuring out what political positions are more or less likely to have something to do with reality) but I see very little discussion of political instrumental rationality, so I have a very poor understanding of what it's possible to do politically, and consequently I try not to spend time thinking about politics because I don't expect those thoughts to ever translate into actions.

So, a meta-project: figure out what political actions are feasible to take, what kind of resources are necessary to take them, what kind of payoff can be expected from taking them, how much good effective political action does relative to effective altruism and x-risk reduction, etc.

Comment author: atucker 15 June 2013 10:10:49AM 5 points

Political instrumental rationality would be about figuring out and taking the political actions that cause particular goals to happen. Most of this turns out to be telling people compelling things that you know and they don't, and convincing different groups that their interests align (or can align around a particular interest) when it's not obvious that they do.

Political actions are based on appeals to identity, group membership, group boundaries, group interests, individual interests, and different political ideas, in order to get people to shift allegiances and take action toward a particular goal.

For any given individual, the relative importance of these factors will vary. For questions of identity and affiliation, people weigh the factors by how much meaning gets reinforced and by memory-related considerations (clear memories of meaningful experiences count, but so does not-particularly-meaningful everyday stuff). Whether they actually act depends on various psychological factors, as well as on options simply being available and salient while they have the opportunity to act in a way that reinforces their affiliations, sense of meaning, standing with others in the group, or personal interests.

As a result, political instrumental rationality is going to be incredibly contingent on local circumstances -- who talks to whom, who believes what how strongly, who's reliable, who controls what, who wants what, who hears about what, etc.

A more object-level example takes place in The Wire, when a pastor sets up various public-service programs in an area where drug dealing has effectively been legalized.

The pastor is able to appeal to his community on the basis of religious solidarity to raise money, so he can fund some things. For Christian reasons, he cares about public health and about the fate of the now-unemployed would-be drug runners, who are no longer necessary for drug dealing (since drugs are effectively legal, the gang members don't bother with the various steps that ensure none of them can be photographed handing someone drugs for money -- normally the dealer takes the money, then the runner (typically a child) goes to the stash and hands the buyer the drugs). Further, he knows people from various community/political events in Baltimore.

So far, so good. He controls some resources (money), has a goal (public health, child development), and knows some people.

One of the first people he talks to is a doctor who has been trying to do STD prevention for a while, but hasn't had the funding or organizational capacity to do much of anything. The pastor points out that a lot of at-risk people are now concentrated in one location, so the logistics of getting services to them are much simpler. In this case, the pastor simply had information (through his connections) that the doctor didn't, and got the doctor to cooperate by pointing out an opportunity to do something the doctor had already wanted to do.

He gets the support of the police district chief -- the one who decided to selectively enforce drug laws -- by appealing to the chief's desire to improve the district under his command (the chief was initially trying to shift drug trafficking away from more populated areas and to decrease violence by decreasing competition over territory), and it more or less worked.

That being said, I have more or less no idea what kinds of large-scale political action ought to be possible/is desirable.

I totally have the intuition, though, that step one of any plan is to become personally acquainted with people who have some sort of influence over the areas you're interested in, or to build influence by getting people who have some control over what you're interested in to pay more attention to you. It's almost the case that if you can't name names and can't point at the groups of people involved in the action, you can't do anything particularly useful politically.

Comment author: Epiphany 15 June 2013 01:49:02AM 0 points

I see. The existence of the specific example caused me to interpret your post as being about a specific method, not a general strategy.

To the strategy, I say:

I've heard that defense is more difficult than offense. If the strategy you have defined is basically:

Original drones are offensive and counter-drones are defensive (to prevent them from attacking, presumably).

Then, if what I heard was correct, this would fail -- if not at first, then likely over time as technology advances and new offensive strategies are used with the drones.

I'm not sure how to check whether what I heard was true, but if defense worked that well, we wouldn't have war.

Comment author: atucker 15 June 2013 08:51:25AM -1 points

The distinction here is just flying vs. not-flying, not offense vs. defense.

Offense has an advantage over defense in that defense needs to cover more possible offensive strategies than offense needs to be capable of executing, and offense only needs one undefended plan in order to succeed.

I suspect that not-flying is a pretty big advantage, even relative to the offense/defense asymmetry. At the very least, moving underground (and doing hydroponics or something for food) makes drones only about as offensively useful as missiles. A non-flying system can also devote more energy and matter to whatever it's doing than a flying one can, which allows for more exotic sensing and destructive capabilities.
