All of Utilitarian's Comments + Replies

Jonah, I agree with what you say at least in principle, even if you would claim I don't follow it in practice. A big advantage of being Bayesian is that you retain probability mass on all the options rather than picking just one. (I recall many times being dismayed with hacky approximations like MAP that let you get rid of the less likely options. Similarly when people conflate the Solomonoff probability of a bitstring with the shortest program that outputs it, even though I guess in that case, the shortest program necessarily has at least as much probabil... (read more)
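A minimal sketch of the relationship gestured at in that last sentence, in standard Solomonoff-induction notation (this formalization is an added illustration, not part of the original comment; U is a universal prefix machine and p* is the shortest program that outputs the bitstring x):

M(x) = \sum_{p \,:\, U(p)=x} 2^{-|p|} \;\ge\; 2^{-|p^*|}

So identifying the Solomonoff probability with the shortest program only gives a lower bound: the full prior also counts every longer program that outputs x.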

3JonahS
I raise (at least) two different related points in my post:

1. "When an argument seems very likely to be wrong but could be right with non-negligible probability, classify it as such, rather than classifying it as false." I think that you're pretty good on this point, and better than I had been.

2. The other is one that you didn't mention in your comment, and one that I believe you and I have both largely missed in the past. This is that one doesn't need a relatively strong argument to be confident in a subtle judgment call; all one needs is ~4-8 independent weak arguments. (Note that generating and keeping track of these isn't computationally infeasible.) This is a very crucial point, as it opens up the possibility of no longer needing to rely on single relatively strong arguments that aren't actually that strong.

I believe that the point in #2 is closely related to what people call "common sense" or "horse sense" or "physicists' intuition." In the past, I had thought that "common sense" meant, specifically, "don't deviate too much from conventional wisdom, because views that are far from mainstream are usually wrong." Now I realize that it refers to something quite a bit deeper, and not specifically about conventional wisdom. I'd suggest talking about these things with miruto. Our chauffeur from last weekend has recently been telling me that physicists generally use the "many weak arguments" approach. For example, the math used in quantum field theory remains without a rigorous foundation, and its discovery was analogous to Euler's heuristic reasoning concerning the product formula for the sine function. He also referred to scenarios in which (roughly speaking) you have a physical system with many undetermined parameters and you have ways of bounding different collections of them; by looking at all of the resulting bounds, you can bound the individual parameters tightly enough that the whole model is accurate.
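A toy illustration of how several independent weak arguments can combine into high confidence (the numbers below are illustrative, not JonahS's): model each weak argument as independent evidence carrying a modest 2:1 likelihood ratio, starting from even prior odds.

```python
# Toy Bayesian illustration of the "many weak arguments" point above.
# Each weak argument is modeled as independent evidence with a modest
# 2:1 likelihood ratio for a hypothesis we start out agnostic about.

def posterior_probability(prior_odds, likelihood_ratios):
    """Multiply prior odds by each independent likelihood ratio; return P(H)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

weak_arguments = [2.0] * 6  # six weak, independent 2:1 arguments
print(posterior_probability(1.0, weak_arguments))  # 64:1 odds, i.e. ~0.985
```

Of course, the hard part in practice is the independence assumption: correlated arguments multiply to much less than this.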

I used to eat a lot of chicken and eggs before I read Peter Singer. After that, I went cold turkey (pardon the expression).

Some really creative ideas, ChristianKl. :)

Even with what you describe, humans wouldn't become extinct, barring other outcomes like really bad nuclear war or whatever.

However, since the AI wouldn't be destroyed, it could bide its time. Maybe it could ally with some people and give them tech/power in exchange for carrying out its bidding. They could help build the robots, etc. that would be needed to actually wipe out humanity.

Obviously there's a lot of conjunction here. I'm not claiming this scenario specifically is likely. But it helps to stimulate the imagination to work out an existence proof for the extinction risk from AGI.

-2ChristianKl
Some AIs already do this today. They outsource work they can't do to Amazon's Mechanical Turk, where humans get paid to do tasks for the AI. Other humans take on jobs on Rent A Coder where they never see the human that's hiring them. Humans wouldn't go extinct in a short time frame, but if the AGI has decades of time, then it can increase its own power over time and decrease its dependence on humans. Sooner or later humans wouldn't be useful to the AGI anymore, and then they'd go extinct.

It's not at all clear that an AGI will be human-like, any more than humans are dog-like.

Ok, bad wording on my part. I meant "more generally intelligent."

How do you fight the AGI past that point?

I was imagining people would destroy their computers, except the ones not connected to the Internet. However, if the AGI is hiding itself, it could go a long way before people realized what was going on.

Interesting scenarios. Thanks!

2ChristianKl
Exactly. On the one hand, the AGI tries not to let humans get wind of its plans. On the other hand, it's going to produce distractions. You have to remember how delusional some folks are. Imagine trying to convince the North Koreans that they have to destroy their computers because those computers are infested with an evil AI. Even in the US, nearly half of the population still believes in creationism. How many of them can be convinced that the evil government is trying to take away their computers to establish a dictatorship? Before the government attempts to trash the computers, the AI sends an email to a conspiracy-theory website, where it starts revealing classified documents it acquired through hacking that show government misbehavior. Then it sends an email to the same group saying that the US government is going to shut down all civilian computers because freedom of speech is too dangerous to the US government, and that the US government will use the excuse that the computers are part of a Chinese botnet. In our time you need computers to stock supermarket shelves with goods. Container ships need GPS and sea charts to navigate. People start fighting each other. Some are likely to blame the people who wanted to trash the computers for the mess. Even if you can imagine shutting off all computers in 2013, in 2033 most cars will be computers in which the AI can rest. A lot of military firepower will be in drones that the AI can control.

As we begin seeing robots/computers that are more human-like, people will take the possibility of AGIs getting out of control more seriously. These things will be major news stories worldwide, people will hold national-security summits about them, etc. I would assume the US military is already looking into this topic at least a little bit behind closed doors.

There will probably be lots of not-quite-superhuman AIs / AGIs that cause havoc along the road to the first superhuman ones. Yes, it's possible that FOOM will take us from roughly a level like where we ... (read more)

0ikrase
Agreed: While I am doubtful about the 'incredibly low budget nano bootstrap', I would say that uncontained foomed AIs are very dangerous if they are interested in performing almost any action whatsoever.
2DaFranker
Then the AI does precisely nothing other than hide its presence and do the following: Send one email to a certain nano-something research scientist whom the AI has identified as "easy to bribe into building stuff he doesn't know about in exchange for money". The AI hacks some money (or maybe even earns it "legitimately"), sends it to the scientist, then tells the scientist to follow some specific set of instructions for building a specific nanorobot. The scientist builds the nanorobot. The nanorobot proceeds to slowly and invisibly multiply until it has reached 100% penetration of every single human-inhabited place on Earth. Then it synchronously begins a grey goo event where every human is turned into piles of carbon and miscellaneous waste, and every other thing required for humans (or other animals) to survive on Earth is transformed into more appropriate raw materials for the AI to use next. And I'm only scratching the surface of a limited sample of some of the most obvious ways an AI could cause an extinction event from the comfort of only a few university networks, let alone every single computer connected to the Internet.
2ChristianKl
It's not at all clear that an AGI will be human-like, any more than humans are dog-like. How do you fight the AGI past that point? It controls total global communication flow. It can play different humans off against each other till it effectively rules the world. After it has total political control, it can move more and more resources to itself. That's not even needed. You just need to set up a bunch of convincing false-flag attacks that implicate Pakistan in attacking India. A clever AI might provoke such conflicts to distract humans from fighting it. Don't underrate how well a smart AI can fight conflicts. Having no akrasia, no need for sleep, the ability to self-replicate your mind, and the ability to plan very complex conflicts rationally are all valuable for fighting conflicts. For the AGI it's even enough to get political control over a few countries: while the other countries have their economies collapse due to lack of computers, the AGI could help the countries it controls overpower the others over the long run.

This is a good point. :) I added an additional objection to the piece.

As an empirical matter, extinction risk isn't being funded as much as you suggest it should be if almost everyone had some incentive to invest in the issue.

There's a lot of "extinction risk" work that's not necessarily labeled as such: biosecurity, nuclear non-proliferation, general efforts to prevent international hostility among nation-states, general efforts to reduce violence in society and alleviate mental illness, etc. We don't necessarily see huge investments in AI sa

... (read more)
2ChristianKl
Once we see an out-of-control AI, it's too late to do AI safety. Given current computer security, the AI could hack itself into every computer in the world and resist easy shutdown. When it comes to low-probability, high-impact events, waiting for a small problem to raise awareness of the issue is just dangerous.

Thanks, Luke. See also this follow-up discussion to Ord's essay.

As you suggest with your "some" qualifier, my essay that benthamite shared doesn't make any assumptions about negative utilitarianism. I merely inserted parentheticals about my own views into it to avoid giving the impression that I'm personally a positive-leaning utilitarian.

1lukeprog
Nice discussion there! Thanks for the link.

Thanks, Jabberslythe! You got it mostly correct. :)

The one thing I would add is that I personally think people don't usually take suffering seriously enough -- at least not really severe suffering like torture or being eaten alive. Indeed, many people may never have experienced something that bad. So I put high importance on preventing experiences like these relative to other things.

Interesting story. Yes, I think our intuitions about what kinds of computations we want to care about are easily bent and twisted depending on the situation at hand. In analogy with Dennett's "intentional stance," humans have a "compassionate stance" that we apply to some physical operations and don't apply to others. It's not too hard to manipulate these intuitions by thought experiments. So, yes, I do fear that other people may differ (perhaps quite a bit) in their views about what kinds of computations are suffering that we should avoid.

I bet there are a lot more people who care about animals' feelings, and who care a lot more, than there are people who care about the aesthetics of brutality in nature.

Well, at the moment, there are hundreds of environmental-preservation organizations and basically no organizations dedicated to reducing wild-animal suffering. Environmentalism as a cause is much more mainstream than animal welfare. Just like the chickens that go into people's nuggets, animals suffering in nature "are out of sight, and the connection between [preserving pristine habitats] and an... (read more)

This is what you value, what you chose.

Yes. We want utilitarianism. You want CEV. It's not clear where to go from there.

Not the hamster's one.

FWIW, hamsters probably exhibit fairness sensibility too. At least rats do.

Do you think the typical person advocating ecological balance has evaluated how the tradeoffs would change given future technology?

Good point. Probably not, and for some, their views would change with new technological options. For others (environmentalist types especially), they would probably retain their old views.

That said, the future-technology sword cuts both ways: Because most people aren't considering post-human tech, they're not thinking of (what some see as) the potential astronomical benefits from human survival. If 10^10 humans were only goi... (read more)

Thanks, JGWeissman. There are certainly some deep ecologists, like presumably Hettinger himself, who have thought long and hard about the scale of wild-animal suffering and still support preservation of ecology as is. When I talk with ecologists or environmentalists, almost always their reply is something like, "Yes, there's a lot of suffering, but it's okay because it's natural for them." One example:

As I sit here, thinking about the landscape of fear, I watch a small bird at my bird feeder. It spends more time looking around than it does eati

... (read more)
2JGWeissman
The argument seems to be less that the suffering is OK because it is natural than that any intervention we could make to remove it would cause nature to stop working: removing predator species results in more herbivores, which leads to vegetation being overconsumed, which leads to ecological collapse. I am sympathetic to this argument. On a large enough scale, this means no breathable atmosphere. So while I think that wild-animal suffering is a bad thing, I will accept it for now as a cost of supporting human life. (Maybe you could remove all animals not actually symbiotic with plants, but this seems like a hell of a gamble; we would likely regret the unintended consequences, and it could be difficult to undo.) Once humanity can upload and live in simulations, we have more options.

Do you think the typical person advocating ecological balance has evaluated how the tradeoffs would change given future technology?

CEV is supposed to figure out what people would want if they were more rational. If rationalists tend to discard that intuition, it is not likely to have a strong effect on CEV. (Though if people without such strong intuitions are likely to become more rational, this would not be strong evidence. It may be useful to try raising the sanity waterline among people who demonstrate the intuition and see what happens.) I am completely against giving up the awesomeness of a good singularity because it is not obvious that the resulting society won't devote some tiny fraction of its computing power to simulations in which animals happen to suffer. The suffering is bad, but there are other values to consider here, which the scenario includes in far greater quantities.

Thanks, Benito. Do we know that we shouldn't have a lot of chicken feed? My point in asking this is just that we're baking in a lot of the answer by choosing which minds we extrapolate in the first place. Now, I have no problem baking in answers -- I want to bake in my answers -- but I'm just highlighting that it's not obvious that the set of human minds is the right one to extrapolate.

BTW, I think the "brain reward pathways" between humans and chickens aren't that different. Maybe you were thinking about the particular, concrete stimuli that are found to be rewarding rather than the general architecture.

Why not include primates, dolphins, rats, chickens, etc. into the ethics?

-4see
What would that mean? How would the chicken learn or follow the ethics? Does it seem even remotely reasonable that social behavior among chickens and social behavior among humans should follow the same rules, given the inherent evolutionary differences in social structure and brain reward pathways? It might be that CEV is impossible for humans, but there's at least enough basic commonality to give it a chance of being possible.

Future humans may not care enough about animal suffering relative to other things, or may not regard suffering as being as bad as I do. As noted in the post, there are people who want to spread biological life as much as possible throughout the galaxy. Deep ecologists may actively want to preserve wild-animal suffering (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.") Future humans might run ancestor sims that happen to include ... (read more)

3JGWeissman
How many people? How much of this is based on confusion and not actually confronting the scale of suffering involved? (Note that CEV is supposed to account for this, giving us not what we say we want, but what we would want if we were smarter and knew more.) I am not convinced that insects are sentient (though an FAI that understands how sentience works could tell me I'm wrong and I'd believe it). If insects do turn out to be sentient, it would not be hard (and would actually take fewer computational resources) to replace the insect's sentience with an abstract model of its behavior. Sure, if we are stupid about it, but we are already working on how not to be stupid about it. And seriously, a successful singularity should give us far more interesting things to do than running such simulations (or eating hamburgers).

Preventing suffering is what I care about, and I'm going to try to convince other people to care about it. One way to do that is to invent plausible thought experiments / intuition pumps for why it matters so much. If I do, that might help with evangelism, but it's not the (original) reason why I care about it. I care about it because of experience with suffering in my own life, feeling strong empathy when seeing it in others, and feeling that preventing suffering is overridingly important due to various other factors in my development.

2vallinder
Thanks, Brian. I know this is your position; I'm wondering if it's benthamite's as well.

My understanding is that CEA exists in order to simplify the paperwork of multiple projects. For example, Effective Animal Activism is not its own charity; instead, you donate to CEA and transfer the money to EAA. As bryjnar said, there's not really any overhead in doing this. Using CEA as an umbrella is much simpler than trying to get 501(c)(3) status for EAA on its own, which would be a painstaking process.

0JaySwartz
I am disappointed that my realistic and fact-based observation generated a downvote. At the risk of an additional downvote, but in the interest of transparent, honest exchange, I am pointing out a verifiable fact, however unsavory it may seem. If, over time, the ongoing cost of intermediaries (additional handling and overhead costs) remains below the cost of the steps needed to eliminate intermediaries (the investment required to establish a 501(c)(3)), then I stand corrected. While an improbable situation, it could well be possible.

I appreciate personal anecdote. Sometimes I think anecdotes are the most valuable parts of an essay. It all depends on the style and the preferences of the audience. I don't criticize HPMOR on the grounds that it focuses too much on Harry and not enough on rationality concepts...

4Zando
I have asked him to "move beyond", not "eliminate". Personal anecdote obviously has its place, but it doesn't dominate on LessWrong, nor should it. As for HPMOR: different form, different purpose. (Though I do occasionally yearn for a bit more conceptualizing there too, but that's just personal preference and not grounds for criticism.) Frank genuinely seems to want, and need, to improve his posts: my comments are blunt but not unfair.

Three friends independently pointed me to Overcoming Bias in fall/winter 2006.

Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".

Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."

3ata
Agreed; I've had similar thoughts. Given recent popular coverage of the various things called "the Singularity", I think we need to accept that it's pretty much going to become a connotational dumping ground for every cool-sounding futuristic prediction that anyone can think of, centered primarily around Kurzweil's predictions.

I disagree somewhat there. Its ultimate goal is still to create a Friendly AI, and all of its other activities (general existential risk reduction and forecasting, Less Wrong, the Singularity Summit, etc.) are, at least in principle, being carried out in service of that goal. Its day-to-day activities may not look like what people might imagine when they think of an AI research institute, but that's because FAI is a very difficult problem with many prerequisites that have to be solved first, and I think it's fair to describe SIAI as still being fundamentally about FAI (at least to anyone who's adequately prepared to think about FAI).

Describing it as "a philosophy institute researching hugely important fundamental questions" may give people the wrong impressions, if it's not quickly followed by more specific explanation. When people think of "philosophy" + "hugely important fundamental questions", their minds will probably leap to questions which are 1) easily solved by rationalists, and/or 2) actually fairly silly and not hugely important at all. ("Philosophy" is another term I'm inclined toward avoiding these days.)

When I've had to describe SIAI in one phrase to people who have never heard of it, I've been calling it an "artificial intelligence think-tank". Meanwhile, Michael Vassar's Twitter describes SIAI as a "decision theory think-tank". That's probably a good description if you want to address the current focus of their research; it may be especially good in academic contexts, where "decision theory" already refers to an interesting established field that's relevant to AI but doesn't share with "artificial intelligence" the connotat

I like the way you phrased your concern for "subjective experience" -- those are the types of characteristics I care about as well.

But I'm curious: What does ability to learn simple grammar have to do with subjective experience?

the lives of the cockroaches are irrelevant

I'm not so sure. I'm no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.

If only for the cheap signaling value.

My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering -- habits that can grow into more efficient strategies later on. One could call this "signaling to oneself," I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)

I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.

Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it buil... (read more)

-1Blueberry
Maybe so, but the question is why we should care. If only for the cheap signaling value.

Sure. Then what I meant was that I'm an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don't think utilitarianism is "true" (I don't know what that could possibly mean), but I want to see it carried out.

Indeed. While still a bit muddled on the matter, I lean toward hedonistic utilitarianism, at least in the sense that the only preferences I care about are preferences regarding one's own emotions, rather than arbitrary external events.

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, "Bambi Lovers versus Tree ... (read more)

0thomblake
That is inconsistent. Utilitarianism has to assume there's a fact about the good; otherwise, what are you maximizing? Emotivism insists that there is not a fact about the good. For example, for an emotivist, "You should not have stolen the bread." expresses the exact same factual content as "You stole the bread." (On this view, presumably, indicating "mere disapproval" doesn't count as factual information).
0PeerInfinity
checking out the Wikipedia article... hmm... I think I agree with emotivism too, to some degree. I already have a habit of saying "but that's just my opinion", and being uncertain enough about the validity (validity according to what?) of my preferences, to not dare to enforce them if other people disagree. And emotivism seems like a formalization of the "but that's just my opinion". That could be useful. good point. and yeah, that's one of the main issues that's causing me to doubt whether SIAI has any hope of achieving their mission. good point. Have you had any contact with Metafire yet? He strongly agrees with you on this. Just recently he started posting to LW. oh, and "quixotic", that's the word I was looking for, thanks :) heh, yeah, that "significantly less than 50%" was actually meant as an extremely sarcastic understatement. I need to learn how to express stuff like this more clearly. good point! This suggests the possibility of requiring people to go through regular mental health checkups after the Singularity. Preferably as unobtrusively as possible. Giving them a chance to release themselves from any restrictions they tried to place on their future selves. Though the question of what qualifies as "mentally healthy" is... complex and controversial.

Bostrom's estimate in "Astronomical Waste" is "10^38 human lives [...] lost every century that colonization of our local supercluster is delayed," given various assumptions. Of course, there's reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.

Still, I'm concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans... (read more)

3Nick_Tarleton
Who are you thinking of? (Eliezer is frequently accused of this, but has disclaimed it. Note the distinction between total convergence, and sufficient coherence for an FAI to act on.)
3PeerInfinity
(edit: The version of utilitarianism I'm talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don't bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)

I totally agree!!!

Astronomical waste is bad! (or at least, severely suboptimal)

Wild-animal suffering is bad! (no, there is nothing "sacred" or "beautiful" about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)

Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, "This way is more fun", or "This way would generate a wider variety of possible outcomes" are not acceptable answers, at least not according to utilitarianism.)

Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

I also agree with your concerns about CEV. Though of course we're talking about all this as if there is some objective validity to Utilitarianism, and as Eliezer explained: (warning! the following sentence is almost certainly a misinterpretation!) You can't explain Utilitarianism to a rock, therefore Utilitarianism is not objectively valid. Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it's a fact about the universe, because these beliefs were generated by a proc

PeerInfinity, I'm rather struck by a number of similarities between us:

  • I, too, am a programmer making money and trying to live frugally in order to donate to high-expected-value projects, currently SIAI.
  • I share your skepticism about the cause and am not uncomfortable with your 1% probability of positive Singularity. I agree SIAI is a good option from an expected-value perspective even if the mainline-probability scenario is that these concerns won't materialize.
  • As you might guess from my user name, I'm also a Utilitronium-supporting hedonistic utilitar
... (read more)
3PeerInfinity
Hi Utilitarian! um... are you the same guy who wrote those essays at utilitarian-essays.com? If you are, we have already talked about these topics before. I'm the same Peer Infinity who wrote that "interesting contribution" on Singularitarianism in that essay about Pascal's Wager, the one that tried to compare the different religions to examine which of them would be the best to wager on.

And, um... I used to have some really nasty nightmares about going to the Christian hell. But then, surprisingly, these nightmares somehow got replaced with nightmares of a hell caused by an Evil AI. And then these nightmares somehow got replaced with nightmares about the other hells that modal realism says must already exist in other universes.

I totally agree with you that the suffering of humans is massively outweighed by the suffering of other animals, and possibly insects, by a few orders of magnitude; I forget how many exactly, but I think it was less than 10 orders of magnitude. But I also believe that the amount of positive utility that could be achieved through a positive Singularity is... I think it was about 35 orders of magnitude more than all of the positive or negative utility that has been experienced so far in the entire history of Earth. But I don't remember the details of the math. For a few years now I was planning to write about that, but somehow never got around to it. Well, actually, I did make one feeble attempt to do the math, but that post didn't actually make any attempt to estimate how many orders of magnitude were involved.

Oh, and I totally share your concerns about the possible implications of CEV. Specifically, that it might end up generating so much negative utility that it outweighs the positive utility, which would mean that a universe completely empty of life would be preferable. Oh, and I know one other person who shares your belief that promoting good memes like concern about wild animals would be more cost-effective than donating to Friendl

the largest impact you can make would be to simply become a vegetarian yourself.

You can also make a big impact by donating to animal-welfare causes like Vegan Outreach. In fact, if you think the numbers in this piece are within an order of magnitude of correct, then you could prevent the 3 or 4 life-years of animal suffering that your meat-eating would cause this year by donating at most $15 to Vegan Outreach. For many people, it's probably a lot easier to offset their personal contribution to animal suffering by donating than by going vegetarian.

Of cou... (read more)

3Larks
I wish they would make editions available without the horrible pictures; I'm already aware conditions are bad, and I neither want the pictures to hijack my decision making process while reading, nor to experience the neg-utils from seeing them.

Actually, you're right -- thanks for the correction! Indeed, in general, I want altruistic equal consideration of the pleasure and pain of all sentient organisms, but this need have little connection with what I like.

As it so happens, I do often feel pleasure in taking utilitarian actions, but from a utilitarian perspective, whether that's the case is basically trivial. A miserable hard-core utilitarian would be much better for the suffering masses than a more happy only-sometimes-utilitarian (like myself).

0Pablo
Thanks for the clarification. :-)

I am the kind of donor who is much more motivated to give by seeing what specific projects are on offer. The reason boils down to the fact that I have slightly different values (namely, hedonistic utilitarianism focused on suffering) than the average of the SIAI decision-makers and so want to impose those values as much as I can.

Great post! I completely agree with the criticism of revealed preferences in economics.

As a hedonistic utilitarian, I can't quite understand why we would favor anything other than the "liking" response. Converting the universe to utilitronium producing real pleasure is my preferred outcome. (And fortunately, there's enough of a connection between my "wanting" and "liking" systems that I want this to happen!)

1Pablo
I agree that this is a great post. (I'm sorry I didn't make that clear in my previous comment.) I can't quite understand your parenthetical remark. I thought your position was that you wanted, rather than liked, experiences of liking to be maximized. Since you can want this regardless of whether you like it, I don't see why the connection you note between your 'wanting' and 'liking' systems is actually relevant.

Agreed. And I think it's important to consider just how small 1% really is. I doubt the fuzzies associated with using the credit card would actually be as small as 1% of the fuzzies associated with a 100% donation -- fuzzies just don't have high enough resolution. So I would fear, a la scope insensitivity, people getting more fuzzies from the credit card than are actually deserved from the donation. If that's necessary in order for the fuzzies to exceed a threshold for carrying out the donation, so be it; but usually the problem is in the other direction: People get too many fuzzies from doing too little and so end up not doing enough.

What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the... (read more)

6timtyler
That's 14 questions! ;-)

I like all of the responses to the value-of-nature arguments you give in your second paragraph. However, as a hedonistic utilitarian, I would disagree with your claim that nature has value apart from its value to organisms with experiences. And I think we have an obligation to change nature in order to avert the massive amounts of wild-animal suffering that it contains, even if doing so would render it "unnatural" in some ways.

The 12-billion-utils example is similar to one I mention on this page under "What about Isolated Actions?" I agree that our decision here is ultimately arbitrary and up to us. But I also agree with the comments by others that this choice can be built into the standard expected-utility framework by changing the utilities. That is, unless your complaint is, as Nick suggests, with the independence axiom's constraint on rational preference orderings in and of itself (for instance, if you agreed -- as I don't -- that the popular choices in the Allais paradox should count as "rational").

0Stuart_Armstrong
No, I don't agree that the Allais paradox should count as rational - but I don't need to use the independence axiom to get to this. I'll re-explain in a subsequent post.

Indeed. Gaverick Matheny and Kai M. A. Chan have formalized that point in an excellent paper, "The Illogic of the Larder."

For example, if you claim to prefer the non-existence of animals to their being used as food, then you clearly must support the destruction of all nature reserves, as that's exactly the same choice. And if you're against animal suffering, you'd be totally happy to eat cows genetically modified not to have pain receptors. And so on. All positions never taken by any vegetarians.

I think most animal-welfare researchers would agree that animals on the nature reserve suffer less than those in factory farms, where conditions run contrary to the animals' evolved insti... (read more)

0[anonymous]
If animals in nature lead lives that are worth not living, is there a case here for somehow making sure that if humanity goes extinct, the rest of the biosphere goes with it (think a doomsday device with a ten-thousand-year timer, or some other more serious method)? Also depends on whether we'd expect any intelligent species arising after humanity to evolve into a better or worse than average (or than zero) civilization, I guess.

I like Peter Singer's "drowning child" argument in "Famine, Affluence, and Morality" as a way to illustrate the imperative to donate and, by implication, the value of money. As he says, "we ought to give the money [spent on fancy clothes] away, and it is wrong not to do so."

I do think there's a danger, though, in focusing on the wrongness of frivolous spending, which is relatively easy to criticize. It's harder to make people think about the wrongness of failing to make money that they could have donated. Opportunity costs are always harder to feel viscerally.

8JohnH
The evidence is that giving money in the form of aid to countries is almost useless in helping the average person in that country become better off. However, getting rid of trade restrictions and showing the people in a country what things they can produce has been very effective in lifting people and countries out of poverty. This means that buying fancy chocolate, for instance, will likely have a greater effect on helping poor people in Africa than donating an equal amount of money in aid.

To continue this example, people in Africa are unable to vote in the US (or Europe), the two places with high tariffs that, if reduced, would greatly change the welfare of most of the developing world. In the US, HFCS is used instead of sugar due to high sugar tariffs, ethanol is made from corn instead of being imported from Brazil, and tropical fruits are restricted to favored trading partners. This is highly beneficial if one grows sugar in Florida and moderately beneficial if one grows corn, but poverty-inducing if one lives in a country whose main export is sugar. Therefore, if we really did care about the welfare of people in the rest of the world, we should be donating to a PAC whose purpose is repealing such tariffs, as this would be the most cost-effective way of reducing worldwide poverty.

Instead we donate shoes and clothes that drive the local textile industries out of business, we donate money for food that creates artificial famines as local farmers have no reason to plant crops, and we condition a lot of donations on things such as "green" technology that is more expensive and less useful for development than the alternative (as well as non-producible in Africa, meaning most of the money gets sent back to the Western countries that are "helping"). Then we wonder why one of the prevailing views in Africa is that the West wants Africa to remain poor.

I intuitively sympathize with the complaints of status-quo bias, though it's of course also true that more changes from evolution's current local optimum entail more risk.

Here is another interesting reference on one form of congenital lack of pain.

it seems, in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to "ensure they get born".

That's an interesting intuition, but one that I don't share. I concur with Steven and Vladimir. The whole point of the classical-utilitarian "Each to count for one and none for more than one" principle is that the identity of the collection of atoms experiencing an emotion is irrelevant. What matters is increasing the num... (read more)

As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another. [...] I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects.

It seems a strong claim to suggest that the limits you impose on yourself due to epistemological deficiency line up exactly with the mores and laws imposed by society. Are there some conventional ends-don't-justify-means notions that you would violate, or non-socially-taboo situations in which you would restrain yourself?

Also, what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?

2thrawnca
If 3^^^^3 lives are at stake, and we assume that we are running on faulty or even hostile hardware, then it becomes all the more important not to rely on potentially-corrupted "seems like this will work".

The future--what will happen--is necessarily "fixed". To say that it isn't implies that what will happen may not happen, which is logically impossible.

Pablo, I think the debate is over whether there is such a thing as "what will happen"; maybe that question doesn't yet have an answer. In fact, I think any good definition of libertarian free will would require that it not have an answer yet.

So, can someone please explain just exactly what "free will" is such that the question of whether I have it or not has meaning?

As I see it,... (read more)

Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).

Even if our priors are symmetric for equally complex religious hypotheses, our posteriors almost certainly won't be. There's too much evidence in the world, and too many strong claims about these matters, for me to imagine that posteriors would come out even. Besides, even if two religions are equally probable, there may certainly be non-epistemic reasons to prefer one over the other.

However, if after chugging through the math, it didn't balance out and still t... (read more)