Some really creative ideas, ChristianKl. :)
Even with what you describe, humans wouldn't become extinct, barring other outcomes like really bad nuclear war or whatever.
However, since the AI wouldn't be destroyed, it could bide its time. Maybe it could ally with some people and give them tech/power in exchange for carrying out its bidding. They could help build the robots, etc. that would be needed to actually wipe out humanity.
Obviously there's a lot of conjunction here. I'm not claiming this scenario specifically is likely. But it helps to stimulate the imagination to work out an existence proof for the extinction risk from AGI.
It's not at all clear that an AGI will be human-like, any more than humans are dog-like.
Ok, bad wording on my part. I meant "more generally intelligent."
How do you fight the AGI past that point?
I was imagining people would destroy their computers, except the ones not connected to the Internet. However, if the AGI is hiding itself, it could go a long way before people realized what was going on.
Interesting scenarios. Thanks!
As we begin seeing robots/computers that are more human-like, people will take the possibility of AGIs getting out of control more seriously. These things will be major news stories worldwide, people will hold national-security summits about them, etc. I would assume the US military is already looking into this topic at least a little bit behind closed doors.
There will probably be lots of not-quite-superhuman AIs / AGIs that cause havoc along the road to the first superhuman ones. Yes, it's possible that FOOM will take us from roughly a level like where we ...
This is a good point. :) I added an additional objection to the piece.
...As an empirical matter, extinction risk isn't being funded as much as you suggest it should be if almost everyone has some incentives to invest in the issue.
There's a lot of "extinction risk" work that's not necessarily labeled as such: Biosecurity, anti-nuclear proliferation, general efforts to prevent international hostility by nation states, general efforts to reduce violence in society and alleviate mental illnesses, etc. We don't necessarily see huge investments in AI safety...
Thanks, Luke. See also this follow-up discussion to Ord's essay.
As you suggest with your "some" qualifier, my essay that benthamite shared doesn't make any assumptions about negative utilitarianism. I merely inserted parentheticals about my own views into it to avoid giving the impression that I'm personally a positive-leaning utilitarian.
Thanks, Jabberslythe! You got it mostly correct. :)
The one thing I would add is that I personally think people don't usually take suffering seriously enough -- at least not really severe suffering like torture or being eaten alive. Indeed, many people may never have experienced something that bad. So I put high importance on preventing experiences like these relative to other things.
Interesting story. Yes, I think our intuitions about what kinds of computations we want to care about are easily bent and twisted depending on the situation at hand. In analogy with Dennett's "intentional stance," humans have a "compassionate stance" that we apply to some physical operations and don't apply to others. It's not too hard to manipulate these intuitions by thought experiments. So, yes, I do fear that other people may differ (perhaps quite a bit) in their views about what kinds of computations are suffering that we should avoid.
I bet there are a lot more people who care about animals' feelings and who care a lot more, than those who care about the aesthetics of brutality in nature.
Well, at the moment, there are hundreds of environmental-preservation organizations and basically no organizations dedicated to reducing wild-animal suffering. Environmentalism as a cause is much more mainstream than animal welfare. Just like the chickens that go into people's nuggets, animals suffering in nature "are out of sight, and the connection between [preserving pristine habitats] and an...
This is what you value, what you chose.
Yes. We want utilitarianism. You want CEV. It's not clear where to go from there.
Not the hamster's one.
FWIW, hamsters probably exhibit fairness sensibility too. At least rats do.
Do you think the typical person advocating ecological balance has evaluated how the tradeoffs would change given future technology?
Good point. Probably not, and for some, their views would change with new technological options. For others (environmentalist types especially), they would probably retain their old views.
That said, the future-technology sword cuts both ways: Because most people aren't considering post-human tech, they're not thinking of (what some see as) the potential astronomical benefits from human survival. If 10^10 humans were only goi...
Thanks, JGWeissman. There are certainly some deep ecologists, like presumably Hettinger himself, who have thought long and hard about the scale of wild-animal suffering and still support preservation of ecology as is. When I talk with ecologists or environmentalists, almost always their reply is something like, "Yes, there's a lot of suffering, but it's okay because it's natural for them." One example:
...As I sit here, thinking about the landscape of fear, I watch a small bird at my bird feeder. It spends more time looking around than it does eati
Thanks, Benito. Do we know that we shouldn't have a lot of chicken feed? My point in asking this is just that we're baking in a lot of the answer by choosing which minds we extrapolate in the first place. Now, I have no problem baking in answers -- I want to bake in my answers -- but I'm just highlighting that it's not obvious that the set of human minds is the right one to extrapolate.
BTW, I think the "brain reward pathways" between humans and chickens aren't that different. Maybe you were thinking about the particular, concrete stimuli that are found to be rewarding rather than the general architecture.
Future humans may not care enough about animal suffering relative to other things, or may not regard suffering as being as bad as I do. As noted in the post, there are people who want to spread biological life as much as possible throughout the galaxy. Deep ecologists may actively want to preserve wild-animal suffering (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support."). Future humans might run ancestor sims that happen to include ...
Preventing suffering is what I care about, and I'm going to try to convince other people to care about it. One way to do that is to invent plausible thought experiments / intuition pumps for why it matters so much. If I do, that might help with evangelism, but it's not the (original) reason why I care about it. I care about it because of experience with suffering in my own life, feeling strong empathy when seeing it in others, and feeling that preventing suffering is overridingly important due to various other factors in my development.
My understanding is that CEA exists in order to simplify the paperwork of multiple projects. For example, Effective Animal Activism is not its own charity; instead, you donate to CEA, which transfers the money to EAA. As bryjnar said, there's not really any overhead in doing this. Using CEA as an umbrella is much simpler than trying to get 501(c)(3) status for EAA on its own, which would be a painstaking process.
Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".
Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."
the lives of the cockroaches are irrelevant
I'm not so sure. I'm no expert on the subject, but I suspect cockroaches may have moderately rich emotional lives.
If only for the cheap signaling value.
My point was that the action may have psychological value for oneself, as a way of getting in the habit of taking concrete steps to reduce suffering -- habits that can grow into more efficient strategies later on. One could call this "signaling to oneself," I suppose, but my point was that it might have value in the absence of being seen by others. (This is over and above the value to the worm itself, which is surely not unimportant.)
I'm surprised by Eliezer's stance. At the very least, it seems the pain endured by the frogs is terrible, no? For just one reference on the subject, see, e.g., KL Machin, "Amphibian pain and analgesia," Journal of Zoo and Wildlife Medicine, 1999.
Rain, your dilemma reminds me of my own struggles regarding saving worms in the rain. While stepping on individual worms to put them out of their misery is arguably not the most efficient means to prevent worm suffering, as a practical matter, I think it's probably an activity worth doing, because it buil...
Sure. Then what I meant was that I'm an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don't think utilitarianism is "true" (I don't know what that could possibly mean), but I want to see it carried out.
Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, "Bambi Lovers versus Tree ...
Bostrom's estimate in "Astronomical Waste" is "10^38 human lives [...] lost every century that colonization of our local supercluster is delayed," given various assumptions. Of course, there's reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.
Still, I'm concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans...
PeerInfinity, I'm rather struck by a number of similarities between us:
the largest impact you can make would be to simply become a vegetarian yourself.
You can also make a big impact by donating to animal-welfare causes like Vegan Outreach. In fact, if you think the numbers in this piece are within an order of magnitude of correct, then you could prevent the 3 or 4 life-years of animal suffering that your meat-eating would cause this year by donating at most $15 to Vegan Outreach. For many people, it's probably a lot easier to offset their personal contribution to animal suffering by donating than by going vegetarian.
Of cou...
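To make the offset arithmetic above explicit, here is a minimal sketch in Python. The effectiveness figure is an assumption chosen to reproduce the "at most $15" claim, not an independently measured number:

```python
# Back-of-the-envelope offset calculation (illustrative numbers only).
# Assumptions: one year of meat-eating causes ~3.5 life-years of animal
# suffering, and the donation prevents ~0.25 life-years of suffering per
# dollar (i.e., roughly $4 per life-year prevented).
suffering_per_year = 3.5                 # life-years of suffering from a year of meat-eating
life_years_prevented_per_dollar = 0.25   # assumed charity effectiveness

offset_cost = suffering_per_year / life_years_prevented_per_dollar
print(f"Approximate cost to offset one year of meat-eating: ${offset_cost:.2f}")
# -> about $14, consistent with the "at most $15" figure above
```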
Actually, you're right -- thanks for the correction! Indeed, in general, I want altruistic equal consideration of the pleasure and pain of all sentient organisms, but this need have little connection with what I like.
As it so happens, I do often feel pleasure in taking utilitarian actions, but from a utilitarian perspective, whether that's the case is basically trivial. A miserable hard-core utilitarian would be much better for the suffering masses than a more happy only-sometimes-utilitarian (like myself).
I am the kind of donor who is much more motivated to give by seeing what specific projects are on offer. The reason boils down to the fact that I have slightly different values (namely, hedonistic utilitarianism focused on suffering) than the average of the SIAI decision-makers and so want to impose those values as much as I can.
Great post! I completely agree with the criticism of revealed preferences in economics.
As a hedonistic utilitarian, I can't quite understand why we would favor anything other than the "liking" response. Converting the universe to utilitronium producing real pleasure is my preferred outcome. (And fortunately, there's enough of a connection between my "wanting" and "liking" systems that I want this to happen!)
Agreed. And I think it's important to consider just how small 1% really is. I doubt the fuzzies associated with using the credit card would actually be as small as 1% of the fuzzies associated with a 100% donation -- fuzzies just don't have high enough resolution. So I would fear, a la scope insensitivity, people getting more fuzzies from the credit card than are actually deserved from the donation. If that's necessary in order for the fuzzies to exceed a threshold for carrying out the donation, so be it; but usually the problem is in the other direction: People get too many fuzzies from doing too little and so end up not doing enough.
What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the...
I like all of the responses to the value-of-nature arguments you give in your second paragraph. However, as a hedonistic utilitarian, I would disagree with your claim that nature has value apart from its value to organisms with experiences. And I think we have an obligation to change nature in order to avert the massive amounts of wild-animal suffering that it contains, even if doing so would render it "unnatural" in some ways.
The 12-billion-utils example is similar to one I mention on this page under "What about Isolated Actions?" I agree that our decision here is ultimately arbitrary and up to us. But I also agree with the comments by others that this choice can be built into the standard expected-utility framework by changing the utilities. That is, unless your complaint is, as Nick suggests, with the independence axiom's constraint on rational preference orderings in and of itself (for instance, if you agreed -- as I don't -- that the popular choices in the Allais paradox should count as "rational").
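For concreteness, here is the textbook version of the Allais paradox (standard payoffs, not taken from the discussion above). Gamble 1A pays $1M for sure; 1B pays $5M with probability 0.10, $1M with 0.89, and nothing with 0.01. Gamble 2A pays $1M with probability 0.11; 2B pays $5M with probability 0.10. Under expected utility, the two choices stand or fall together:

$$
\begin{aligned}
\text{1A} \succ \text{1B} &\iff u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.89\,u(1\mathrm{M}) + 0.01\,u(0) \\
&\iff 0.11\,u(1\mathrm{M}) + 0.89\,u(0) > 0.10\,u(5\mathrm{M}) + 0.90\,u(0) \\
&\iff \text{2A} \succ \text{2B},
\end{aligned}
$$

so the popular pattern of choosing 1A and 2B cannot be represented by any single utility function, which is exactly the independence-axiom violation at issue.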
Indeed. Gaverick Matheny and Kai M. A. Chan have formalized that point in an excellent paper, "The Illogic of the Larder."
For example if you claim to prefer non-existence of animals to them being used as food, then you clearly must support destruction of all nature reserves, as that's exactly the same choice. And if you're against animal suffering, you'd be totally happy to eat cows genetically modified not to have pain receptors. And so on. All positions never taken by any vegetarians.
I think most animal-welfare researchers would agree that animals on the nature reserve suffer less than those in factory farms, where conditions run contrary to the animals' evolved insti...
I like Peter Singer's "drowning child" argument in "Famine, Affluence, and Morality" as a way to illustrate the imperative to donate and, by implication, the value of money. As he says, "we ought to give the money [spent on fancy clothes] away, and it is wrong not to do so."
I do think there's a danger, though, in focusing on the wrongness of frivolous spending, which is relatively easy to criticize. It's harder to make people think about the wrongness of failing to make money that they could have donated. Opportunity costs are always harder to feel viscerally.
I intuitively sympathize with the complaints of status-quo bias, though it's of course also true that more changes from evolution's current local optimum entail more risk.
Here is another interesting reference on one form of congenital lack of pain.
it seems in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to "ensure they get born".
That's an interesting intuition, but one that I don't share. I concur with Steven and Vladimir. The whole point of the classical-utilitarian "Each to count for one and none for more than one" principle is that the identity of the collection of atoms experiencing an emotion is irrelevant. What matters is increasing the num...
As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another. [...] I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects.
It seems a strong claim to suggest that the limits you impose on yourself due to epistemological deficiency line up exactly with the mores and laws imposed by society. Are there some conventional ends-don't-justify-means notions that you would violate, or non-socially-taboo situations in which you would restrain yourself?
Also, what happens when the consequences grow large? Say 1 person to save 500, or 1 to save 3^^^^3?
The future--what will happen--is necessarily "fixed". To say that it isn't implies that what will happen may not happen, which is logically impossible.
Pablo, I think the debate is over whether there is such a thing as "what will happen"; maybe that question doesn't yet have an answer. In fact, I think any good definition of libertarian free will would require that it not have an answer yet.
So, can someone please explain just exactly what "free will" is such that the question of whether I have it or not has meaning?
As I see it,...
Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).
Even if our priors are symmetric for equally complex religious hypotheses, our posteriors almost certainly won't be. There's too much evidence in the world, and too many strong claims about these matters, for me to imagine that posteriors would come out even. Besides, even if two religions are equally probable, there may well be non-epistemic reasons to prefer one over the other.
However, if after chugging through the math, it didn't balance out and still t...
Jonah, I agree with what you say at least in principle, even if you would claim I don't follow it in practice. A big advantage of being Bayesian is that you retain probability mass on all the options rather than picking just one. (I recall many times being dismayed with hacky approximations like MAP that let you get rid of the less likely options. Similarly when people conflate the Solomonoff probability of a bitstring with the shortest program that outputs it, even though I guess in that case, the shortest program necessarily has at least as much probabil...
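To spell out the Solomonoff point in standard notation (this is the textbook formalization, not something from the original comment): with a prefix-free universal machine $U$, the algorithmic probability of a string $x$ is

$$
M(x) \;=\; \sum_{p\,:\,U(p)=x} 2^{-|p|} \;\ge\; 2^{-K(x)}, \qquad K(x) = \min\{|p| : U(p) = x\},
$$

so the shortest program contributes the single largest term, but $M(x)$ also includes the mass of every longer program that outputs $x$. Identifying $M(x)$ with $2^{-K(x)}$ therefore throws away that remainder, although the coding theorem guarantees the two quantities agree up to a multiplicative constant.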