Thought experiment:

Through whatever accident of history underlies these philosophical dilemmas, you are faced with a choice between two, and only two, mutually exclusive options:

* Choose A, and all life and sapience in the solar system (and presumably the universe), save for a sapient paperclipping AI, dies.

* Choose B, and all life and sapience in the solar system, including the paperclipping AI, dies.

Phrased another way: does the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?

 

If anyone responds positively, subsequent questions would be which would be preferred: a paperclipper or a single bacterium; a paperclipper or a self-sustaining population of trilobites and their supporting ecology; a paperclipper or a self-sustaining population of australopithecines; and so forth, until the point of equivalent value is determined.

RowanE

I would choose the paperclipper, but not because I value its intelligence - paperclips are a human invention, and so the paperclipping AI represents a sort of memorial to humanity. A sign that humans once existed, that might last until the heat death of the universe.

My preference for this may perhaps be caused by a confusion, since this is effectively an aesthetic choice about a universe I will not be able to observe; but if other intelligences of a sort that I actually value will not be present either way, this problem doesn't matter, as long as it gives me reason enough to prefer one option over the other.

I think the point where I start to not prefer the paperclipper is somewhere between the trilobites and the australopithecines, closer to the australopithecine end of that range.

Phrased another way: does the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?

This is a different and cleaner question, because it avoids issues with intelligent life evolving again, and the paperclipper creating other kinds of life and intelligence for scientific or other reasons in the course of pursuing paperclip production.

I would say that if we use a weighted mixture of moral accounts (either from normative uncertainty, or trying to reflect a balance among varied impulses and intuitions), then it matters that the paperclipper could do OK on a number of theories of welfare and value:

  • Desire theories of welfare
  • Objective list theories of welfare
  • Hedonistic welfare theories, depending on what architecture is most conducive to producing paperclips (although this can cut both ways)
  • Perfectionism about scientific, technical, philosophical, and other forms of achievement
5Eliezer Yudkowsky
Paperclippers are worse than nothing because they might run ancestor simulations and prevent the rise of intelligent life elsewhere, as near as I can figure. They wouldn't enjoy life. I can't figure out how any of the welfare theories you specify could make paperclippers better than nothing?
6DataPacRat
Would it be possible to estimate how /much/ worse than nothing you consider a paperclipper to be?
4Pentashagon
Replace "paperclip maximizer" with "RNA maximizer." Apparently the long-term optimization power of a maximizer is the primary consideration for deciding whether it is ultimately better or worse than nothing. A perfect paperclipper would be bad but an imperfect one could be just as useful as early life on Earth.
2CarlShulman
And: Desires and preferences about paperclips can be satisfied. They can sense, learn, grow, reproduce, etc.
8Eliezer Yudkowsky
Do you personally take that seriously, or is it something someone else believes? Human experience with desire satisfaction and "learning" and "growth" isn't going to transfer over to how it is for paperclip maximizers, and a generalization that this is still something that matters to us is unlikely to succeed. I predict an absence of any there there.
6CarlShulman
Yes, I believe that the existence of the thing itself, setting aside impacts on other life that it creates or interferes with, is better than nothing, although far short of the best thing that could be done with comparable resources.
0MugaSofer
This is far from obvious. There are definitely people who claim "morality" is satisfying the preferences of as many agents as you can. If morality evolved for game-theoretic reasons, there might even be something to this, although I personally think it's too neat to endorse.
0Wei Dai
But they can also be unsatisfied. Earlier you said "this can cut both ways" but only on the "hedonistic welfare theories" bullet point. Why doesn't "can cut both ways" also apply for desire theories and objective list theories? For example, even if a paperclipper converts the entire accessible universe into paperclips, it might also want to convert other parts of the multiverse into paperclips but is powerless to do so. If we count unsatisfied desires as having negative value, then maybe a paperclipper has net negative value (i.e., is worse than nothing)?

I'm tempted to choose B just because if I choose A someone will try to use the Axiom of Transitivity to "prove" that I value some very large number of paperclippers more than some small number of humans. And I don't.

I might also choose B because the paperclipper might destroy various beautiful nonliving parts of the universe. I'm not sure if I really value beautiful rock formations and such, even if there is no one to view them. I tend to agree that something requires both an objective and subjective component to be truly valuable.

On the oth...

1A1987dM
I expected the link to go here. :-)

Yes, it seems that, as a human, I value systems/agents/whatever that tend to 'reduce entropy' or to 'bring order out of chaos' at least a tiny bit. Thus, if everything else is equal, I will take the paperclipper.

If the paperclipper is very, very stable, then no paperclipper is better, because of the higher probability of life -> sentience -> personhood arising again. If the paperclipper is a realistic sapient system, then chances are it will evolve out of paperclipping into personhood, and then the question is whether in expectation it will evolve faster than life otherwise would. Even if by assumption personhood does not arise again, it still depends on particulars; I pick the scenario with more interesting dynamics. If by assumption even life does not arise again, the paperclipper has more interesting dynamics.

3falenas108
What mechanism would a paperclipper have for developing out of being a paperclipper? If it has the terminal goal of increasing paperclips, then it will never self-modify into anything that will result in it creating fewer paperclips, even if under its new utility function it wouldn't care about that. Or: if A -> B -> C, and the paperclipper does not want C, then the paperclipper will not go to B.
0lukstafi
I'm imagining that the paperclipper will become a massively distributed system, with subunits pursuing subgoals; groups of subunits will be granted partial agency due to long-distance communication constraints, and over eons value drift will occur due to mutation. ETA: the paperclipper will be counteracting value drift, but will also pursue the fastest creation of paperclips and avoid extinction, which can trade off against preventing value drift.

over eons value drift will occur due to mutation

There is no random mutation in properly stored digital data. Cryptographic hashes (given backups) completely extinguish the analogy with biological mutation (in particular, the exact formulation of original values can be preserved indefinitely, as in to the end of time, very cheaply). Value drift can occur only as a result of bad decisions, and since not losing paperclipping values is instrumentally valuable to a paperclipper, it will apply its superintelligence to ensuring that such errors don't happen, and I expect will succeed.
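
To make the hashes-plus-backups point concrete, here is a minimal sketch (using Python's standard hashlib; the three-replica scheme and the names are illustrative assumptions, not a claim about how any real system would store its values) of how an exact value specification can be verified and restored indefinitely:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

values = b"maximize paperclips"     # the canonical value specification
reference_hash = digest(values)     # recorded once, checked forever after

# Keep several replicas; flip a byte in one to simulate stray corruption.
replicas = [bytearray(values) for _ in range(3)]
replicas[1][0] ^= 0xFF

# Any replica whose hash no longer matches is restored from an intact copy.
for i, replica in enumerate(replicas):
    if digest(bytes(replica)) != reference_hash:
        intact = next(r for r in replicas if digest(bytes(r)) == reference_hash)
        replicas[i] = bytearray(intact)

assert all(digest(bytes(r)) == reference_hash for r in replicas)
```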

1lukstafi
Then my parent comment boils down to: prefer the paperclipper only under the assumption that life would not have a chance to arise. ETA: my parent comment included the uncertainty in assessing the possibility of value drift in the "equation".
0Viliam_Bur
Well, the paperclip maximizer may be imperfect in some aspect. Maybe it didn't research cryptography, because at given time making more paperclips seemed like a better choice than researching cryptography. (All intelligent agents may at some moment face a choice between developing an abstract theory with uncertain possible future gains vs pursuing their goals more directly; and they may make a wrong choice.)
4gwern
The crypto here is a bit of a red herring; you want that in adversarial contexts, but a paperclipper may not necessarily optimize much for adversaries (the universe looks very empty). However, a lot of agents are going to research error-checking and correction because you simply can't build very advanced computing hardware without ECC somewhere in it - a good chunk of every hard drive is devoted to ECC for each sector, and discs like DVDs/BDs have a lot of ECC built in as well. And historically, ECC either predates the most primitive general-purpose digital computers (scribal textual checks) or closely accompanies them (e.g. Shannon's theorem), and of course we have a lot of natural examples (the redundancy in how DNA codons code for amino acids turns out to be highly optimized in an ECC sense). So, it seems pretty probable that ECC is a convergent instrumental technique.
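
For anyone unfamiliar with ECC, a minimal sketch of the underlying idea (a three-way repetition code with majority voting; real storage uses far denser codes such as Hamming or Reed-Solomon, so this is illustration only, not how any actual drive works):

```python
def encode(bits):
    """Store every bit three times (triple modular redundancy)."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority-vote each group of three copies."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

original = [1, 0, 1, 1, 0]
stored = encode(original)
stored[4] ^= 1                      # a single corrupted copy ("cosmic ray")
assert decode(stored) == original   # the flip is silently corrected
```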
4lukstafi
E.g. proofreading in biology

Choice B, on the grounds that a paperclipper is likely to prevent life as we know it from rising again through whatever mechanism it rose the first time.

For the slightly different case in which life both dies and is guaranteed not to rise naturally ever again, choice A. There's a small but finite chance of the paperclipper slipping enough bits to produce something worthwhile, like life. This is probably less likely than whatever jumpstarted life on Earth happening again.

For the again slightly different case in which life dies and is guaranteed not to ris...

2someonewrongonthenet
If I were a paperclipper and wanted to maximize paperclip output, it would make sense to have some form of self-replicating paperclip-manufacturing units.
0bogdanb
Well, yeah, but one doesn’t necessarily value those. I mean, there’s no difference between a paperclipper and a super-bacterium that will never change and perpetually creates copies of itself out of the entire universe. Life is usually considered worthwhile because of the diversity and the possibility of evolving into something resembling "persons", not just because it reproduces.
1someonewrongonthenet
True. What I said was in reference to: within a system of self-replicating information... maybe, just maybe, you'll start getting little selfish bits that are more concerned with replicating themselves than they are with making paperclips. It all starts from there. Assuming, of course, that the greater part of the paperclipper doesn't just find a way to crush these lesser selfish pieces. They're basically cancer.
0bogdanb
Oh, OK then. On this site I usually understand “paperclipper” to mean “something that will transform all the universe into paperclips unless stopped by someone smarter than it”, not just “something really good at making paperclips without supervision”. Someone please hit me with a clue stick if I’ve been totally wrong about that.
1NancyLebovitz
You've gotten it right this time.
1lukstafi
So you think that majestic paperclip engineering cannot be cool? (Only regarding your last paragraph.)
1DataPacRat
I hadn't considered the possibility of a paperclipper being able to do anything that could keep life from restarting from scratch. (Which is probably just one of many reasons I shouldn't be an AI gatekeeper...) Re your third point; once there are no longer any sapient beings left in the universe in which to judge the coolness of anything, do you feel it really matters whether or not they continue to exist? That is, do you feel that objects have some sort of objective measure of coolness which is worthwhile to preserve even in the absence of any subjective viewpoints to make coolness evaluations?
2someonewrongonthenet
Do you care intrinsically about anything which isn't a mind? This seems to be something that would vary individually.
0DataPacRat
It's an interesting question; so far, the closest I have to an answer is that any timeline which doesn't have minds within it to do any caring seems to me not to be worth caring about. Which leads to the answer to your question of 'nope'.

Let's try a variant...

Consider two planets, both completely devoid of anything resembling life or intelligence. Anyone who looks at either one of them sees an unremarkable hunk of rock of no particular value. In one of them, the center consists of more unremarkable rock. In the other, however, hidden beneath the surface is a cache that consists of replicas of every museum and library that currently exists on Earth, but which will never be found or seen by anyone (because nobody is going to bother to look that hard at an unremarkable hunk of rock). Does the existence of the second hunk of rock have more value than the first?

3A1987dM
Not to any non-negligible extent. If I had to choose one of the two, all other things being equal, I'd pick the latter, but if I had to pay five dollars to pick the latter I'd pick the former.
2bogdanb
Try this one: pick something, anything you want. How much would you value it if it existed outside the universe? Use an expanding universe to throw it irrevocably outside your future light cone if “existing outside the universe” is making your brain cringe. Or use a cycling crunch/bang universe, and suppose it existed before the last crunch.
0DataPacRat
Assuming the non-existence of some entity which eventually disassembles and records everything in the entire universe (and thus finds the library, violating your condition that it's never found)? Then, at least to me, the answer to your question is: nope.

If we ignore the possibility of future life arising again after human extinction, the paperclipper seems (maybe, a bit) better than extinction because of the possibility of acausal trade between the paperclipper and human values (see this comment and preceding discussion).

The value of possible future life arising by chance is probably discounted by fragility of value (alien values might be not much better than the paperclipper's), the risk of it not arising at all or getting squashed by its own existential risks (Fermi paradox), the risk of it also losing its valu...

0DataPacRat
I really wanted to ask that question, but I'm not actually very confident in my estimate of how sterile our own universe is, over the long term, so I'm afraid that I waffled a bit.
-1lukstafi
Some people reasonably think that value is simple and robust. Alien life will likely tend to share many of the more universal of our values, for example the "epistemic" values underlying development of science. ETA: Wow downvotes, gotta love them :-)
6MugaSofer
The default assumption around here is that value is complex and fragile. If you think you have a strong argument to the contrary, have you considered posting on it? Even if you don't want to endorse the position, you could still do a decent devil's-advocate steelman of it. EDIT: having read the linked article, it doesn't say what you seem to think it does. It's arguing Friendliness is simpler than we think, not that arbitrary minds will converge on it.
2lukstafi
In my opinion [i.e. it is my guess that], the value structures and considerations developed by alien evolved civilizations are likely to be similar to, and partially inter-translatable with, our value structures and considerations, in a manner akin to how their scientific theories and even the languages of their social life are likely to be inter-translatable (perhaps less similar than for scientific theories, more similar than for social languages).
0MugaSofer
Well, I guess it comes down to the evolutionary niches that produce intelligence and morality, doesn't it? There doesn't seem to be any single widely-accepted answer for either of them, although there are plenty of theories, some of which overlap, some don't. Then again, we don't even know how different they would be biologically, so I'm unwilling to make any confident pronouncement myself, other than professing skepticism for the particularly extreme ends of the scale. (Aliens would be humanoid because only humans evolved intelligence!) Anyway, do you think the arguments for your position are, well, strong? Referring to it as an "opinion" suggests not, but also suggests the arguments for the other side must be similarly weak, right? So maybe you could write about that.
0lukstafi
I appeal to (1) the consideration of whether the inter-translatability of science, and the valuing of certain theories over others, depends on the initial conditions of the civilization that develops it; (2) the universality of decision-theoretic and game-theoretic situations; (3) the evolutionary value of versatility, hinting at an evolved value of diversity.
-2MugaSofer
Not sure what 1 and 3 refer to, but 2 is conditional on a specific theory of origin for morality, right? A plausible one, to be sure, but by no means settled or demonstrated.
0lukstafi
My point is that the origin of values, the initial conditions, is not the sole criterion for determining whether a culture appreciates given values. There can be convergence or "discovery" of values.
0MugaSofer
Oh, do you mean that even quite alien beings might want to deal with us?
0lukstafi
No, I mean that we might give a shit even about quite alien beings.
0A1987dM
For some value of “similar”, I agree. Aliens as ‘alien’ as the Babyeaters or the Superhappies don't sound terribly implausible to me, but it'd be extremely hard for me to imagine anything like the Pebblesorters actually existing.
0lukstafi
Do you think that CEV-generating mechanisms are negotiable across species? I.e. whether other species would have a concept of CEV and would agree to at least some of the mechanisms that generate a CEV. It would enable determining which differences are reconcilable and where we have to agree to disagree.
0lukstafi
Is babyeating necessarily in the Babyeaters' CEV? Which of our developments (dropping slavery, no longer admiring Sparta, etc.) were in our CEV "from the beginning"? Perhaps the dynamics have some degree of convergence, even if with more than one basin of attraction.
0A1987dM
People disagree about that, and given that it has political implications (google for "moral progress") I dare no longer even speculate about that.
0lukstafi
I agree with your premise; I should have talked about moral progress rather than CEV. ETA: one does not need a linear order for the notion of progress; there can be multiple "basins of attraction". Some of the dynamics consists of decreasing inconsistencies and increasing robustness.
0lukstafi
Another point is that value (actually, a structure of values) shouldn't be confused with a way of life. Values are abstractions: various notions of beauty, curiosity, elegance, so called warmheartedness... The exact meaning of any particular such term is not a metaphysical entity, so it is difficult to claim that an identical term is instantiated across different cultures / ways of life. But there can be very good translations that map such terms onto a different way of life (and back). ETA: there are multiple ways of life in our cultures; a person can change her way of life by pursuing a different profession or a different hobby.
1MugaSofer
Values ultimately have to map to the real world, though, even if it's in a complicated way. If something wants the same world as me to exist, I'm not fussed as to what it calls the reason. But how likely is it that they will converge? That's what matters.
0lukstafi
I presume by "the same world" you mean a sufficiently overlapping class of worlds. I don't think that "the same world" is well defined. I think that determining in particular cases what is "the world" you want affects who you are.
0MugaSofer
Well, I suppose in practice it's a question of short-term instrumental goals overlapping, yeah.
-2MugaSofer
Today's relevant SMBC comic. I swear, that guy is spying on LW. He's watching us right now. Make a comic about THAT! *shakes fist*
  • Choose A, and all life and sapience in the solar system (and presumably the universe), save for a sapient paperclipping AI, dies.

  • Choose B, and all life and sapience in the solar system, including the paperclipping AI, dies.

I choose A. (OTOH, the difference between U(A) and U(B) is so small that throwing even a small probability of a different C in the mix could easily change that.)

If anyone responds positively, subsequent questions would be which would be preferred: a paperclipper or a single bacterium; a paperclipper or a self-sustaining populati

...
2bogdanb
I’m curious about your reasoning here. As others pointed out, a paperclipper is expected to be very stable, in the sense that it is plausible it will paperclip everything forever. Bacteria, however, have the potential to evolve a new ecosystem, and thus to lead to "people" existing again. (Admittedly, a single bacterium would need a very favorable environment.) And a paperclipper might even destroy/prevent life that would have evolved even without any bacteria at all. (After all, it happened at least once that we know of, and forever is a long time.)
1A1987dM
I was more going with my gut feelings than with reasoning; anyway, thinking about the possibility of intelligent life arising again sounds like fighting the hypothetical to me (akin to thinking about the possibility of being incarcerated in the trolley dilemma), and also I'm not sure that there's any guarantee that such a new intelligent life would be any more humane than the paperclipper.
2bogdanb
Well, he did say “solar system (and presumably the universe)”. So considering the universe is stipulated in the hypothetical, but the “presumably” suggests the hypothetical does not dictate the universe. And given that the universe is much bigger than the solar system, it makes sense to me to think about it. (And hey, it’s hard to be less human than a paperclipper and still be intelligent. I thought that’s why we use paperclippers in these things.) If the trolley problem mentioned “everybody on Earth” somewhere, it would be reasonable to actually consider other people than those on the track. Lesson: If you’re making a thought experiment about spherical cows in a vacuum, don’t mention pastures.

I'd say.. no, the paperclipper probably has negative value.

3DataPacRat
To be clear: you're saying that you would prefer nothing at all over the existence of a single thing which takes negentropy and converts it into order (or whatever other general definition of 'life' you prefer), and which may or may not have the possibility of evolving into something more complicated?
5Baughn
I'm thinking that the paperclipper counts as a life not worth living - an AI that wants to obsess about paperclips is about as repugnant to me as a cow that wants to be eaten. Which is to say, better than doing either of those without wanting it, but still pretty bad. Yes, I'm likely to have problems with a lot of genuinely friendly AIs. I was assuming that both scenarios were for keeps. Certainly the paperclipper should be smart enough to ensure that; for the other, I guess I'll assume you're actually destroying the universe somehow.
2lukstafi
It is a fair point but do you mean that the paperclipper is wrong in its judgement that its life is worth living, or is it merely your judgement that if you were the paperclipper your life would not be worth living by your current standards? Remember that we assume that there is no other life possible in the universe anyway -- this assumption makes things more interesting.
3Baughn
It's my judgement that the paperclipper's life is not worth living. By my standards, sure; objective morality makes no sense, so what other standards could I use? The paperclipper's own opinion matters to me, but not all that much.
0lukstafi
Would you engage with a particular paperclipper in a discussion (plus observation etc.) to refine your views on whether its life is worth living? (We are straying away from a nominal AIXI-type definition of "the" paperclipper but I think your initial comment warrants that. Besides, even an AIXI agent depends on both terminal values and history.)
4Baughn
No, if I did so it'd hack my mind and convince me to make paperclips in my own universe. Assuming it couldn't somehow use the communications channel to directly take over our universe. I'm not quite sure what you're asking here.
1lukstafi
Oh well, I haven't thought of that. I was "asking" about the methodology for judging whether a life is worth living.
0Baughn
Whether or not I would enjoy living it, taking into account any mental changes I would be okay with. For a paperclipper.. yeah, no.
3lukstafi
But you have banned most of the means of approximating the experience of living such a life, no? In a general case you wouldn't be justified in your claim (where by general case I mean the situation where I have strong doubts you know the other entity, not the case of "the" paperclipper). Do you have a proof that having a single terminal value excludes having a rich structure of instrumental values? Or does the way you experience terminal values overwhelm the way you experience instrumental values?
0MugaSofer
Assuming that clippy (or the cow, which makes more sense) feels "enjoyment", aren't you just failing to model them properly?
0Baughn
It's feeling enjoyment from things I dislike, and failing to pursue goals I do share. It has little value in my eyes.
-2MugaSofer
Which is why I, who like chocolate ice cream, categorically refuse to buy vanilla or strawberry for my friends.
2Baughn
Nice strawman you've got there. Pity if something were to.. happen to it. The precise tastes are mostly irrelevant, as you well know. Consider instead a scenario where your friend asks you to buy a dose of cocaine.
-4MugaSofer
I stand by my reductio. What is the difference between clippy enjoying paperclips vs humans enjoying ice cream, and me enjoying chocolate ice cream vs you enjoying strawberry? Assuming none of them are doing things that give each other negative utility, such as clippy turning you into paperclips or me paying the ice cream vendor to only purchase chocolate (more for me!)
0[anonymous]
That sounds as if scenario B precluded abiogenesis from happening ever again. After all, prebiotic Earth kind of was a thing which took negentropy and (eventually) converted it into order.
0DataPacRat
The question for B might then become: under which scenario is some sort of biogenesis more likely, one in which a paperclipper exists, or one in which it doesn't? The former includes the paperclipper itself as potential fodder for evolution, but (as was just pointed out) there's a chance the paperclipper might work to prevent it; while the latter has it for neither fodder nor interference, leaving things to natural processes. At what point in biogenesis/evolution/etc. do you think the Great Filter does its filtering?
[anonymous]

DataPacRat, I like that you included subsequent questions, and I think there may be other ways of structuring subsequent questions which could make people think about different answers.

Example: Is a paperclipper better than the alternative, and for what likely duration of time?

For instance, take the Trilobites vs paperclipper scenario you mentioned. I am imagining:

A: A solar system that has trilobites for 1 billion years, until it is engulfed by its sun and everything dies.

B: A solar system that has trilobites in a self-sustaining Gaia planet for...

2DSherron
Run it. There is a non-zero possibility that a paperclip AI could destroy other life which I would care about, and a probability that it would create such life. I would put every effort I could into determining those two probabilities (mostly by accumulating the evidence from people much smarter than me, but still). I'll do the action with the highest expected value. If I had no time, though, I'd run it, because I estimate a ridiculously small chance that it would create life relative to destroying everything I could possibly care about.
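
As a toy illustration of that expected-value rule (the probabilities and utilities below are made-up placeholders, not anyone's actual estimates, and the only value modeled is whether valued life eventually exists):

```python
# Choice A: the paperclipper survives; choice B: everything (including it) dies.
p_clippy_creates_valued_life = 1e-15   # paperclipper eventually makes life we value
p_life_rearises_if_sterile   = 1e-6    # valued life (re)arises from a sterile start
u_valued_life, u_nothing     = 1.0, 0.0

ev_A = (p_clippy_creates_valued_life * u_valued_life
        + (1 - p_clippy_creates_valued_life) * u_nothing)
ev_B = (p_life_rearises_if_sterile * u_valued_life
        + (1 - p_life_rearises_if_sterile) * u_nothing)

print("choose A" if ev_A > ev_B else "choose B")
```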

I tend to model Paperclippers as conscious, simply because it's easier to use bits of my own brain as a black box. So naturally my instinct is to value its existence the same as any other modified human mind (although not more than any lives it might endanger).

However, IIRC, the original "paperclip-maximizer" was supposed to be nonsentient; probably still worth something in the absence of "life", but tricky to assign based on my intuitions (is it even possible to have a sufficiently smart being I don't value the same way I do "conscious" ones?)

In other words, I have managed to confuse my intuitions here.

[D]oes the existence of any intelligence at all, even a paperclipper, have even the smallest amount of utility above no intelligence at all?

Have utility to whom?

I presume when we are all dead, we will have no utility functions.

4DataPacRat
:) Usually, I'm the one who has to point this idea out when such discussions come up. But to answer your question - it would be the you-of-the-present who is making a judgement call about which future scenario present-you values more. While it's true that there won't be a future-you within either future with which to experience said future, that doesn't mean present-you can't prefer one outcome to the other.
1aelephant
Because present-me knows that I won't be around to experience either future, present-me doesn't care either way. I'd flip a coin if I had to decide.
4MugaSofer
Which is why, naturally, you wouldn't sacrifice your life to save the world.
-1aelephant
A little different from the proposed situation. There would be plenty of other people with utility functions surviving if I sacrificed myself to save the world.
0Decius
Does entropy in an isolated system decrease in either universe? Present-me considers the indistinguishable end states equivalent.
0bogdanb
I know this doesn’t sound quite consequentialistic enough for some around here, but sometimes the journey matters too, not just the destination ;-) And when the destination is guaranteed to be the same...

... Solar system, therefore universe? Does not seem plausible. For no sapient life to ever develop in the observable universe, sapience needs to be WAY rarer. And the universe is infinite.

3DataPacRat
The solar system, plus the complete past light-cone leading up to it, has developed a total of one intelligence; and since, if that one hadn't developed, we wouldn't be around to have this discussion in the first place, there are good reasons for not including it in our count. I'm not sure that your latter statement is correct, either; do you have any references to evidence regarding the infiniteness, or lack thereof, of the universe?
4bogdanb
Oh really? How can you tell that, say, none of the galaxies in the Hubble Deep Field developed intelligence? Hell, how can you tell there are no intelligent beings floating inside Jupiter right now?
1ikrase
Infinite universe: I thought that this was pretty settled science? Or at least that it's much bigger than the Hubble limit? Why must the entire light-cone leading to the solar system have only one intelligence? Are you assuming that all intelligences will reach a singularity faster than geological time, and then intrusively colonize space at the speed of light, thus preventing future intelligences from arising? What about intelligences that are really, really far away? I think you are making really unjustifiable assumptions. I think this kind of anthropic stuff is... risky. Would we be able to see a bronze-age civilization 500 ly away? Is it possible that such things could be more stable than ours? And a bronze-age civilization is pretty different from nothing, more like ours than nothing.
1MugaSofer
Big, yes. Infinite? No. And even the biggest finite universe is infinitely smaller than an infinite one, of course.
0MugaSofer
I know it's usual to equate "intelligent" with "human", just because we're the smartest ones around, but there are some pretty smart nonhuman animals around; presumably the present isn't unique in having them, either.

How would the answers to these questions affect what you would do differently here and now?

7DataPacRat
I hope to use them to help work out the answers in extreme, edge-case conditions, to test various ethical systems and choose which one(s) provide the best advice for my long-term good. Given that, so far, various LWers have said that a paperclipper could be better, worse, or around the same value as a sapience-free universe, I at least seem to have identified a boundary that's somewhat fuzzy, even among some of the people who'd have the best idea of an answer.
1A1987dM
Hard cases make bad law. If you're going to decide whether to use Newtonian physics or general relativity for some everyday situation, you don't decide based on which theory makes the correct predictions near a black hole; you decide based on which is easier to use while still giving usable results.
2DataPacRat
A true enough analogy; but when you're trying to figure out whether Newtonian or Aristotelian physics is better for some everyday situation, it's nice to have general relativity to refer to, so that it's possible to figure out what GR simplifies down to in those everyday cases.
-1Tenoke
How would answering your question affect what you would do differently here and now? See what I did there?

I chose A, on the off-chance that it interprets that in some kind of decision-theoretical way that makes it do something I value in return for the favour.

4Vladimir_Nesov
(This phrases the answer in terms of identity. The question should be about the abstract choice itself, not about anyone's decision about it. What do we understand about the choice? We don't actually need to decide.)
1Manfred
Since your doing it "on the off chance" doesn't correlate with whether or not it does anything special, any paperclipper worth its wire would make paperclips.
0Mestroyer
In other words, you're changing the thought experiment.

The two scenarios have equal utility to me, as close as I can tell. The paperclipper (and the many copies of itself it would make) would be minds optimized for creating and maintaining paperclips (though maybe it would kill itself off to create more paperclips eventually?) and would not be sentient. In contrast to you, I think I care about sentience, not sapience. To the very small extent that I saw the paperclipper as a person, rather than as a force of clips, I would wish it ill, but only in a half-hearted way, which wouldn't scale to disutility for every paperclip it successfully created.

2DataPacRat
I tend to use 'sentience' to separate animal-like things which can sense their environment from plant-like things which can't; and 'sapience' to separate human-like things which can think abstractly from critter-like things which can't. At the least, that's the approach that was in the back of my mind as I wrote the initial post. By these definitions, a paperclipper AI would have to be both sentient, in order to be sufficiently aware of its environment to create paperclips, and sapient, to think of ways to do so. If I may ask, what quality are you describing with the word 'sentience'?
1MugaSofer
Probably the same thing people mean when they say "consciousness". At least, that's the common usage I've seen.
0Mestroyer
I'm thinking of having feelings. I care about many critter-like things which can't think abstractly, but do feel. But just having senses is not enough for me.
3Vladimir_Nesov
What you care about is not obviously the same thing as what is valuable to you. What's valuable is a confusing question that you shouldn't be confident in knowing a solution to. You may provisionally decide to follow some moral principles (for example in order to be able to exercise consequentialism more easily), but making a decision doesn't necessitate being anywhere close to being sure of its correctness. The best decision that you can make may still in your estimation be much worse than the best theoretically possible decision (here, I'm applying this observation to a decision to provisionally adopt certain moral principles).
2DataPacRat
To use a knowingly-inaccurate analogy: a layer of sensory/instinctual lizard brain isn't enough, a layer of thinking human brain is irrelevant, but a layer of feeling mammalian brain is just right?
0Mestroyer
Sounds about right, given the inaccurate biology.
1bartimaeus
How about a sentient AI whose utility function is orthogonal to yours? You care nothing about anything it cares about and it cares about nothing you care about. Also, would you call such an AI sentient?
1Mestroyer
You said it was sentient, so of course I would call it sentient. I would either value that future, or disvalue it. I'm not sure to what extent I would be glad some creature was happy, or to what extent I'd be mad at it for killing everyone else, though.

Is a paperclipper better than nothing?

Nope. I choose B.

Maybe people who think that paperclips aren't boring enough can replace the paperclip maximizer with a supermassive black hole maximizer, as suggested here.

1ThisSpaceAvailable
Well, the statement that "supermassive black holes with no ordinary matter nearby cannot evolve or be turned into anything interesting" is false.

I prefer A. The paperclipping AI will need to contemplate many interesting and difficult problems in physics, logistics, etc. to maximize paperclips. In doing so it will achieve many triumphs I would like a descendant of humanity to achieve. One potential problem I see is that the paperclipper would be crueler to intelligent life on other planets that isn't powerful enough to have leverage over it.

Benatar's asymmetry between life and death makes B the best option. But as his argument is hard to accept, A is better, whatever human values the AI implements.