In response to comment by [deleted] on Not Taking Over the World
Comment author: Uni 30 March 2011 05:05:15AM *  0 points [-]

Thanks for explaining!

I didn't mean to split hairs at all. I'm surprised that so many here seem to take it for granted that, if one had unlimited power, one would choose to let other people retain some say and some autonomy. If I had to listen to anybody else in order to make the best possible decision about what to do with the world, that would mean I had less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.

And besides:

Suppose I had less than unlimited power but still "rather complete" power over every human being, and suppose I created what would be a "utopia" only to some, but without changing anybody's mind against their will, and suppose some people then hated me for having created that "utopia". Why would they hate me? Because they would be unhappy. If I simply made them constantly happy by design - I wouldn't even have to make them intellectually approve of my utopia to do that - they wouldn't hate me, because a happy person doesn't hate.

Therefore, even in a scenario where I had not only "taken over the world", but where I would also be seen as having taken over the world, still nobody would hate me.

Comment author: Uni 30 March 2011 05:54:27AM *  1 point [-]

Suppose you said it would be wrong of me to make the haters happy "against their will". Why would that be wrong, if they would be happy to be happy once they had become happy? Should we not try to prevent suicides either? Not even the most obviously premature ones, not even temporarily, not even only to make the person attempting suicide rethink their decision a little more thoroughly?

Making a hater happy "against his will", with the result that he stops hating, is (I think) comparable to preventing a premature suicide in order to give that person an opportunity to reevaluate his situation and come to a better decision (by himself). By respecting only what a person wants right now, you are not respecting "that person, including who he will be in the future"; you are respecting only a tiny fraction of that. Strictly speaking, even the "now" we are talking about is in the future: if you are deciding now to act in someone's interest, you should base your decision on your expectation of what he will want by the time your action starts affecting him (which is not exactly now), rather than on what he wants right now. So whenever you respect someone's preferences, you are (or at least should be) respecting his future preferences, not his present ones.

(Suppose, for example, that you strongly suspect that, one second from now, I will prefer a painless state of mind, but you see that right now I'm trying to cut a piece of wood in a way that will make me cut myself in the leg in one second if you don't interfere. You should then interfere, and that can be explained (if by nothing else) by your expectation of what I will want one second from now, even if right now I have no preference other than getting that piece of wood cut in two.)

I suggest one should respect another person's (expected) distant future preferences more than his "present" (that is, very near future) ones, because his future preferences are more numerous (since there is more time for them) than his "present" ones. One would arguably be respecting him more that way, because one would be respecting more of his preferences - not favoring any one of his preferences over any other just because it happens to occur at a certain time.

This way, hedonistic utilitarianism can be seen as compatible with preference utilitarianism.

Comment author: [deleted] 29 March 2011 09:23:05PM 2 points [-]

Perhaps they see you as splitting hairs between being seen as taking over the world, and actually taking over the world. In your scenario you are not seen as taking over the world because you eliminate the ability to see that - but that means that you've actually taken over the world (to a degree greater than anyone has ever achieved before).

But in point of fact, you're right about the claim as stated. As for the downvotes - voting is frequently unfair, here and everywhere else.

Comment author: Uni 29 March 2011 10:33:12AM 2 points [-]

If you use your unlimited power to make everyone including yourself constantly happy by design, and reprogram the minds of everybody into always approving of whatever you do, nobody will complain or hate you. Make every particle in the universe cooperate perfectly to maximize the amount of happiness in all future spacetime (and in the past as well, if time travel is possible when you have unlimited power). Then there would be no need for free will or individual autonomy for anybody anymore.

Comment author: Uni 29 March 2011 09:12:35PM 3 points [-]

Why was that downvoted by 3?

All I did was disprove Billy Brown's claim that "If you implement any single utopian vision everyone who wanted a different one will hate you". Was it wrong of me to do so?

Comment author: Will_Sawin 28 March 2011 11:46:04PM 1 point [-]

How much of your resources should you devote to the next day vs. the next month vs. the next year? If each additional second of existence is vastly improbable, you may for simplicity assume a few moments of existence, but no longer.

If, OTOH, once you live, say, 3 seconds, it's as likely as not that you'll live a few more years - if there's some sort of bimodality - then such a stance is justified. But bimodality would only work if there were some sort of theoretical justification for it.

Comment author: Uni 29 March 2011 08:55:32PM *  2 points [-]

If everything that can happen, happens (sooner or later) - which is assumed - there will be continuations (not necessarily at the same spot in spacetime, but somewhere) of whatever brief life I have for a few seconds or Planck times now, and continuations of those continuations too, and so on, without end. That means I'm immortal, given that identity does not depend on the survival of any particular atoms (as opposed to the patterns in which atoms - any atoms, anywhere - are arranged). It follows that what I achieve during the short existences that are most common in the universe will be only parts of what I will have achieved in the long run, when all those short existences are "put together" (or thought of as one continuous life). Therefore, I should care about what my life will be like in a few years, in a few centuries, in a few googol years, et cetera - that is, about my whole infinitely long future - more than I care about any one short existence at any one place in spacetime. If I can maximize my overall happiness over my infinite life only by accepting a huge lot of suffering for a hundred years beginning now, I should do just that (if I'm a rational egoist).

My life may very well consist predominantly of extremely short-lived Boltzmann brains, but I don't die just because these Boltzmann brains die off one by one at a terrific rate.

Comment author: James_D._Miller 16 December 2008 03:05:37AM 3 points [-]

"Eliezer, I'd advise no sudden moves; think very carefully before doing anything."

But about 100 people die every minute!

Comment author: Uni 29 March 2011 10:46:07AM *  2 points [-]

100 people per minute is practically nothing compared to the gazillions of future people whose lives are at stake. I agree with Robin Hanson: think very carefully, for a very long time. Sacrifice the 100 people per minute for some years if you need to. But you wouldn't need to: with unlimited power, it should be possible to freeze the world (except yourself, your computer, and the power supply and food you need, et cetera) to absolute zero for an indefinite time, to get enough time to think about what to do with the world.

Or rather: with unlimited power, you would know immediately what to do, if unlimited power implies unlimited intelligence and unlimited knowledge by definition. If it doesn't, I find the concept "unlimited power" poorly defined. How can you have unlimited power without unlimited intelligence and unlimited knowledge?

So, just like Robin Hanson says, we shouldn't spend time on this problem now. We will solve it in the best possible way as soon as we have unlimited power. We can be sure the solution will be wonderful and perfect.

Comment author: Billy_Brown 15 December 2008 11:19:07PM 5 points [-]

This is a great device for illustrating how devilishly hard it is to do anything constructive with such overwhelming power, yet not be seen as taking over the world. If you give each individual whatever they want you’ve just destroyed every variety of collectivism or traditionalism on the planet, and those who valued those philosophies will curse you. If you implement any single utopian vision everyone who wanted a different one will hate you, and if you limit yourself to any minimal level of intervention everyone who wants larger benefits than you provide will be unhappy.

Really, I doubt that there is any course you can follow that won’t draw the ire of a large minority of humanity, because too many of us are emotionally committed to inflicting various conflicting forms of coercion on each other.

Comment author: Uni 29 March 2011 09:56:38AM 1 point [-]

Eliezer, you may not feel ready to be a father to a sentient AI, but do you agree that many humans are sufficiently ready to be fathers and mothers to ordinary human kids? Or do you think humans should stop procreating, for the sake of not creating beings that can suffer? Why care more about a not yet existing AI's future suffering than about not yet existing human kids' future suffering?

From a utilitarian perspective, initially allowing, for some years, suffering to occur in an AI that we build is a low price to pay for making possible the utopia that future AI may then become able to build.

Eliezer, at some point you talk of our ethical obligations to AI as if you believed in rights, but elsewhere you have said you think you are an average utilitarian. Which is it? If you believe in rights only in the utilitarianism-derived sense, don't you think the "rights" of some initial AI can, for some time, rightfully be sacrificed for the utilitarian sake of minimising Existential Risk, given that that would indeed minimise Existential Risk? (Much as the risk of collateral damage, in the form of killed and wounded civilians, should be accepted in some wars - for example, when "the good ones" had to kill some innocents in the process of fighting Hitler.)

Isn't the fundamentally more important question rather this: which can be expected to minimise Existential Risk more - creating sentient AI or creating nonsentient AI?

Comment author: Uni 28 March 2011 11:29:49PM 0 points [-]

Eliezer_Yudkowsky wrote: "I want to reply, 'But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising.'"

One will feel surprised by winning a million dollars in the lottery too, but that doesn't mean it would be rational to assume that, just because one won a million dollars in the lottery, most people win a million dollars in the lottery.

Maybe most of us exist only for a fraction of a second, but in that case, what is there to lose by (probably falsely, but maybe, maybe, maybe correctly) assuming that we exist much longer than that, and living accordingly? There is potentially something to gain by assuming that, and nothing to lose, so it may very well be rational to assume it, even though it is very unlikely to be the case!
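The wager in the paragraph above can be put in crude expected-value terms. A toy sketch follows; the probability and payoffs are placeholders of my own, not anything stated in the comment, and the key assumption is that planning for a long future costs nothing:

```python
# Toy decision matrix for "assume a long future" vs. "assume a brief one".
# Hypothetical numbers: persisting beyond a moment is assumed very unlikely,
# planning ahead is assumed free, and it pays off only if we do persist.
p_long = 1e-9  # assumed probability that we actually persist

payoff = {
    ("assume_long", "long"): 1.0,    # planned ahead and it paid off
    ("assume_long", "brief"): 0.0,   # planned for nothing; zero cost assumed
    ("assume_brief", "long"): 0.0,   # failed to plan; gained nothing
    ("assume_brief", "brief"): 0.0,  # nothing to plan for anyway
}

def expected_value(assumption: str) -> float:
    """Expected payoff of acting on the given assumption."""
    return (p_long * payoff[(assumption, "long")]
            + (1 - p_long) * payoff[(assumption, "brief")])

# Acting as if the future is long weakly dominates, under these assumptions:
assert expected_value("assume_long") > expected_value("assume_brief")
```

Note that the dominance rests entirely on the assumed zero cost of the long-future assumption; with any positive planning cost and a small enough `p_long`, the conclusion flips.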

Comment author: Uni 28 March 2011 10:26:57PM *  2 points [-]

When we have gained total control of all the matter down to every single particle within, say, our galaxy, and found out exactly what combinations we need to put particles together in to maximize the amount of happiness produced per particle used (and per spacetime unit), what if we then find ourselves faced with a choice between 1) maximizing happiness short term but not gaining control over more of the matter in the universe at the highest possible rate (in other words, not expanding maximally fast into the universe), and 2) maximizing that expansion rate at the cost of short-term happiness maximization? What if this trade-off problem persists forever?

We might find ourselves in a situation where, time after time, we can either use all of our matter to maximize the pace at which we take control over more and more matter, creating no short-term happiness at all, or create some non-zero amount of happiness in the short term at the expense of our ability to obtain much more happiness in the future. We might find that, hey, if we postpone being happy for one year, we can be ten times as happy next year as we would otherwise be able to be, and that's clearly better. And next year, we are in the same situation again: postponing being happy one more year again seems rational. Next year, same thing. And so on.

Suppose that kind of development never ended unless we ended it by "cashing in" (choosing short-term happiness before maximum development). When should we "cash in"? After how many years? Any finite number of years seems too small, since we could always add one extra year to further improve the expected long-term happiness gain. On the other hand, the answer "in infinitely many years from now" is not appealing either, as an infinity of years never passes, by definition, meaning we would never choose to be happy. So, when would you "cash in" and choose to be happy? After how many years?
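The regress described above can be made concrete with a toy model (my own illustration; the tenfold-per-year multiplier is the one the comment hypothesizes):

```python
# Toy model of the "cash in" dilemma: deferring happiness one more year
# multiplies the eventually realizable happiness by 10.
def realized_happiness(years_deferred: int, base: float = 1.0) -> float:
    """Happiness obtained if we cash in after deferring this many years."""
    return base * 10 ** years_deferred

# For every finite stopping time n, waiting one more year looks strictly better...
for n in range(100):
    assert realized_happiness(n + 1) > realized_happiness(n)

# ...yet the policy "always wait one more year" never cashes in at all, and so
# realizes zero happiness. No finite n is optimal, and "after infinitely many
# years" is not a year that ever arrives.
```

This is why the dilemma has no answer inside the model: the objective is unbounded in the stopping time, so "pick the best n" is not a well-posed optimization.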

Comment author: Uni 28 March 2011 10:05:22PM 0 points [-]

Eliezer_Yudkowsky wrote: "We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need [to know] how to tell that something is definitely not a person, and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living."

I do want AI models of people to be people. If an AI model of a human lacks the important part of the person that we call emotions, there is the risk that it will be a dangerously faulty model of that person. If it can't suffer, it can't understand why humans so eagerly want to get rid of suffering, and so it may in many situations think it knows better than us what is most important to achieve. It may regard the diminishing of suffering in humans as just as irrational as we would regard turning the planet into paper clips.

From a utilitarian perspective, even from a negative utilitarian perspective, building AI models of humans and equipping them with human emotions and the ability to suffer would not necessarily be unethical. It might be what would minimize suffering in the long run. Trying to minimize suffering too fast can be dangerous; it might take away something we really value along with the suffering, even if we don't value the suffering in and of itself. Future superhuman AI will know how to get rid of suffering in a completely safe way, once they have become in every relevant way immensely smarter than humans - but perhaps only if the first transhuman AI we build are equipped with the ability to suffer just like humans. You often have to know a problem well before you can safely solve it, and you can't really know suffering unless you actually feel it. So let's not hesitate to create AI that can suffer (once we can create such AI).

Happiness will grow immensely and suffering will and should be abolished later on, but let's not rush it during the process of creating AI models of humans.
