All of Uni's Comments + Replies

Uni00

I'm a Swedish reader. A meetup in Stockholm would be great!

Uni30

The probability that the universe only has finite space is not exactly 1, is it? Much more might exist than our particular Hubble volume, no? What probability do, say, the world's top 100 physicists assign, on average, to the possibility that infinitely much matter exists? And on what grounds?

To my understanding, the universe might be so large that everything that could be described with infinitely many characters actually exists. That kind of "TOE" actually passes the Ockham's razor test excellently; if the universe is that large, then it could ... (read more)

1jwoodward48
"The probability that the universe only has finite space is not exactly 1, is it?" Nooooo, that's not it. The probability that the reachable space from a particular point within a certain time is finite is effectively one. So it doesn't matter how large the universe is - the aliens a few trillion ly away cannot have killed Bob.
Uni00

So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?

Governments should give people what people say they want, rather than giving people what the governments think will make people happier, whenever they can't do both. But this is not because it's intrinsically better for people to get what they want than to get what makes them happier (it isn't); it's because people will resent what they perceive as paternalism in governments, and because they won't pay taxes and obey laws in general if th... (read more)

Uni00

Nothing which you have written appears to show that it's impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.

I'm not trying to show that. I agree that people try to get things they want, as long as with "things they want" we mean "things that they are tempted to go for because the thought of going for those things is so pleasurable".

(something about X) --> (I like the thought of X) -->

... (read more)
1nshepperd
You're missing the point, or perhaps I'm missing your point. A paperclip maximiser implemented by having the program experience subjective pleasure when considering an action that results in lots of paperclips, and which decides by taking the action with the highest associated subjective pleasure, is still a paperclip maximiser.

So, I think you're confusing levels. On the decision-making level, you can hypothesise that decisions are made by attaching a "pleasure" feeling to each option and taking the one with the highest pleasure. Sure, fine. But this doesn't mean it's wrong for an option which predictably results in less physical pleasure later to feel less pleasurable during decision making. The decision system could have been implemented equally well by associating options with colors and picking the brightest or something, without meaning the agent is irrational to take an action that physically darkens the environment. This is just a way of implementing the algorithm, which is not about the brightness of the environment or the light levels observed by the agent.

This is what I mean by "(I like the thought of X) would seem to be an unnecessary step". The implementation is not particularly relevant to the values. Noticing that pleasure is there at a step in the decision process doesn't tell you what should feel pleasurable and what shouldn't; it just tells you a bit about the mechanisms.

Of course I believe that pleasure has intrinsic value. We value fun; pleasure can be fun. But I can't believe pleasure is the only thing with intrinsic value. We don't use Nozick's pleasure machine, we don't choose to be turned into orgasmium, we are willing to be hurt for higher benefits. I don't think any of those things are mistakes.
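To make the level distinction concrete, here is a minimal Python sketch (the option names and payoff numbers are invented for illustration, not taken from the thread): the decision procedure is an argmax over internally generated scores, and relabelling that score "pleasure" or "brightness" changes nothing about which outcomes the agent steers toward.

```python
# Illustrative sketch with hypothetical names: a "paperclip maximiser" whose
# decision step is implemented as "pick the option whose thought feels most
# pleasurable". The internal label on the score is irrelevant; what the
# agent optimises is still paperclips.

def expected_paperclips(option):
    # Hypothetical world model: how many paperclips each action yields.
    return {"build_factory": 1_000_000, "do_nothing": 0, "dismantle_factory": -500}[option]

def pleasure_of_thought(option):
    # The "pleasure" attached to contemplating an option is just a
    # monotone function of expected paperclips.
    return expected_paperclips(option)

def brightness_of_thought(option):
    # An alternative implementation: same ordering, different label.
    return expected_paperclips(option)

def decide(options, score):
    # The decision rule: take whichever option scores highest.
    return max(options, key=score)

options = ["build_factory", "do_nothing", "dismantle_factory"]
assert decide(options, pleasure_of_thought) == decide(options, brightness_of_thought) == "build_factory"
```

Either way the agent is a paperclip maximiser; noticing the "pleasure" variable inside the loop tells you about the mechanism, not about what the agent values.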
Uni00

In trusting your own judgment that building an AI based on how humans currently are would be a bad thing, you implicitly trust human nature, because you are a human and so presumably driven by human nature. This undermines your claim that a super-human AI that is "merely more of everything that it is to be human" would be a worse thing than a human.

Sure, humans with power often use their power to make other humans suffer, but power imbalances would not, by themselves, cause humans to suffer were human brains not such that they can very easily... (read more)

Uni00

No, I didn't just try to say that "people like the thought of getting what they want". The title of the article says "not for the sake of pleasure alone". I tried to show that that is false. Everything we do, we do for pleasure alone, or to avoid or decrease suffering. We never make a decision based on a want that is not in turn based on a like/dislike. All "wants" are servile consequences of "likes"/"dislikes", so I think "wants" should be treated as mere transitional steps, not as initial causes of our decisions.

1nshepperd
You've just shown that wanting and liking go together, and asserted that one of them is more fundamental. Nothing which you have written appears to show that it's impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.

And nevertheless, people still don't just optimize for pleasure, since they would take the drug mentioned, despite the fact that doing so is far less pleasurable than the alternative, even if the "pleasure involved in deciding to do so" is taken into account.

Sure, you can say that only the "pleasure involved in deciding" or "liking the thought of" is relevant, upon which your account of decision making reduces to (something about X) --> (I like the thought of X) --> (I take action X), where (I like the thought of X) would seem to be an unnecessary step, since the same result would be obtained by eliminating it, and of course you still haven't looked inside the black box (something about X).

Or you can suggest that people are just mistaken about how pleasurable the results will be of any action they take that doesn't maximise pleasure. But at that point you're trying to construct sensible preferences from a mind that appears to be wrong about almost everything, including the blatantly obvious, and I have to wonder exactly what evidence in this mind points toward the "true" preferences being "maximal pleasure".
Uni-10

The pleasure machine argument is flawed for a number of reasons:

1) It assumes that, despite having never been inside the pleasure machine, but having lots of experience of the world outside of it, you could make an unbiased decision about whether to enter the pleasure machine or not. It's like asking someone if he would move all his money from a bank he knows a lot about to a bank he knows basically nothing about and that is merely claimed to make him richer than his current bank. I'm sure that if someone would build a machine that, after I stepped into it... (read more)

Uni00

Going for what you "want" is merely going for what you like the thought of. To like the thought of something is to like something (in this case the "something" that you like is the thought of something; a thought is also something). This means that wanting cannot happen unless there is liking that creates the wanting. So, of wanting and liking, liking is the only thing that can ever independently make us make any choice we make. Wanting which is not entirely contingent on liking never makes us make any decisions, because there is no suc... (read more)

0nshepperd
Isn't this just a way of saying that people like the thought of getting what they want? Indeed, it would be rather odd if expecting to get what we want made us unhappy. See also here, I guess.
Uni10

Wrong compared to what? Compared to no sympathies at all? If that's what you mean, doesn't that imply that humans must be expected to make the world worse rather than better, whatever they try to do? Isn't that a rather counterproductive belief (assuming that you'd prefer that the world became a better place rather than not)?

AI with human sympathies would at least be based on something that is tested and found to work throughout the ages, namely the human being as a whole, with all its flaws and merits. If you try to build the same thing but without those trai... (read more)

0HoverHell
-
Uni00

I recommend reading this sequence.

Thanks for recommending.

Suffice it to say that you are wrong, and power does not bring with it morality.

I have never assumed that "power brings with it morality" if we with power mean limited power. Some superhuman AI might very well be more immoral than humans are. I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will kno... (read more)

-2xxd
This is a cliche and may be false, but it's assumed true: "Power corrupts and absolute power corrupts absolutely". I wouldn't want anybody to have absolute power, not even myself; the only possible use of absolute power I would like to have would be to stop any evil person getting it. To my mind evil = coercion, and therefore any human who seeks any kind of coercion over others is evil. My version of evil is the least evil, I believe.

EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is that rejecting power is less corrupt - whereas Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.
6wedrifid
I'm not sure about 'proof' but hedonistic utilitarianism can be casually dismissed out of hand as not particularly desirable and the idea that giving a being ultimate power will make them adopt such preferences is absurd.
8ameriver
What I got out of this sentence is that you believe someone (anyone?), given absolute power over the universe, would be imbued with knowledge of how to maximize for human happiness. Is that an accurate representation of your position? Would you be willing to provide a more detailed explanation? Not everyone is a hedonistic utilitarian. What if the person/entity who ends up with ultimate power enjoys the suffering of others? Is your claim that their value system would be rewritten to hedonistic utilitarianism upon receiving power? I do not see any reason why that should be the case. What are your reasons for believing that a being with unlimited power would understand that?
Uni10

Suppose you'd say it would be wrong of me to make the haters happy "against their will". Why would that be wrong, if they would be happy to be happy once they have become happy? Should we not try to prevent suicides either? Not even the most obviously premature suicides, not even temporarily, not even only to make the suicide attempter rethink their decision a little more thoroughly?

Making a hater happy "against his will", with the result that he stops hating, is (I think) comparable to preventing a premature suicide in order to give th... (read more)

Uni00

Thanks for explaining!

I didn't mean to split hairs at all. I'm surprised that so many here seem to take it for granted that, if one had unlimited power, one would choose to let other people retain some say and some autonomy. If I had to listen to anybody else in order to be able to make the best possible decision about what to do with the world, this would mean I had less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.

And besides:

Suppose I'd have less than unlimited power but still "... (read more)

2TheOtherDave
This is certainly true. If you have sufficient power, and if my existing values, preferences, beliefs, expectations, etc. are of little or no value to you, but my approval is, then you can choose to override my existing values, preferences, beliefs, expectations, etc. and replace them with whatever values, preferences, beliefs, expectations, etc. would cause me to approve of whatever it is you've done, and that achieves your goals.
1Uni
Suppose you'd say it would be wrong of me to make the haters happy "against their will". Why would that be wrong, if they would be happy to be happy once they have become happy? Should we not try to prevent suicides either? Not even the most obviously premature suicides, not even temporarily, not even only to make the suicide attempter rethink their decision a little more thoroughly?

Making a hater happy "against his will", with the result that he stops hating, is (I think) comparable to preventing a premature suicide in order to give that person an opportunity to reevaluate his situation and come to a better decision (by himself). By respecting what a person wants right now only, you are not respecting "that person including who he will be in the future", you are respecting only a tiny fraction of that.

Strictly speaking, even the "now" we are talking about is in the future, because if you are now deciding to act in someone's interest, you should base your decision on your expectation of what he will want by the time your action would start affecting him (which is not exactly now), rather than what he wants right now. So, whenever you respect someone's preferences, you are (or at least should be) respecting his future preferences, not his present ones. (Suppose for example that you strongly suspect that, in one second from now, I will prefer a painless state of mind, but that you see that right now, I'm trying to cut off a piece of wood in a way that you see will make me cut my leg in one second if you don't interfere. You should then interfere, and that can be explained by (if not by anything else) your expectation of what I will want one second from now, even if right now I have no other preference than getting that piece of wood cut in two.)

I suggest one should respect another person's (expected) distant future preferences more than his "present" (that is, very close future) ones, because his future preferences are more numerous (since there is more tim
7Alicorn
I recommend reading this sequence. Suffice it to say that you are wrong, and power does not bring with it morality. What is your support for this claim? (I smell argument by definition...)
Uni50

Why was that downvoted by 3?

What I did was, I disproved Billy Brown's claim that "If you implement any single utopian vision everyone who wanted a different one will hate you". Was it wrong of me to do so?

9nshepperd
While you are technically correct, the spirit of the original post and a charitable interpretation was, as I read it, "no matter what you decide to do with your unlimited power, someone will hate your plan". Of course if you decide to use your unlimited power to blow up the earth, no one will complain because they're all dead. But if you asked the population of earth what they think of your plan to blow up the earth, the response will be largely negative. The contention is that no matter what plan you try to concoct, there will be someone such that, if you told them about the plan and they could see what the outcome would be, they would hate it.
1[anonymous]
Perhaps they see you as splitting hairs between being seen as taking over the world, and actually taking over the world. In your scenario you are not seen as taking over the world because you eliminate the ability to see that - but that means that you've actually taken over the world (to a degree greater than anyone has ever achieved before). But in point of fact, you're right about the claim as stated. As for the downvotes - voting is frequently unfair, here and everywhere else.
Uni10

If everything that can happen, happens (sooner or later) - which is assumed - there will be continuations (not necessarily at the same spot in spacetime, but somewhere) of whatever brief life I have for a few seconds or Planck times now, and continuations of those continuations too, and so on, without an end, meaning I'm immortal, given that identity is not dependent on the survival of any particular atoms (as opposed to patterns in which atoms, any atoms, are arranged, anywhere). This means that what I achieve during the short existences that are most com
... (read more)
0Will_Sawin
I said "how much" not "if". My point is that you should care vastly more about the next few seconds then a few years from now.
Uni10

100 people is practically nothing compared to the gazillions of future people whose lives are at stake. I agree with Robin Hanson: think carefully for a very long time. Sacrifice the 100 people per minute for some years if you need to. But you wouldn't need to. With unlimited power, it should be possible to freeze the world (except yourself, and your computer and the power supply and food you need, et cetera) to absolute zero temperature for an indefinite time, to get enough time to think about what to do with the world.

Or rather: with unlimited power, you would kno... (read more)

2Houshalter
The entire point of this was an analogy for creating Friendly AI. The AI would have absurd amounts of power, but we have to decide what we want it to do using our limited human intelligence. I suppose you could just ask the AI for more intelligence first, but even that isn't a trivial problem. Would it be ok to alter your mind in such a way that it changes your personality or your values? Is it possible to increase your intelligence without doing that? And tons of other issues trying to specify such a specific goal.
Uni-10

If you use your unlimited power to make everyone including yourself constantly happy by design, and reprogram the minds of everybody into always approving of whatever you do, nobody will complain or hate you. Make every particle in the universe cooperate perfectly to maximize the amount of happiness in all future spacetime (and in the past as well, if time travel is possible when you have unlimited power). Then there would be no need for free will or individual autonomy for anybody anymore.

2christopherj
Incidentally, it is currently possible to achieve total happiness, or perhaps a close approximation. A carefully implanted electrode in the right part of the brain will be more desirable than food to a starving rat, for example. While this part of the brain is called the "pleasure center", it might rather be about desire and reward instead. Nevertheless, pleasure and happiness are by necessity mental states, and it should be possible to artificially create these.

Why should a man who is perfectly content bother to get up to eat, or perhaps achieve something? He may starve to death, but would be happy to do so. And such a man will be content with his current state, which of course is contentment, and not at all resent his current state. Even in a less invasive case, where a man is given almost everything he wants - yet not so much that he does not eventually become dissatisfied with the amount of food in his belly and decide to put more in - there will be higher-level motivations this man will lose.

While I consider myself a utilitarian, and believe the best choices are those that maximize the values of everyone, I cannot agree with the above situation. For now, this is no problem, because people in their current state would not choose to artificially fulfill their desires via electrode implants, nor is it yet possible to actually fulfill everyone's desires in the real world. I shall now go and rethink why I choose a certain path, if I cannot abide reaching the destination.
5Uni
Why was that downvoted by 3? What I did was, I disproved Billy Brown's claim that "If you implement any single utopian vision everyone who wanted a different one will hate you". Was it wrong of me to do so?
Uni10

Eliezer, you may not feel ready to be a father to a sentient AI, but do you agree that many humans are sufficiently ready to be fathers and mothers to ordinary human kids? Or do you think humans should stop procreating, for the sake of not creating beings that can suffer? Why care more about a not yet existing AI's future suffering than about not yet existing human kids' future suffering?

From a utilitarian perspective, initially allowing, for some years, suffering to occur in an AI that we build is a low price to pay for making possible the utopia that fut... (read more)

Uni00

Eliezer_Yudkowsky wrote: "I want to reply, "But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising."

One will feel surprised by winning a million dollars in the lottery too, but that doesn't mean it would be rational to assume that, just because one won a million dollars in the lottery, most people win a million dollars in the lottery.

Maybe most of us exist only for a fraction of a second, but in that case, what is there to lose by (probably falsely, but m... (read more)

1Will_Sawin
How much resources should you devote to the next day vs. the next month vs. the next year? If each additional second of existence is a vast improbability, for simplicity you may assume a few moments of existence, but no longer. If, OTOH, once you live, say, 3 seconds, it's as likely as not that you'll live a few more years - there's some sort of bimodality - then such a stance is justified. Bimodality would only work if there were some sort of theoretical justification.
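A rough numerical sketch of this point, assuming (purely for illustration) that each further second of existence is survived with some small independent probability p: under such a distribution nearly all expected experience lies in the next few moments, which is why only a bimodal survival distribution would justify longer-term plans. The value of p below is made up.

```python
# Illustration only: if surviving each further second has independent
# probability p, the chance of still existing after t seconds is p**t,
# so the weight placed on times far in the future is negligible.
p = 0.5  # hypothetical per-second survival probability

def survival_probability(seconds):
    return p ** seconds

for horizon, label in [(3, "3 seconds"), (60, "1 minute"), (86_400, "1 day")]:
    print(f"P(still existing after {label}) = {survival_probability(horizon):.3g}")
# With p = 0.5 the one-minute figure is already around 1e-18 (and a day
# underflows to zero); under such a distribution nearly all expected
# experience lies in the next few moments.
```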
Uni30

When we have gained total control of all the matter down to every single particle within, say, our galaxy, and found out exactly what kinds of combinations we need to put particles together in to maximize the amount of happiness produced per particle used (and per spacetime unit), then what if we find ourselves faced with the choice between 1) maximizing happiness short term but not getting control over more of the matter in the universe at the highest possible rate (in other words, not expanding maximally fast in the universe), and 2) maximizing said expa... (read more)

4rkyeun
The maximum happy area for a happy rectangle is when both its happy sides are of equal happy length, forming a happy square.
0DSimon
This is an interesting problem. The correct solution probably lies somewhere in the middle: allocate X of our resources to expansion, and 1-X of our resources to taking advantage of our current scope.
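A toy model of the trade-off described here, with made-up numbers: if resources devoted to expansion compound while resources devoted to immediate happiness pay off linearly, the best split between the two depends heavily on the time horizon, which is the crux of the choice the parent comment poses. The growth rate and payoff function are invented for illustration.

```python
# Toy expansion-vs-exploitation model (all numbers invented for illustration).
# Each period, a fraction x of current resources goes to expansion (which
# grows the resource base) and 1 - x goes directly to producing happiness.

def total_happiness(x, periods, growth_rate=0.10, resources=1.0):
    happiness = 0.0
    for _ in range(periods):
        happiness += (1 - x) * resources          # immediate payoff
        resources += x * resources * growth_rate  # compounding expansion
    return happiness

for x in (0.0, 0.5, 0.9):
    print(f"x = {x:.1f}: 10 periods -> {total_happiness(x, 10):.1f}, "
          f"200 periods -> {total_happiness(x, 200):.1f}")
# Over short horizons pure exploitation (x = 0) wins; over long horizons a
# large expansion share dominates (though x = 1 would produce no happiness
# at all), so the "right" x turns on how far ahead one is optimising.
```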
Uni00

Eliezer_Yudkowsky wrote: "We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need how to tell that something is definitely not a person, and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living."

I do want A... (read more)

9nshepperd
We're talking about giving the models subjective experience, not just "emotions". You want the AI to create conscious minds inside itself and torture them to find out whether torture is bad? And then again every time it makes a decision where torture is a conceivable outcome? I'd hope we can give the AI a model that accurately predicts how humans react to stimuli without creating a conscious observer. Humans seem to be able to do that, at least. Beware of anthropomorphizing AIs. A Really Powerful Optimization Process shouldn't need to "suffer" for us to tell it what suffering is, and that we would like less of it.
Uni00

mtraven, why are we "bothering to be rational or to do anything at all" (rather than being nihilists) if nihilism seems likely to be valid? Well, as long as there is a chance, say, only a .0000000000000001 chance, that nihilism is invalid, there is nothing to lose and possibly something to gain from assuming that nihilism is invalid. This refutes nihilism completely as a serious alternative.

I think basically the same is true about Yudkowsky's fear that there are infinitely many copies of each person. Even if there is only a .0000000000000001 ch... (read more)
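The argument in this comment is a dominance argument over expected value; a minimal sketch with a placeholder probability and payoff (both invented here, echoing the comment's own tiny figure) makes the structure explicit.

```python
# Dominance sketch (placeholder numbers): if nihilism is true, nothing you do
# matters, so every action has value 0; if it is false, acting as though it
# is false has some positive expected value. Acting on the non-nihilist
# assumption therefore weakly dominates for any nonzero probability.
p_nihilism_false = 1e-16   # arbitrary tiny probability, as in the comment
value_if_false = 1.0       # placeholder for "something to gain"

ev_act = p_nihilism_false * value_if_false + (1 - p_nihilism_false) * 0.0
ev_nihilism = 0.0

assert ev_act > ev_nihilism  # nothing to lose, possibly something to gain
```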