The probability that the universe only has finite space is not exactly 1, is it? Much more might exist than our particular Hubble volume, no? What probability do, say, the world's top 100 physicists assign, on average, to the possibility that infinitely much matter exists? And on what grounds?
To my understanding, the universe might be so large that everything that could be described with infinitely many characters actually exists. That kind of "TOE" actually passes the Ockham's razor test excellently; if the universe is that large, then it could ...
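One way to unpack the Ockham's razor point (my own sketch of the standard algorithmic-information argument, not something taken from any poll of physicists): a theory that says "every describable universe exists" can need fewer bits than a theory that singles out exactly one universe, because the single-universe theory must also encode which universe it is. In Kolmogorov-complexity terms, with a short enumerator program of constant length c:

\[
K(\text{all computable universes}) \;\le\; c \;\ll\; K(\text{our physical laws}) + K(\text{our initial conditions}) \;\approx\; K(\text{this universe alone}).
\]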
So which form of good should altruists, governments, FAIs, and other agencies in the business of helping people respect?
Governments should give people what people say they want, rather than what the governments think will make people happier, whenever they can't do both. But this is not because it's intrinsically better for people to get what they want than to get what makes them happier (it isn't); it's because people will resent what they perceive as paternalism in governments, and because they won't pay taxes and obey laws in general if th...
Nothing which you have written appears to show that it's impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.
I'm not trying to show that. I agree that people try to get things they want, as long as by "things they want" we mean "things that they are tempted to go for because the thought of going for those things is so pleasurable".
...(something about X) --> (I like the thought of X) --> ...
In trusting your own judgment that building an AI based on how humans currently are would be a bad thing, you implicitly trust human nature, because you are a human and so presumably driven by human nature. This undermines your claim that a superhuman AI that is "merely more of everything that it is to be human" would be a worse thing than a human.
Sure, humans with power often use their power to make other humans suffer, but power imbalances would not, by themselves, cause humans to suffer were human brains not such that they can very easily...
No, I didn't just try to say that "people like the thought of getting what they want". The title of the article says "not for the sake of pleasure alone". I tried to show that that is false. Everything we do, we do for pleasure alone, or to avoid or decrease suffering. We never make a decision based on a want that is not in turn based on a like/dislike. All "wants" are servile consequences of "likes"/"dislikes", so I think "wants" should be treated as mere transitional steps, not as initial causes of our decisions.
The pleasure machine argument is flawed for a number of reasons:
1) It assumes that, despite never having been inside the pleasure machine, but having lots of experience of the world outside it, you could make an unbiased decision about whether to enter the pleasure machine or not. It's like asking someone whether he would move all his money from a bank he knows a lot about to a bank he knows basically nothing about, one that is merely claimed to make him richer than his current bank. I'm sure that if someone built a machine that, after I stepped into it...
Going for what you "want" is merely going for what you like the thought of. To like the thought of something is to like something (in this case the "something" that you like is the thought of something; a thought is also something). This means that wanting cannot happen unless there is liking that creates the wanting. So, of wanting and liking, liking is the only thing that can ever independently make us make any choice we make. Wanting which is not entirely contingent on liking never makes us make any decisions, because there is no suc...
Wrong compared to what? Compared to no sympathies at all? If that's what you mean, doesn't that imply that humans must be expected to make the world worse rather than better, whatever they try to do? Isn't that a rather counterproductive belief (assuming that you'd prefer that the world became a better place rather than not)?
AI with human sympathies would at least be based on something that has been tested and found to work through the ages, namely the human being as a whole, with all its flaws and merits. If you try to build the same thing but without those trai...
I recommend reading this sequence.
Thanks for recommending.
Suffice it to say that you are wrong, and power does not bring with it morality.
I have never assumed that "power brings with it morality" if by power we mean limited power. Some superhuman AI might very well be more immoral than humans are. I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and thus create whatever is theoretically possible for an almighty being to create, you will kno...
Suppose you'd say it would be wrong of me to make the haters happy "against their will". Why would that be wrong, if they would be happy to be happy once they have become happy? Should we not try to prevent suicides either? Not even the most obviously premature suicides, not even temporarily, not even only to make the suicide attempter rethink their decision a little more thoroughly?
Making a hater happy "against his will", with the result that he stops hating, is (I think) comparable to preventing a premature suicide in order to give th...
Thanks for explaining!
I didn't mean to split hairs at all. I'm surprised that so many here seem to take it for granted that, if one had unlimited power, one would choose to let other people retain some say and some autonomy. If I had to listen to anybody else in order to make the best possible decision about what to do with the world, that would mean I had less than unlimited power. Someone who has unlimited power will always do the right thing, by nature.
And besides:
Suppose I'd have less than unlimited power but still "...
Why was that downvoted by 3?
What I did was disprove Billy Brown's claim that "If you implement any single utopian vision everyone who wanted a different one will hate you". Was it wrong of me to do so?
If everything that can happen, happens (sooner or later) - which is assumed
100 people is practically nothing compared to the gazillions of future people whose lives are at stake. I agree with Robin Hanson: think carefully for a very long time. Sacrifice the 100 people per minute for some years if you need to. But you wouldn't need to. With unlimited power, it should be possible to freeze the world (except yourself, your computer, the power supply and food you need, et cetera) to absolute zero temperature for an indefinite time, to get enough time to think about what to do with the world.
Or rather: with unlimited power, you would kno...
If you use your unlimited power to make everyone including yourself constantly happy by design, and reprogram the minds of everybody into always approving of whatever you do, nobody will complain or hate you. Make every particle in the universe cooperate perfectly to maximize the amount of happiness in all future spacetime (and in the past as well, if time travel is possible when you have unlimited power). Then there would be no need for free will or individual autonomy for anybody anymore.
Eliezer, you may not feel ready to be a father to a sentient AI, but do you agree that many humans are sufficiently ready to be fathers and mothers to ordinary human kids? Or do you think humans should stop procreating, for the sake of not creating beings that can suffer? Why care more about a not yet existing AI's future suffering than about not yet existing human kids' future suffering?
From a utilitarian perspective, initially allowing, for some years, suffering to occur in an AI that we build is a low price to pay for making possible the utopia that fut...
Eliezer_Yudkowsky wrote: "I want to reply, "But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising."
One will feel surprised by winning a million dollars in the lottery too, but that doesn't mean it would be rational to assume that, just because one won a million dollars in the lottery, most people win a million dollars in the lottery.
Maybe most of us exist only for a fraction of a second, but in that case, what is there to lose by (probably falsely, but m...
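To spell out the lottery analogy with toy numbers (my own illustration, not anything from the original thread): surprise at an observation reflects its low prior probability, not the frequency of that observation among observers. With N = 10^6 tickets and exactly one winner,

\[
P(\text{I win}) = \frac{1}{N} = 10^{-6}, \qquad \frac{\#\text{winners}}{N} = 10^{-6} \ \text{whether or not I win},
\]

so my winning, however surprising to me, tells me nothing about most people winning.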
When we have gained total control of all the matter down to every single particle within, say, our galaxy, and found out exactly which combinations we need to put particles into to maximize the amount of happiness produced per particle used (and per spacetime unit), then what if we find ourselves faced with a choice between 1) maximizing happiness short term but not getting control over more of the matter in the universe at the highest possible rate (in other words, not expanding maximally fast in the universe), and 2) maximizing said expa...
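A toy model of that trade-off (my own placeholder numbers and functional forms, nothing established): suppose staying put yields a constant happiness rate h_0, while expanding at speed v makes usable matter, and hence the happiness rate, grow roughly as k(vt)^3. Then over a horizon T,

\[
\int_0^T h_0 \, dt = h_0 T \qquad \text{vs.} \qquad \int_0^T k (v t)^3 \, dt = \frac{k v^3 T^4}{4},
\]

and for a long enough horizon the expansion strategy dominates no matter how small k is, which is what makes the dilemma bite.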
Eliezer_Yudkowsky wrote: "We don't want the AI's models of people to be people - we don't want conscious minds trapped helplessly inside it. So we need how to tell that something is definitely not a person, and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living."
I do want A...
mtraven: Why are we "bothering to be rational or to do anything at all" (rather than being nihilists) if nihilism seems likely to be valid? Well, as long as there is a chance, say only a .0000000000000001 chance, that nihilism is invalid, there is nothing to lose and possibly something to gain from assuming that nihilism is invalid. This refutes nihilism completely as a serious alternative.
I think basically the same is true about Yudkowsky's fear that there are infinitely many copies of each person. Even if there is only a .0000000000000001 ch...
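As a toy expected-value version of this wager (my framing; the probability is just the placeholder from above): let p > 0 be the chance that nihilism is invalid and V > 0 the value at stake if it is. Then

\[
EV(\text{act as if things matter}) = pV + (1-p)\cdot 0 = pV \;>\; 0 = EV(\text{act as a nihilist}),
\]

so acting as if things matter weakly dominates for any positive p, however tiny.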
I'm a Swedish reader. A meetup in Stockholm would be great!