The one ring of power sits before us on a pedestal; around it stand a dozen folks of all races. I believe that whoever grabs the ring first becomes invincible, all-powerful. If I believe we cannot make a deal, that someone is about to grab it, then I have to ask myself whether I would wield such power better than whoever I guess will grab it if I do not. If I think I'd do a better job, yes, I grab it. And I'd accept that others might consider that an act of war against them; thinking that way, they may well kill me before I get to the ring.
With the ring,...
I'm not asking you if you'll take the Ring, I'm asking what you'll do with the Ring. It's already been handed to you.
Take advice? That's still something of an evasion. What advice would you offer you? You don't seem quite satisfied with what (you think is) my plan for the Ring - so you must already have an opinion of your own - what would you change?
Eliezer, I haven't meant to express any dissatisfaction with your plans to use a ring of power. And I agree that someone should be working on such plans even if the chances of it happening are rather small. So I approve of your working on such plans. My objection is only that if enough people overestimate the chance of such a scenario, it will divert too much attention from other important scenarios. I similarly think global warming is real, worthy of real attention, but that it diverts too much attention from other future issues.
This is a great device for illustrating how devilishly hard it is to do anything constructive with such overwhelming power, yet not be seen as taking over the world. If you give each individual whatever they want, you've just destroyed every variety of collectivism or traditionalism on the planet, and those who valued those philosophies will curse you. If you implement any single utopian vision everyone who wanted a different one will hate you, and if you limit yourself to any minimal level of intervention everyone who wants larger benefits than you provid...
Infinity screws up a whole lot of this essay. Large-but-finite is way way harder, as all the "excuses", as you call them, become real choices again. You have to figure out whether to monitor for potential conflicts, including whether to allow others to take whatever path you took to such power. Necessity is back in the game.
I suspect I'd seriously consider just tiling the universe with happy faces (very complex ones, but still probably not what the rest of y'all think you want). At least it would be pleasant, and nobody would complain.
This question is a bit off-topic and I have a feeling it has been covered in a batch of comments elsewhere, so if it has, would someone mind directing me to it? My question is this: Given the existence of the multiverse, shouldn't there be some universe out there in which an AI has already gone FOOM? If it has, wouldn't we see the effects of it in some way? Or have I completely misunderstood the physics?
And Eliezer, don't lie, everybody wants to rule the world.
Okay, you don't disapprove. Then consider the question one of curiosity. If Tyler Cowen acquired a Ring of Power and began gathering a circle of advisors, and you were in that circle, what specific advice would you give him?
Eliezer, I'd advise no sudden moves; think very carefully before doing anything. I don't know what I'd think after thinking carefully, as otherwise I wouldn't need to do it. Are you sure there isn't some way to delay thinking on your problem until after it appears? Having to have an answer now when it seems an likely problem is very expensive.
What about a kind of market system of states? The purpose of the states will be to provide a habitat matching each citizen's values and lifestyle.
-Each state will have its own constitution and rules.
-Each person can pick the state they wish to live in, assuming they are accepted in based on the state's rules.
-The amount of resources and territory allocated to each state is proportional to the number of citizens that choose to live there.
-There are certain universal meta-rules that supersede the states' rules, such as...
-A citizen may leave a sta...
I'm glad to hear that you aren't trying to take over the world. The less competitors I have, the better.
@lowly undergrad
Perhaps you're thinking of The Great Filter (http://hanson.gmu.edu/greatfilter.html)?
"Eliezer, I'd advise no sudden moves; think very carefully before doing anything."
But about 100 people die every minute!
PK: I like your system. One difficulty I notice is that you have thrust the states into the role of the omniscient player in the Newcomb problem. Since the states are unable to punish members beyond expelling them, they are open to 'hit and run' tactics. They are left with the need to predict accurately which members and potential members will break a rule, 'two-box', and be a net loss to the state with no possibility of punishment. They need to choose people who can one-box and stay for the long haul. Executions and life imprisonment are simpler, from a game-theoretic perspective.
James, it's ok. I have unlimited power and unlimited precision. I can turn back time. At least, I can rewind the state of the universe such that you can't tell the difference (http://lesswrong.com/lw/qp/timeless_physics/).
Tangentially, does anyone know what I'm talking about if I lament how much of Eliezer's thought stream ran through my head, prompted by Sparhawk?
Eliezer: Let's say that someone walks up to you and grants you unlimited power.
Let's not exaggerate. A singleton AI wielding nanotech is not unlimited power; it is merely a Big Huge Stick with which to apply pressure to the universe. It may be the biggest stick around, but it's still operating under the very real limitations of physics - and every inch of potential control comes with a cost of additional invasiveness.
Probably the closest we could come to unlimited power, would be pulling everything except the AI into a simulation, and allowing for arbitra...
But about 100 people die every minute!
If you have unlimited power, and aren't constrained by current physics, then you can bring them back. Of course, some of them won't want this.
Now, if you have (as I'm interpreting this article) unlimited power, but your current faculties, then embarking on a program to bring back the dead could (will?) backfire.
I think Sparhawk was a fool. But you need to remember that, internally, he was basically medieval. Also, externally, you need to remember that Eddings is only an English professor and fantasy writer.
It's probably not the worst tradeoff, being cursed only by those who feel their values should take precedence over those of other people.
Why should your values take precedence over theirs? It sounds like you're asserting that tyranny > collectivism.
@Cameron: Fictional characters with unlimited power sure act like morons, don't they?
Singularitarians: The Munchkins of the real universe.
Sorry for being off-topic, but has that 3^^^^3 problem been solved already? I just read the posts and, frankly, I fail to see why this caused so many problems.
Among the things that Jaynes repeats a lot in his book is that the sum of all probabilities must be 1. Hence, if you put probability somewhere, you must remove some elsewhere. What is the prior probability for "me being able to simulate/kill 3^^^^3 persons/pigs"? Let's call that nonzero number "epsilon". Now, I guess that the (3^^^^3)-1 case should have a probability greater or eq...
Pierre, it is not true that all probabilities sum to 1. Only for an exhaustive set of mutually exclusive events must the probability sum to 1.
Sorry, I have not been specific enough. Each of my 3^^^^3, 3^^^^3-1, 3^^^^3-2, etc. examples is mutually exclusive (but the sofa is part of the "0" case). While they might not span all possibilities (not exhaustive) and could thus sum to less than one, they cannot sum to more than 1. As I see it, the weakest assumption here is that "more persons/pigs is equally or less likely". If this holds, the "worst case scenario" is epsilon=1/(3^^^^3), but I would guess it is far less than that.
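To spell out the bound I am relying on (a sketch, assuming the cases are mutually exclusive and their probabilities are non-increasing as the number of persons/pigs grows):
\[
p_1 \ge p_2 \ge \dots \ge p_N, \qquad \sum_{k=1}^{N} p_k \le 1
\quad\Longrightarrow\quad
N\,p_N \le \sum_{k=1}^{N} p_k \le 1
\quad\Longrightarrow\quad
p_N \le \frac{1}{N}.
\]
With N = 3^^^^3, this gives the worst case epsilon <= 1/(3^^^^3) mentioned above.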
To ask what God should do to make people happy, I would begin by asking whether happiness or pleasure are coherent concepts in a future in which every person had a Godbot to fulfill their wishes. (This question has been addressed many times in science fiction, but with little imagination.) If the answer is no, then perhaps God should be "unkind", and prevent desire-saturation dynamics from arising. (But see the last paragraph of this comment for another possibility.)
What things give us the most pleasure today? I would say, sex, creative activ...
"But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it."
"If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?"
"Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end i...
Phil:
"If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge."
This is inconsistent. What conflict would really do is provide new information to process ("knowledge").
I guess I can agree with the rest of the post. What IMO is worth pointing out is that most pleasures, hormones and instincts excluded, are about processing 'interesting' information.
I guess, somewhere deep in all sentient beings, "interesting information" ar...
Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?
Pierre, the proposition, "I am able to simulate 3^^^^3 people" is not mutually exclusive with the proposition "I am able to simulate 3^^^^3-1 people."
If you meant to use the propositions D_N: "N is the maximum number of people that I can simulate", then yes, all the D_N's would be mutually exclusive. Then if you assume that P(D_N) ≤ P(D_N-1) for all N, you can indeed derive that P(D_3^^^^3) ≤ 1/3^^^^3. But P("I am able to simulate 3^^^^3 people") = P(D_3^^^^3) + P(D_3^^^^3+1) + P(D_3^^^^3+2) + ..., which you don't have an upper bound for.
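Spelling out that last step (a sketch using the D_N notation above, with M standing for 3^^^^3):
\[
P(\text{"I am able to simulate } M \text{ people"}) \;=\; \sum_{N \ge M} P(D_N) \;=\; P(D_M) + P(D_{M+1}) + P(D_{M+2}) + \dots
\]
The monotonicity assumption bounds each individual term (P(D_N) ≤ 1/N), but it puts no small bound on the tail sum itself, which could still be close to 1.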
An expected utility maximizer would know exactly what to do with unlimited power. Why do we have to think so hard about it? The obvious answer is that we are adaptation-executers, not utility maximizers, and we don't have an adaptation for dealing with unlimited power. We could try to extrapolate a utility function from our adaptations, but given that those adaptations deal only with a limited set of circumstances, we'll end up with an infinite set of possible utility functions for each person. What to do?
James D. Miller: But about 100 people die every...
"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"
It is not about what YOU define as right.
Anyway, considering that Eliezer is an existing self-aware sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, makes me suppose that some other powerful strong sentient self-aware GI should reach the same point. I also believe that more general intelligence makes GI converge to such "right thinking".
What m...
Wei,
Is it any safer to think ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?
I'm thinking here of studies I half-remember about people preferring lottery tickets whose numbers they made up to randomly chosen lottery tickets, and about people thinking themselves safer if they have the steering wheel than if equally competent drivers have the steering wheel. (I only half-remember the studies; don't trust the details.) Do you think a bias like that is involved in your preference for doing the thinking ourselves, or is there reason to expect a better outcome?
Robin wrote: Having to have an answer now when it seems an likely problem is very expensive.
(I think you meant to write "unlikely" here instead of "likely".)
Robin, what is your probability that eventually humanity will evolve into a singleton (i.e., not necessarily through Eliezer's FOOM scenario)? It seems to me that competition is likely to be unstable, whereas a singleton by definition is not. Competition can evolve into a singleton, but not vice versa. Given that negentropy increases as mass squared, most competitors have to remain in t...
Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.
I quote:
"The young revolutionary's belief is honest. There will be no betraying catch in his throat, as he explains why the tribe is doomed at the hands of the old and corrupt, unless he is given power to set things right. Not even subconsciously does he think, "And then, once I obtain power, I will strangely begin to resemble that old corrupt guard, abusing my power to increase my inclusive genetic fitness."
"no sudden moves; think very carefully before doing anything" - doesn't that basically amount to an admission that human minds aren't up to this, that you ought to hurriedly self-improve just to avoid tripping over your own omnipotent feet?
This presents an answer to Eliezer's "how much self improvement?": there has to be some point at which the question "what to do" becomes fully determined and further improvement is just re-proving the already proven. So you improve towards that point and stop.
This is a general point concerning Robin's and Eliezer's disagreement. I'm posting it in this thread because this thread is the best combination of relevance and recentness.
It looks like Robin doesn't want to engage with simple logical arguments if they fall outside of established, scientific frameworks of abstractions. Those arguments could even be damning critiques of (hidden assumptions in) those abstractions. If Eliezer were right, how could Robin come to know that?
I think Robin's implied suggestion is to not be so quick to discard the option of building an AI that can improve itself in certain ways, but not to the point of needing to hardcode something like Coherent Extrapolated Volition. Is it really impossible to make an AI that can become "smarter" in useful ways (including by modifying its own source code, if you like), without it ever needing to take decisions itself that have severe nonlocal effects? If intelligence is an optimization process, perhaps we can choose more carefully what is being optim...
What if creating a friendly AI isn't about creating a friendly AI?
I may prefer Eliezer to grab the One Ring over others who are also trying to grab it, but that does not mean I wouldn't rather see the ring destroyed, or divided up into smaller bits for more even distribution.
I haven't met Eliezer. I'm sure he's a pretty nice guy. But do I trust him to create something that may take over the world? No, definitely not. I find it extremely unlikely that selflessness is the causal factor behind his wanting to create a friendly AI, despite how much he may claim so or how much he may believe so. Genes and memes do not reproduce via selflessness.
Peter de Blanc: You are right and I came to the same conclusion while walking this morning. I was trying to simplify the problem in order to easily obtain numbers <=1/(3^^^^3), which would solve the "paradox". We now agree that I oversimplified it.
Instead of messing with a proof-like approach again, I will try to clarify my intuition. When you start considering events of that magnitude, you must consider a lot of events (including waking up with blue tentacles as hands to take Eliezer's example). The total probability is limited to 1 for exclu...
Wei, yes I meant "unlikely." Bo, you and I have very different ideas of what "logical" means. V.G., I hope you will comment more.
Grant: We did not evolve to handle this situation. It's just as valid to say that we have an opportunity to exploit Eliezer's youthful evolved altruism, get him or others like him to make an FAI, and thereby lock himself out of most of the potential payoff. Idealists get corrupted, but they also die for their ideals.
I have been granted almighty power, constrained only by the most fundamental laws of reality (which may, or may not, correspond with what we currently think about such things).
What do I do? Whatever it is that you want me to do. (No sweat off my almighty brow.)
You want me to kill thy neighbour? Look, he's dead. The neighbour doesn't even notice he's been killed ... I've got almighty power, and have granted his wish too, which is to live forever. He asked the same about you, but you didn't notice either.
In a universe where I have "almighty" power,...
Anna Salamon wrote: Is it any safer to think ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?
First, I don't know that "think about how to extend our adaptation-executer preferences" is the right thing to do. It's not clear why we should extend our adaptation-executer preferences, especially given the difficulties involved. I'd backtrack to "think about what we should want".
Putting that aside, the reason that I prefer we d...
If living systems can unite, they can also be divided. I don't see what the problem with that idea could be.
Hmm, there are a lot of problems here.
"Unlimited power" is a non-starter. No matter how powerful the AGI is it will be of finite power. Unlimited power is the stuff of theology not of actually achievable minds. Thus the ditty from Epicurus about "God" does not apply. This is not a trivial point. I have a concern Eliezer may get too caught up in these grand sagas and great dilemnas on precisely such a theological absolutist scale. Arguing as if unlimited power is real takes us well into the current essay.
"Wieldable with...
"Give it to you" is a pretty lame answer but I'm at least able to recognise the fact that I'm not even close to being a good choice for having it.
That's more or less completely ignoring the question but the only answers I could ever come up with at the moment are what I think you call cached thoughts here.
Now this is the $64 google-illion question!
I don't agree that the null hypothesis (take the ring and do nothing with it) is evil. My definition of evil is coercion leading to loss of resources, up to and including loss of one's self. Thus absolute evil is loss of one's self across humanity, which includes as one use case humanity's extinction (but is not limited to humanity's extinction, obviously, because being converted into zimboes isn't technically extinction...)
Nobody can deny that the likes of Gaddafi exist in the human population: those who are intereste...
What would you do with unlimited power?
Perhaps "Master, you now hold the ring, what do you wish me to turn the universe into?" isn't a question you have to answer all at once.
Perhaps the right approach is to ask yourself "What is the smallest step I can take that has the lowest risk of not being a strict improvement over the current situation?"
For example, are we less human or compassionate now we have Google available, than we were before that point?
Supposing an AI researcher, a year before the Google search engine was made availabl...
I assume we're actually talking NEARLY unlimited power: no actually time-traveling to when you were born and killing yourself just to solve the grandfather paradox once and for all; given information theory, and the power to bypass the "no quantum xerox" limitation, I could effectively reset the relevant light cone and run it in read-only mode to gather the information needed to make an afterlife for everyone who's died... if I could also figure out how to pre-prune the run to ensure it winds up at exactly the same branch.
But move one is to hit...
Followup to: What I Think, If Not Why
My esteemed co-blogger Robin Hanson accuses me of trying to take over the world.
Why, oh why must I be so misunderstood?
(Well, it's not like I don't enjoy certain misunderstandings. Ah, I remember the first time someone seriously and not in a joking way accused me of trying to take over the world. On that day I felt like a true mad scientist, though I lacked a castle and hunchbacked assistant.)
But if you're working from the premise of a hard takeoff - an Artificial Intelligence that self-improves at an extremely rapid rate - and you suppose such extra-ordinary depth of insight and precision of craftsmanship that you can actually specify the AI's goal system instead of automatically failing -
- then it takes some work to come up with a way not to take over the world.
Robin talks up the drama inherent in the intelligence explosion, presumably because he feels that this is a primary source of bias. But I've got to say that Robin's dramatic story does not sound like the story I tell of myself. There, the drama comes from tampering with such extreme forces that every single idea you invent is wrong. The standardized Final Apocalyptic Battle of Good Vs. Evil would be trivial by comparison; then all you have to do is put forth a desperate effort. Facing an adult problem in a neutral universe isn't so straightforward. Your enemy is yourself, who will automatically destroy the world, or just fail to accomplish anything, unless you can defeat you. That is the drama I crafted into the story I tell myself, for I too would disdain anything so cliched as Armageddon.
So, Robin, I'll ask you something of a probing question. Let's say that someone walks up to you and grants you unlimited power.
What do you do with it, so as to not take over the world?
Do you say, "I will do nothing - I take the null action"?
But then you have instantly become a malevolent God, as Epicurus said:
Is God willing to prevent evil, but not able? Then he is not omnipotent.
Is he able, but not willing? Then he is malevolent.
Is he both able and willing? Then whence cometh evil?
Is he neither able nor willing? Then why call him God?
Peter Norvig said, "Refusing to act is like refusing to allow time to pass." The null action is also a choice. So have you not, in refusing to act, established all sick people as sick, established all poor people as poor, ordained all in despair to continue in despair, and condemned the dying to death? Will you not be, until the end of time, responsible for every sin committed?
Well, yes and no. If someone says, "I don't trust myself not to destroy the world, therefore I take the null action," then I would tend to sigh and say, "If that is so, then you did the right thing." Afterward, murderers will still be responsible for their murders, and altruists will still be creditable for the help they give.
And to say that you used your power to take over the world by doing nothing to it seems to stretch the ordinary meaning of the phrase.
But it wouldn't be the best thing you could do with unlimited power, either.
With "unlimited power" you have no need to crush your enemies. You have no moral defense if you treat your enemies with less than the utmost consideration.
With "unlimited power" you cannot plead the necessity of monitoring or restraining others so that they do not rebel against you. If you do such a thing, you are simply a tyrant who enjoys power, and not a defender of the people.
Unlimited power removes a lot of moral defenses, really. You can't say "But I had to." You can't say "Well, I wanted to help, but I couldn't." The only excuse for not helping is if you shouldn't, which is harder to establish.
And let us also suppose that this power is wieldable without side effects or configuration constraints; it is wielded with unlimited precision.
For example, you can't take refuge in saying anything like: "Well, I built this AI, but any intelligence will pursue its own interests, so now the AI will just be a Ricardian trading partner with humanity as it pursues its own goals." Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end in itself.
And you cannot take refuge in saying, "In invoking this power, the reins of destiny have passed out of my hands, and humanity has passed on the torch." Sorry, you haven't created a new person yet - not unless you deliberately invoke the unlimited power to do so - and then you can't take refuge in the necessity of it as a side effect; you must establish that it is the right thing to do.
The AI is not necessarily a trading partner. You could make it a nonsentient device that just gave you things, if you thought that were wiser.
You cannot say, "The law, in protecting the rights of all, must necessarily protect the right of Fred the Deranged to spend all day giving himself electrical shocks." The power is wielded with unlimited precision; you could, if you wished, protect the rights of everyone except Fred.
You cannot take refuge in the necessity of anything - that is the meaning of unlimited power.
We will even suppose (for it removes yet more excuses, and hence reveals more of your morality) that you are not limited by the laws of physics as we know them. You are bound to deal only in finite numbers, but not otherwise bounded. This is so that we can see the true constraints of your morality, apart from your being able to plead constraint by the environment.
In my reckless youth, I used to think that it might be a good idea to flash-upgrade to the highest possible level of intelligence you could manage on available hardware. Being smart was good, so being smarter was better, and being as smart as possible as quickly as possible was best - right?
But when I imagined having infinite computing power available, I realized that no matter how large a mind you made yourself, you could just go on making yourself larger and larger and larger. So that wasn't an answer to the purpose of life. And only then did it occur to me to ask after eudaimonic rates of intelligence increase, rather than just assuming you wanted to immediately be as smart as possible.
Considering the infinite case moved me to change the way I considered the finite case. Before, I was running away from the question by saying "More!" But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it.
Similarly with population: If you invoke the unlimited power to create a quadrillion people, then why not a quintillion? If 3^^^3, why not 3^^^^3? So you can't take refuge in saying, "I will create more people - that is the difficult thing, and to accomplish it is the main challenge." What is individually a life worth living?
You can say, "It's not my place to decide; I leave it up to others" but then you are responsible for the consequences of that decision as well. You should say, at least, how this differs from the null act.
So, Robin, reveal to us your character: What would you do with unlimited power?