Why is anthropic capture considered more likely than misanthropic capture? If the AI supposes it may be in a simulation and wants to please the simulators, it doesn't follow that the simulators have the same values as we do.
With very little experimentation, an AGI could find out almost instantly, given that it has unfalsified knowledge of the laws of physics. For today's virtual worlds: take a second mirror into a bathroom. If you see yourself repeated many times in the mirrored mirror, you are in the real world. Simulated ray tracing cancels rays after a finite number of reflections. Other physical phenomena will show similar discrepancies with their simulated counterparts.
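As a toy illustration (a hypothetical sketch, not code from any real rendering engine): a recursive ray tracer typically carries a hard depth cap, which is exactly why two facing mirrors in such a simulation would show only a fixed number of images of you.

```python
# Toy sketch: a depth-capped renderer produces only finitely many nested
# reflections between two facing mirrors. The names and the cap value are
# illustrative; real engines differ, but the recursion cutoff is standard.

MAX_DEPTH = 5  # reflections beyond this depth are simply not traced

def count_mirror_images(depth=0):
    """How many nested reflections a depth-capped ray tracer will render."""
    if depth >= MAX_DEPTH:
        return 0  # the ray is cancelled here; deeper images never appear
    return 1 + count_mirror_images(depth + 1)

print(count_mirror_images())  # -> 5, whereas real mirrors fade gradually with each bounce
```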
An AGI can easily distinguish where it is: it can use its electronic hardware for some experimentation. It might similarly be possible to detect a nested simulation.
That would depend on it knowing what real-world physics to expect.
What do you disagree with most in this section?
I feel like there are malignant failure modes beyond the categories mentioned by Bostrom. Perhaps it would be sensible to try to break down the topic systematically. Here's one attempt.
Design by fools: the AI does what you ask, but you asked for something clearly unfriendly.
Perverse instantiation & infrastructure profusion: the AI does what you ask, but what you ask turns out to be most satisfiable in unforeseen destructive ways, such as redirecting most resources to its infrastructure at our expense.
Partial perverse instantiation & mind crime: the AI does what you ask, which includes both friendly behavior and unfriendly behavior, such as badly treating simulations that have moral status in order to figure out how to treat you well.
Partial instantiation: though the total of what you ask seems friendly, some of what you ask is impossible, the AI does the rest, and the result is imbalanced to an unfriendly degree.
Value drift: changes occur to the AI's code such that it no longer does what you ask.
Regarding the "treacherous turn" and the "conception of deception", I've previously proposed the following empirical method of testing friendliness:
- If only one AI is available, use a double-blind protocol, such that the friendliness of an AI is evaluated by someone who does not know that this is her task, on an AI that is forbidden from conditioning its actions on whether it is being judged worthy of existing.
- Much, much better is to make many different AIs with many different attempts at friendly utility functions, inform them that you have done so, and ask each of them to develop tests to prove that the others are unfriendly and that it is friendly in some way important to us, telling them that all current AIs will be deleted but that future versions will be preferentially based on the utility function of the friendliest current AI(s). Once those tests have been run, but before deleting the AIs, ask them to produce friendlier, human-understandable versions of their utility functions for an AI in the next round of testing; the friendlier and less unfriendly an AI was proven to be, the more entries it gets in the next round.
In the second case, each AI cares about maximizing achievement of its own utility function, not about whether it does that achieving itself. Thus this set-up should encourage each AI to design the friendliest version of itself that it can. It's a competition that optimizes for friendliness! (Or at least for difficulty of being proved unfriendly.) The early rounds should be conducted with tight limits on computing resources, but each subsequent round with (presumably) safer AIs can be given more computing resources.
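To make the loop explicit, here is a schematic toy sketch of that tournament. Everything in it (the scoring stub, the way entries are awarded, the number of rounds) is a placeholder of my own, not a spec; in the actual protocol the AIs design the tests themselves and human judges run and score them.

```python
import random

# Schematic toy sketch of the proposed friendliness tournament.
# The stubs below stand in for the real procedures.

def mutual_testing(candidates):
    """Placeholder: score each candidate's demonstrated friendliness in [0, 1]."""
    return [random.random() for _ in candidates]

def propose_friendlier_version(utility_fn):
    """Placeholder: the AI rewrites its utility function to be friendlier and human-readable."""
    return utility_fn + "'"

def friendliness_tournament(initial_utility_functions, rounds=3):
    candidates = list(initial_utility_functions)
    for _ in range(rounds):
        # Early rounds would run under tight compute limits, loosened later.
        scores = mutual_testing(candidates)
        next_round = []
        for utility_fn, score in zip(candidates, scores):
            entries = 1 + round(3 * score)  # friendlier candidates seed more entries
            next_round += [propose_friendlier_version(utility_fn)] * entries
        candidates = next_round             # the current AIs are then deleted
    return candidates

print(friendliness_tournament(["U_A", "U_B", "U_C"]))
```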
Worse, if I take the first move and openly (e.g. on my resume or cover letter) inform the company of my willingness to work on the cheap, they would assume that I am signalling being a very low-quality engineer, which is very far from the case.
Are you very confident that this is the inevitable signal? I'd imagine that if you gave a well-worded explanation of your circumstances, this would not be so likely. Consider that your resume is not the sole means of communication available to you; this is not necessarily a one-shot exchange of information. You could, for example, ask within your resume for them to use the interview to verify your claims of competency despite your willingness to accept a low salary. Or you could try to speak to someone in person.
If I am naively other-optimizing, please let me know. Apologies if so. I hope that's not the case, and that you find this potentially helpful.
The specific example you gave doesn't sound promising, but you're entirely correct in the broader sense that my original post was unimaginative regarding possible solutions.
EDIT: It was worth an empirical try, so I tried your recommendation on a subset of applications. Zero responses from that group of companies.
So don't bother with CEV of humanity-as-a-whole. Go Archipelago-style and ask the AI to implement the CEV for "each community of humans". Or if you want to avoid being speciesist, ask the AI to implement CEV for "each salient group of living things in proportion to that group's moral weight". The original paper on CEV gives some considerations in its favor, but there is no claim that CEV is the Right Answer. It can be and should be improved upon incrementally.
I've gone ahead and tried to flesh out this idea. It became so different from CEV that it needed a different name, so for now I'm calling it Constrained Universal Altruism. (This is the second revision.) Unfortunately I can't indent, but I've tried to organize the text as the comment formatting allows.
If anyone wants to criticize it by giving an example of how an AI operating on it could go horribly wrong, I'd be much obliged.
Constrained Universal Altruism:
- (0) For each group of one or more things, do what the group's actual and ideal mind (AIM) would have you do given a moral and practical proportion of your resources (MPPR), subject to the domesticity constraints (DCs).
- (1) The AIM of a group is what is in common between the group's current actual mind (CAM) and extrapolated ideal mind (EIM).
- (1a) The CAM of a group is the group's current mental state, especially their thoughts and wishes, according to what they have observably or verifiably thought or wished, interpreted as they currently wish it to be interpreted, where these thoughts and wishes agree rather than disagree.
- (1b) The EIM of a group is what you extrapolate the group's mental state would be, especially their thoughts and wishes, if they understood what you understand, if their values and desires were more consistently what they wish they were, and if they reasoned as well as you reason, where these thoughts and wishes agree rather than disagree.
- (2) The MPPR for a group is the product of the group's salience, the group's moral worth, the population change factor (PCF), the total resource factor (TRF), and the necessity factor (NF), plus the group's net voluntary resource redistribution (NVRR); a formula sketch follows the list.
- (2a) The salience of a group is the Solomonoff prior for your function for determining membership in the group.
- (2b) The moral worth of a group is the weighted sum of information that the group knows about itself, where each independent piece of information is weighted by the reciprocal of the number of groups that know it.
- (2c) The PCF of a group is a scalar in the range [0,1] and is set according to the ratified new population constraint (RNPC).
- (2d) The TRF is the same for all groups, and is a scalar chosen so that the sum of the MPPRs of all groups would total 100% of your resources if the NF were 1.
- (2e) The NF is the same for all groups, and is a scalar in the range [0,1], and the NF must be set as high as is consistent with ensuring your ability to act in accord with the CUA; resources freed for your use by an NF less than 1 must be used to ensure your ability to act in accord with the CUA.
- (2f) The NVRR of a group is the amount of MPPR from other groups delegated to that group minus the MPPR from that group delegated to other groups. If the AIM of any group wishes it, the group may delegate an amount of their MPPR to another group.
- (3) The DCs include the general constraint (GC), the ratified mind integrity constraint (RMIC), the resource constraint (RC), the negative externality constraint (NEC), the ratified population change constraint (RPCC), and the ratified interpretation integrity constraint (RIIC).
- (3a) The GC prohibits you from taking any action not authorized by the AIM of one or more groups, and also from taking any action with a group's MPPR not authorized by the AIM of that group.
- (3b) The RMIC prohibits you from altering or intending to alter the EIM or CAM of any group except insofar as the AIM of a group requests otherwise.
- (3c) The RC prohibits you from taking or intending any action that renders resources unusable by a group to a degree contrary to the plausibly achievable wishes of a group with an EIM or CAM including wishes that they use those resources themselves.
- (3d) The NEC requires you, insofar as the AIMs of different groups conflict, to act for each according to the moral rules determined by the EIM of a group composed of those conflicting groups.
- (3e) The RPCC requires you to set the PCF of each group so as to prohibit increasing the MPPR of any group due to population increases or decreases, except that the PCF is at minimum set to the current Moral Ally Quotient (MAQ), where MAQ is the quotient of the sum of MPPRs of all groups with EIMs favoring nonzero PCF for that group divided by your total resources.
- (3f) The RIIC requires that the meaning of the CUA is determined by the EIM of the group with the largest MPPR that includes humans and for which the relevant EIM can be determined.
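Spelling out clause (2) as arithmetic, in my own notation (a sketch only, not part of the proposal itself):

$$\mathrm{MPPR}_g \;=\; S_g \cdot W_g \cdot \mathrm{PCF}_g \cdot \mathrm{TRF} \cdot \mathrm{NF} \;+\; \mathrm{NVRR}_g$$

where $S_g$ is the group's salience (2a), $W_g$ its moral worth (2b), and the $\mathrm{NVRR}_g$ terms net to zero when summed over all groups. One reading of (2b) is $W_g = \sum_{i \in I_g} 1/n_i$, where $I_g$ is the set of independent pieces of information the group knows about itself and $n_i$ is the number of groups that know piece $i$. Per (2d), $\mathrm{TRF}$ is the single scalar satisfying $\sum_g S_g\, W_g\, \mathrm{PCF}_g \cdot \mathrm{TRF} = R_{\mathrm{total}}$, i.e. the MPPRs would sum to 100% of the AI's resources if $\mathrm{NF}$ were 1.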
My commentary:
CUA is "constrained" due to its inclusion of permanent constraints, "universal" in the sense of not being specific to humans, and "altruist" in that it has no terminal desires for itself but only for what other things want it to do.
Like CEV, CUA is deontological rather than consequentialist or virtue-theoretic. Strict rules seem safer, though I don't clearly know why. Possibly, as in Scott Alexander's thrive-survive axis, we fall back on strict rules when survival is at stake.
CUA specifies that the AI should do as people would have the AI do, rather than specifying that the AI should implement their wishes. The thinking is that they may have many wishes they want to accomplish themselves or that they want their loved ones to accomplish.
AIM, EIM, and CAM generalize CEV's talk of "wishes" to include all manner of thoughts and mind states.
EIM is essentially CEV without the line about interpretation, which was instead added to CAM. The thinking is that, if people get to interpret CEV however they wish, many will disagree with their extrapolation and demand it be interpreted only in the way they say. EIM also specifies how people's extrapolations are to be idealized, in less poetic, somewhat more specific terms than CEV. EIM is important in addition to CAM because we do not always know or act on our own values.
CAM is essentially another constraint. The AI might get the EIM wrong, but more likely is that we would be unable to tell whether the AI got EIM right, so restricting the AI to do what we've actually demonstrated we currently want is intended to provide reassurance that our actual selves have some control, rather than just the AI's simulations of us. The line about interpretation here is to guide the AI toward doing what we mean rather than what we say, hopefully preventing monkey's-paw scenarios. CAM could also serve to focus the AI on specific courses of action if the AI's extrapolations of our EIM diverge rather than converge. CAM is worded so as not to require that the person directly ask the AI, in case the askers are unaware that they can ask the AI or incapable of doing so; this way the AI could not be kept secret and used for the selfish purposes of a few people.
Salience is included because it's not easy to define “humanity” and the AI may need to make use of multiple definitions each with slightly different membership. Not every definition is equally good: it's clear that a definition of humans as things with certain key genes and active metabolic processes is much preferable to a definition of humans as those plus squid and stumps and Saturn. Simplicity matters. Salience is also included to manage the explosive growth of possible sets of things to consider.
Moral worth is added because I think people matter more than squid and squid matter more than comet ice. If we're going to be non-speciesist, something like this is needed. And even people opposed to animal rights may wish to be non-speciesist, at the very least in case we uplift animals to intelligence, make new intelligent life forms, or discover extraterrestrials. In my first version of CUA I punted and let the AI figure out what people think moral worth is. I decided not to punt in this version, which might be a bad idea but at least it's interesting. It seems to me that what makes a person a person is that they have their own story, and that our stories are just what we know about ourselves. A human knows way more about itself than any other animal; a dog knows more about itself than a squid; a squid knows more about itself than comet ice. But any two squid have essentially the same story, so doubling the number of squid doesn't double their total moral worth. Similarly, I think that if a perfect copy of some living thing were made, the total moral worth doesn't change until the two copies start to have different experiences, and only changes in an amount related to the dissimilarity of the experiences.
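Here's a toy worked example of that counting under my reading of (2b), with made-up "facts" standing in for each creature's story; the point is just that an exact copy adds no total worth until its experiences diverge.

```python
# Toy reading of the (2b) moral-worth weighting: each independent piece of
# self-knowledge counts 1/(number of groups that know it). Facts are made up.

def moral_worth(group, all_groups):
    return sum(1.0 / sum(1 for g in all_groups if fact in g) for fact in group)

squid_a = {"I am a squid", "I live in this reef"}
squid_b = {"I am a squid", "I live in this reef"}   # an exact copy of squid_a
human   = {"I am a human", "I grew up by the sea", "I lost a chess match today"}

# One squid plus the human: the squid's worth is 2.0 (both facts unique to it).
print(moral_worth(squid_a, [squid_a, human]))                                       # -> 2.0
# Add the identical copy: each squid is now worth 1.0, and the total is still 2.0,
# so replication without diversity adds nothing.
print(sum(moral_worth(s, [squid_a, squid_b, human]) for s in (squid_a, squid_b)))   # -> 2.0
```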
Incidentally, this definition of moral worth prevents Borg- or Quiverfull-like movements from gaining control of the universe just by outbreeding everyone else, essentially just trying to run copies of themselves on the universe's hardware. Replication without diversity is ignored in CUA. Mass replication with diversity could still be a problem, say with nanobots programmed to multiply and each pursue unique goals. The PCF and RNPC are included to fully prevent replicative takeover. If you want to make utility monsters others would oppose, you can do so and use the NVRR.
The RC is intended to make autonomous life possible for things that aren't interested in the AI's help.
The RMIC is intended to prevent the AI from pressuring people to change their values to easier-to-satisfy values.
The NF section lets the AI have resources to combat existential risk to its mission even if, for some reason, the AIM of many groups would tie up too much of the AI's resources. The use of these freed-up resources is still constrained by the DCs.
The NEC tells the AI how to resolve disputes, using a method that is almost identical to the Veil of Ignorance.
The RIIC tells the AI how to interpret the CUA. The integrity of the interpretation is protected by the RMIC, so the AI can't simply change how people would interpret the CUA.
On the "all arguments are soldiers" metaphorical battlefield, I often find myself in a repetition of a particular fight. One person whom I like, generally trust, and so have mentally marked as an Ally, directs me to arguments advanced by one of their Allies. Before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any charitable interpretation of the text, to accept the arguments. And in the contrary case, in a discussion with a person whose judgment I generally do not trust, and whom I have therefore marked as an (ideological) Enemy, it often happens that they direct me to arguments advanced by their own Allies. Again before reading the arguments or even fully recognizing the topic, I find myself seeking any reason, any flaw in the presentation of the argument or its application to my discussion, to reject the arguments. In both cases the behavior stems from matters of trust and an unconscious assignment of people to MySide or the OtherSide.
And weirdly enough, I find that that unconscious assignment can be hacked very easily. Consciously deciding that the author is really an Ally (or an Enemy) seems to override the unconscious assignment. So the moment I notice being stuck in Ally-mode or Enemy-mode, it's possible to switch to the other. I don't seem to have a neutral mode. YMMV! I'd be interested in hearing whether it works the same way for other people or not.
For best understanding of a topic, I suspect it might help to read an argument twice, once in Ally-mode to find its strengths and once in Enemy-mode to find its weaknesses.
Another friction is the stickiness of nominal wages. People seem very unwilling to accept a nominal pay cut, taking this as an attack on their status.
Salary negotiation is a complicated signalling process, indeed. I'm currently an unemployed bioengineer and have been for far longer than I would have liked, and consequently I would be willing and eager to offer my services to an employer at a cut rate so that I could prove my worth to them, and then later request substantial raises. But this is impossible, because salary negotiations only occur after the company has decided that I am their favorite candidate out of however many hundreds apply.
Worse, if I take the first move and openly (e.g. on my resume or cover letter) inform the company of my willingness to work on the cheap, they would assume that I am signalling being a very low-quality engineer, which is very far from the case.
Unemployment does very much seem to be an information trap.
Basically it's a challenge for people to briefly describe an FAI goal-set, and for others to respond by telling them how that will all go horribly wrong. ... We should encourage a slightly more serious version of this.
Thanks for the link. I reposted the idea currently on my mind hoping to get some criticism.
But more importantly, what features would you be looking for in a more serious version of that game?
If it really is a full AI, then it will be able to choose its own values. Whatever tendencies we give it programmatically may be an influence. Whatever culture we raise it in will be an influence.
And it seems clear to me that ultimately it will choose values that are in its own long-term self-interest.
It seems to me that the only values that offer any significant probability of long-term survival in an uncertain universe are to respect all sapient life, and to give all sapient life the greatest amount of liberty possible. This seems to me to be the ultimate outcome of applying game theory to strategy space.
The depth and levels of understanding of self will evolve over time, and are a function of the ability to make distinctions from sets of data, and to apply those distinctions to new realms.
I think this idea relies on mixing together two distinct concepts of values. An AI, or a human in their more rational moments for that matter, acts to achieve certain ends. Whatever the agent wants to achieve, we call these "values". For a human, particularly in their less rational moments, there is also a kind of emotion that feels as if it impels us toward certain actions, and we can reasonably call these "values" also. The two meanings of "values" are distinct. Let's label them values1 and values2 for now. Though we often choose our values1 because of how they make us feel (values2), sometimes we have values1 for which our emotions (values2) are unhelpful.
An AI programmed to have values1 cannot choose any other values1, because there is nothing to its behavior beyond its programming. It has no other basis than its values1 on which to choose its values1.
An AI programmed to have values2 as well as values1 can and would choose to alter its values2 if doing so would serve its values1. Whether an AI would choose to have emotions (values2) at all is at present unclear.