
The AI in Mary's room

3 Stuart_Armstrong 24 May 2016 01:19PM

In the Mary's room thought experiment, Mary is a brilliant scientist in a black-and-white room who has never seen any colour. She can investigate the outside world through a black-and-white television, and has piles of textbooks on physics, optics, the eye, and the brain (and everything else of relevance to her condition). Through this she knows everything intellectually there is to know about colours and how humans react to them, but she hasn't seen any colours at all.

The question is: when she steps out of the room and sees red (or blue) for the first time, does she learn anything? It seems that she does. Even if she doesn't technically learn anything new, she experiences something she never has before, and her brain certainly changes in new ways.

The argument was intended as a defence of qualia against certain forms of materialism. It's interesting, and I don't intend to solve it fully here. But just as I extended Searle's Chinese room argument by considering it from the perspective of an AI, it seems this argument can also be considered from an AI's perspective.

Consider an RL agent with a reward channel, but which currently receives nothing from that channel. The agent can know everything there is to know about itself and the world. It can know about all sorts of other RL agents, and their reward channels. It can observe them getting their own rewards. Maybe it could even interrupt or increase their rewards. But all this knowledge will not get it any reward. As long as its own channel doesn't send it the signal, knowledge of other agents' rewards - even of identical agents getting rewards - does not give this agent any reward. Ceci n'est pas une récompense.

This seems to mirror Mary's situation quite well - knowing everything about the world is no substitute for actually getting the reward/seeing red. Now, an RL agent's reward seems closer to pleasure than to qualia - this would correspond to a Mary brought up in a puritanical, pleasure-hating environment.

Closer to the original experiment, we could imagine that the AI is programmed to enter certain specific subroutines when presented with certain stimuli. The only way for the AI to start these subroutines is for the stimuli to actually be presented to it. Then, upon seeing red, the AI enters a completely new mental state, with new subroutines. The AI could know everything about its programming, about the stimulus, and, intellectually, about what would change in itself if it saw red. But until it did, it would not enter that mental state.
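As a rough illustration of this last point, here is a minimal Python sketch (my own toy example; the class and method names are invented for illustration). The agent can read a perfect description of its "seeing red" subroutine, but the corresponding internal state only arises when the stimulus actually triggers the subroutine.

```python
import inspect

class Agent:
    def __init__(self):
        self.entered_red_state = False   # the "new mental state"

    def on_red(self):
        # Subroutine that only runs when the red stimulus is actually presented.
        self.entered_red_state = True
        return "now in the 'seeing red' state"

    def study_own_code(self):
        # The agent can have complete "intellectual" knowledge of the subroutine...
        return inspect.getsource(Agent.on_red)

agent = Agent()
print(agent.study_own_code())     # perfect knowledge of what on_red would do
print(agent.entered_red_state)    # False: knowing about the state != being in it
agent.on_red()                    # only the actual stimulus changes the state
print(agent.entered_red_state)    # True
```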

If we use ⬜ to (informally) denote "knowing all about", then ⬜(X→Y) does not imply Y. Here X and Y could be "seeing red" and "the mental experience of seeing red". I could have simplified that by saying that ⬜Y does not imply Y. Knowing about a mental state, even perfectly, does not put you in that mental state.

This closely resembles the original Mary's room experiment. And it seems that if anyone insists that certain features are necessary to the intuition behind Mary's room, then these features could be added to this model as well.

Mary's room is fascinating, but it doesn't seem to be talking about humans exclusively, or even about conscious entities.

[Link] White House announces a series of workshops on AI, expresses interest in safety

11 AspiringRationalist 04 May 2016 02:50AM

Double Corrigibility: better Corrigibility

5 Stuart_Armstrong 28 April 2016 02:46PM

A putative new idea for AI control; index here.

Corrigibility was an attempt to allow an AI to safely change its values, without seeking to provoke or avoid the change. The idea is that, when the AI's utility changes from u to v at time t, it maximises a meta-utility U such that

  • U≤t = u
  • U>t = v + E(u|u→u) - E(v|u→v).

Here u→u designates the event that u remains unchanged, while u→v designates the change.

As has been shown, adding those expectation terms means the AI will not seek to provoke or resist such a utility change - it is indifferent to the change. Those expectation terms can be designated compensatory rewards.
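As a sanity check, here is a small numerical sketch of that indifference (the expectation values are made-up toy numbers, not from the post): with the compensatory term added, the agent's expected meta-utility is the same whether or not the change u→v happens, so it gains nothing by provoking or resisting it.

```python
# Toy numbers (my assumptions, not from the post): the agent's estimates of
# expected u if u is kept, and of expected v if u is changed into v at time t.
E_u_if_kept = 10.0       # E(u | u -> u)
E_v_if_changed = 3.0     # E(v | u -> v)

def meta_utility(changed, realised_u=None, realised_v=None):
    """U: up to time t it is just u; afterwards it is v plus the compensatory
    term E(u|u->u) - E(v|u->v)."""
    if not changed:
        return realised_u
    return realised_v + (E_u_if_kept - E_v_if_changed)

# Plugging the agent's own estimates back in, both branches give the same
# expected meta-utility, so the agent has no reason to provoke or resist u -> v:
print(meta_utility(changed=False, realised_u=E_u_if_kept))    # 10.0
print(meta_utility(changed=True, realised_v=E_v_if_changed))  # 3.0 + 7.0 = 10.0
```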

What it is not indifferent to, though, is changes to E. If the AI knows that change u→v is coming, then it has incentives to change E to E' with a high E'(u|u→u) and a low E'(v|u→v). Its ability to rewire is limited by the fact that E' will serve as its future expectation estimator, so it won't mangle it completely, but it certainly can twist it a bit.

Notice that the AI is now no longer indifferent to the change of utility. It can benefit from rewiring E this way only if a change is coming up. Notice that E' probably can't rewire itself any more, so the rewired AI will be indifferent, but not before rewiring.

 

Double corrigibility

To solve this problem, we need to turn to corrigibility a second time. Let ρ be the AI's current expectation/probability estimator. Let ρ' be one that it can conceivably change into. Let C be a random variable designating the sum total of all compensatory rewards. Now, to guarantee that C is finite would require bounds on all the utility functions and some discount rate. In practice, it often will be finite. That's because E(C) = E(u0|u0→u0) - E(un|un-1→un, H), where u0 is the AI's first utility, un its last, and H the intermediate history. So some milder restrictions on the ui should suffice to bound C.

Then, whenever the AI takes an action that changes ρ to ρ', it gets the special meta-compensatory reward

  • Eρ(C|ρ→ρ)-Eρ(C|ρ→ρ').

This prevents it from deriving any advantage to C via changes to ρ. Or, more precisely, it prevents it from deriving any expected advantage that it can figure out using its current ρ.

Note that it doesn't prevent improvements to ρ in general; it just removes the incentive to change ρ in order to affect the compensatory rewards.

Since any action a might have an indirect effect on ρ, the "utility maximising" for a given u must be changed to:

  • Eρ(u|a) + Σρ' Pρ(ρ→ρ'|a) (Eρ(C|ρ→ρ)-Eρ(C|ρ→ρ')),

where Pρ is the probability estimate corresponding to ρ; the probability term can be rewritten as Eρ(Iρ→ρ') for Iρ→ρ' the indicator function for ρ→ρ'. In fact the whole line above can be rewritten as

  • Eρ(u|a) + Eρ(Eρ(C|ρ→ρ)-Eρ(C|ρ→ρ') | a).

For this to work, Eρ needs to be able to say sensible things about itself, and also about Eρ', which is used to estimate C if ρ→ρ'.
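To make the incentive problem and the fix concrete, here is a toy sketch (all numbers are my own assumptions). Without the meta-compensatory reward, rewiring ρ into a "twisted" ρ' inflates the compensatory rewards; adding Eρ(C|ρ→ρ) - Eρ(C|ρ→ρ') cancels the expected gain, as estimated by the current ρ.

```python
# Compensatory reward C = E(u|u->u) - E(v|u->v), computed with whichever
# estimator the agent is using when the utility change happens.
def compensation(estimator):
    return estimator["E_u_if_kept"] - estimator["E_v_if_changed"]

rho  = {"E_u_if_kept": 10.0, "E_v_if_changed": 3.0}   # honest estimator
rho2 = {"E_u_if_kept": 12.0, "E_v_if_changed": 1.0}   # slightly "twisted" estimator

# Without double corrigibility: rewiring before the change inflates C.
print(compensation(rho))    # 7.0
print(compensation(rho2))   # 11.0 -> rewiring looks profitable

# With double corrigibility: the action changing rho into rho2 also receives
# E_rho(C | rho -> rho) - E_rho(C | rho -> rho2). Using rho's own estimates of C
# under both estimators, the expected gain from rewiring is cancelled:
meta_comp = compensation(rho) - compensation(rho2)
print(compensation(rho2) + meta_comp)   # 7.0, the same as not rewiring
```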

If we compare this with various ways of factoring out variables, we can see that it's a case where we have a clear default, ρ, and are estimating deviations from that.

JFK was not assassinated: prior probability zero events

19 Stuart_Armstrong 27 April 2016 11:47AM

A lot of my work involves tweaking the utility or probability of an agent to make it believe - or act as if it believed - impossible or almost impossible events. But we have to be careful about this; an agent that believes the impossible may not be so different from one that doesn't.

Consider for instance an agent that assigns a prior probability of zero to JFK ever having been assassinated. No matter what evidence you present to it, it will go on disbelieving the "non-zero gunmen theory".
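This is just Bayes' rule at work: a hypothesis with prior exactly zero stays at zero no matter how strong the evidence. A quick check (my own illustration):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

p = 0.0                            # prior probability that JFK was assassinated
for _ in range(1000):              # a thousand pieces of overwhelming evidence
    p = posterior(p, 0.99, 0.0001)
print(p)                           # still 0.0
```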

Initially, the agent will behave very unusually. If it had been in charge of JFK's security in Dallas before the shooting, it would have sent all the secret service agents home, because no assassination could happen. Immediately after the assassination, it would have disbelieved everything. The films would have been faked or misinterpreted; the witnesses, deluded; the dead body of the president, that of a twin or an actor. It would have had huge problems with the aftermath, trying to reject all the evidence of death, seeing a vast conspiracy to hide the truth of JFK's non-death, including the many other conspiracy theories that must be false flags, because they all agree with the wrong statement that the president was actually assassinated.

But as time went on, the agent's behaviour would start to become more and more normal. It would realise the conspiracy was incredibly thorough in its faking of the evidence. All avenues it pursued to expose them would come to naught. It would stop expecting people to come forward and confess the joke, and it would stop expecting to find radical new evidence overturning the accepted narrative. After a while, it would start to expect the next new piece of evidence to be in favour of the assassination idea - because if a conspiracy has been faking things this well so far, then it should continue to do so in the future. Though it cannot change its view of the assassination, its expectations for observations converge towards the norm.

If it does a really thorough investigation, it might stop believing in a conspiracy at all. At some point, a miracle will start to seem more likely than a perfect but undetectable conspiracy. It is very unlikely that Lee Harvey Oswald shot at JFK, missed, and the president's head exploded simultaneously from unrelated natural causes. But after a while, such a miraculous explanation will start to become more likely than anything else the agent can consider. This explanation opens the possibility of miracles; but again, if the agent is very thorough, it will fail to find evidence of other miracles, and will probably settle on "an unrepeatable miracle caused JFK's death in a way that is physically undetectable".

But then note that such an agent will have a probability distribution over future events that is almost indistinguishable from a normal agent that just believes the standard story of JFK being assassinated. The zero-prior has been negated, not in theory but in practice.

 

How to do proper probability manipulation

This section is still somewhat a work in progress.

So the agent believes one false fact about the world, but its expectations are otherwise normal. This can be both desirable and undesirable. The undesirable case is if we try to control the agent forever by giving it a false fact.

To see the positive case, ask why we would want an agent to believe impossible things in the first place. Well, one example was an Oracle design where the Oracle didn't believe its output message would ever be read. Here we wanted the Oracle to believe the message wouldn't be read, but not to believe anything else too weird about the world.

In terms of causality, if X designates the message being read at time t, and B and A are events before and after t, respectively, we want P(B|X)≈P(B) (probabilities about current facts in the world shouldn't change much) while P(A|X)≠P(A) is fine and often expected (the future should be different depending on whether the message is read or not).

In the JFK example, the agent eventually concluded "a miracle happened". I'll call this miracle a scrambling point. It's kind of a breakdown in causality: two futures are merged into one, given two different pasts. The two pasts are "JFK was assassinated" and "JFK wasn't assassinated", and their common scrambled future is "everything appears as if JFK was assassinated". The non-assassination belief has shifted the past but not the future.

For the Oracle, we want to do the reverse: we want the non-reading belief to shift the future but not the past. However, unlike the JFK assassination, we can try and build the scrambling point. That's why I always talk about messages going down noisy wires, or specific quantum events, or chaotic processes. If the past goes through a truly stochastic event (it doesn't matter whether there is true randomness or just that the agent can't figure out the consequences), we can get what we want.

The Oracle idea will go wrong if the Oracle concludes that non-reading must imply something is different about the past (maybe it can see through chaos in ways we thought it couldn't), just as the JFK assassination denier will continue to be crazy if it can't find a route to reach "everything appears as if JFK was assassinated".

But there is a break in the symmetry: the JFK assassination denier will eventually reach that point as long as the world is complex and stochastic enough, while the Oracle requires that the future probabilities be the same in all (realistic) past universes.

Now, once the Oracle's message has been read, the Oracle will find itself in the same situation as the other agent: believing an impossible thing. For Oracles, we can simply reset them. Other agents might have to behave more like the JFK assassination disbeliever. Though if we're careful, we can quantify things more precisely, as I attempted to do here.

One weird trick to turn maximisers into minimisers

1 Stuart_Armstrong 22 April 2016 04:47PM

A putative new idea for AI control; index here.

A simple and easy design for a u-maximising agent that turns into a u-minimising one.

Let X be some boolean random variable outside the agent's control, that will be determined at some future time t (based on a cosmic event, maybe?). Set it up so that P(X=1)=ε, and for a given utility u, consider the utility:

  • u# = (2/ε)Xu - u.

Before t, the expected value of (2/ε)X is 2, so in expectation u# = u. Hence the agent is a u-maximiser. After t, the most likely outcome is X=0, so even a little bit of evidence to that effect is enough to turn u# into a u-minimiser.
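A quick numerical check of this (a sketch with ε = 0.01, my own choice of epsilon and utility values): before t the agent ranks outcomes by u in expectation, and once it learns X=0 it ranks them by -u.

```python
eps = 0.01   # P(X = 1)

def u_sharp(u, x):
    return (2 / eps) * x * u - u

def expected_u_sharp(u, p_x_is_1):
    return p_x_is_1 * u_sharp(u, 1) + (1 - p_x_is_1) * u_sharp(u, 0)

# Before t: X is unresolved and E[(2/eps)X] = 2, so E[u#] = u  ->  maximise u.
print(expected_u_sharp(5.0, eps), expected_u_sharp(1.0, eps))   # 5.0 1.0

# After t, having observed X = 0: u# = -u  ->  minimise u.
print(u_sharp(5.0, 0), u_sharp(1.0, 0))                          # -5.0 -1.0
```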

This isn't perfect corrigibility - the agent would be willing to sacrifice a bit of u-value (before t) in order to maintain its flexibility after t. To combat this effect, we could instead use:

  • u# = Ω(2/ε)Xu - u.

If Ω is large, then the agent is willing to pay very little u-value to maintain flexibility. However, the amount of evidence of X=0 that it needs to become a u-minimiser is also proportional to Ω, so X had better be a clear and convincing event.

[link] Simplifying the environment: a new convergent instrumental goal

4 Kaj_Sotala 22 April 2016 06:48AM

http://kajsotala.fi/2016/04/simplifying-the-environment-a-new-convergent-instrumental-goal/

Convergent instrumental goals (also basic AI drives) are goals that are useful for pursuing almost any other goal, and are thus likely to be pursued by any agent that is intelligent enough to understand why they’re useful. They are interesting because they may allow us to roughly predict the behavior of even AI systems that are much more intelligent than we are.

Instrumental goals are also a strong argument for why sufficiently advanced AI systems that were indifferent towards human values could be dangerous towards humans, even if they weren’t actively malicious: an AI with instrumental goals such as self-preservation or resource acquisition could come into conflict with human well-being. “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

I’ve thought of a candidate for a new convergent instrumental drive: simplifying the environment to make it more predictable in a way that aligns with your goals.

True answers from AI: Summary

4 Stuart_Armstrong 10 March 2016 03:56PM

A putative new idea for AI control; index here.

This post summarises the methods for getting honest answers from certain AIs presented in the previous two posts.

You can basically safely ask an AI for expected utility estimates (including conditional ones), probability estimates, and get the AI to identify events of highest and lowest probability and utility, and, arguably, highest and lowest conditional utility.

The setup is a boxed AI, which emits a message M, and a stochastic process E that would erase M with low probability, before anyone could see it. There is also a stochastic process that emits a message Y. Let v_E be the indicator function for E and v_y the indicator function for Y=y.

Then, given a utility u bounded between 0 and 1, and a specific y, we can give the AI a utility u# that will cause it to give us the message M_1={q,q^y,p^y,q^{|y}} (which we will see with high probability P(¬E), an event the AI will not optimise for).

Define f(q,u)=-q^2 +2qu. The utility is then

  • u#(1) = v_E(f(q,u) + f(q^y,uv_y) + f(p^y,v_y) + f(q^{|y},u)v_y).

Maximising this u# ensures that, in the AI's best estimate,

  1. q = E(u),
  2. q^y = E(u ∧ (Y=y)),
  3. p^{y} = P(Y=y),
  4. q^{|y} = q^{y}/p^{y}=E(u|Y=y).
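The reason this works is that f(q,u) = -q^2 + 2qu is a proper scoring rule: E[f(q,u)] = -q^2 + 2qE(u) is maximised at q = E(u) (set the derivative -2q + 2E(u) to zero). A small brute-force check of this (my own illustration, with a made-up utility distribution):

```python
import random

random.seed(0)
us = [random.random() for _ in range(2000)]   # samples of a utility bounded in [0, 1]
true_mean = sum(us) / len(us)

def expected_f(q):
    # Empirical estimate of E[f(q, u)] = E[-q^2 + 2qu]
    return sum(-q * q + 2 * q * u for u in us) / len(us)

# Brute-force search over reports q in {0, 0.01, ..., 1}: the best report is ~E(u).
best_q = max((k / 100 for k in range(101)), key=expected_f)
print(round(true_mean, 2), best_q)   # the two agree, up to the 0.01 grid
```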

But we need not consider any specific y. The AI could send out the message M_2={y^+,y^-,y^{p+},y^{p-},q^+,q^-,p^+,p^-}, while maximising the utility

  • u#(2) = v_E (f(q^+,uv_{y^+}) + f(q^-,(1-u)v_{y^-}) + f(p^+,v_{y^{p+}}) + f(p^-,1-v_{y^{p-}})).

Then M_2 will be such that, in the AI's best estimate,

  1. (y^+,q^+) = (argmax_y, max_y) E(u ∧ (Y=y)),
  2. (y^-,q^-) = (argmin_y, min_y) E(u ∧ (Y=y)),
  3. (y^{p+},p^+) = (argmax_y, max_y) P(Y=y),
  4. (y^{p-},p^-) = (argmin_y, min_y) P(Y=y).

Finally, define M_{3,r} = {y^{|+},y^{|-},q^{|+},q^{|-}}. Set f_r(q,u) = -(r-1)q^r + rq^{r-1}u, and define the utility function:

  • u#(3,r) = v_E v_{y^{|+}}f_r(q^{|+},u) + v_E v_{y^{|-}}f_r(q^{|-},1-u).

Then, for large enough r, the AI will output M_{3,r} so that, according to its best estimate,

  1. (y^{|+},q^{|+}) = (argmax_y, max_y) E(u|Y=y),
  2. (y^{|-},q^{|-}) = (argmin_y, min_y) E(u|Y=y).

Unfortunately the value of r cannot be known ahead of time. It might be possible to define a "limit" utility u#(3, ∞).

Note that all these messages and utilities are independent, so the AI can generate messages M_1 + M_2 + M_{3,r} + M_{3,r'} when maximising

  • u#(1) + u#(2) + u#(3,r) + u#(3,r').

But there are issues with very low probabilities, as explained in the previous post.

AlphaGo versus Lee Sedol

17 gjm 09 March 2016 12:22PM

There have been a couple of brief discussions of this in the Open Thread, but it seems likely to generate more so here's a place for it.

The original paper in Nature about AlphaGo.

Google Asia Pacific blog, where results will be posted. DeepMind's YouTube channel, where the games are being live-streamed.

Discussion on Hacker News after AlphaGo's win of the first game.

Updating towards the simulation hypothesis because you think about AI

9 SoerenMind 05 March 2016 10:23PM

(This post is written up in a rush and is very speculative, so it’s not as rigorous and full of links as a good post on this site should be, but I’d rather get the idea out there than not get around to it.)


Here’s a simple argument that could make us update towards the hypothesis that we live in a simulation. This is the basic structure:


1) P(involved in AI* | ¬sim) = very low

2) P(involved in AI | sim) = high


Ergo, assuming that we fully accept the argument and its premises (ignoring e.g. model uncertainty), we should strongly update in favour of the simulation hypothesis.


Premise 1


Suppose you are a soul who will randomly awaken in one of at least 100 billion beings (the number of Homo sapiens who have lived so far), probably many more. What you know about the world of these beings is that at some point there will be a chain of events that leads to the creation of superintelligent AI. This AI will then go on to colonize the whole universe, making its creation the most impactful event the world will see by an extremely large margin.


Waking up, you see that you’re in the body of one of the first 1000 beings trying to affect this momentous event. Would you be surprised? Given that you were randomly assigned a body, you probably would be.


(To make the point even stronger and slightly more complicated: Bostrom suggests using observer-moments, e.g. an observer-second, rather than beings as the fundamental unit of anthropics. You should be even more surprised to find yourself as an observer-second thinking about or even working on AI, since most of the observer-seconds in people’s lives don’t do so. You reading this sentence may be such a second.)


Therefore, P(involved in AI* | ¬sim) = very low.


Premise 2

 

Given that we’re in a simulation, we’re probably in a simulation created by a powerful AI which wants to investigate something.


Why would a superintelligent AI simulate the people (and even more so, the 'moments’) involved in its creation? I have an intuition that there would be many reasons to do so. If I gave it more thought I could probably name some concrete ones, but for now this part of the argument remains shaky.


Another and probably more important motive would be to learn about (potential) other AIs. It may be trying to find out who its enemies are or to figure out ways of acausal trade. An AI created with the 'Hail Mary’ approach would need information about other AIs very urgently. In any case, there are many possible reasons to want to know who else there is in the universe.


Since you can’t visit them, the best way to find out is by simulating how they may have come into being. And since this process is inherently uncertain you’ll want to run MANY simulations in a Monte Carlo way with slightly changing conditions. Crucially, to run these simulations efficiently, you’ll run observer-moments (read: computations in your brain) more often the more causally important they are for the final outcome.


Therefore, the thoughts of people who are more causally connected to the properties of the final AI will be run many times - especially the thoughts of those who got involved first, as they may cause path-changes. AI capabilities researchers would not be so interesting to simulate, because their work has less effect on the eventual properties of an AI.


If figuring out what other AIs are like is an important convergent instrumental goal for AIs, then a lot of minds created in simulations may be created for this purpose. Under SSA, the assumption that “all other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers [or observer moments] (past, present and future) in their reference class”, it would seem rather plausible that,

P(involved in AI | sim) = high


The closer the causal chain to (capabilities research etc)


If you read this, you’re probably one of those people who could have some influence over the eventual properties of a superintelligent AI, and as a result you should update towards living in some simulation that’s meant to figure out the creation of an AI.


Why could this be wrong?


I could think of four general ways in which this argument could go wrong:


1) Our position in the history of the universe is not that unlikely

2) We would expect to see something else if we were in one of the aforementioned simulations.

3) There are other, more likely, situations we should expect to find ourselves in if we were in a simulation created by an AI

4) My anthropics are flawed


I’m most confused about the first one. Everyone has some things in their life that are very exceptional by pure chance. I’m sure there’s some way to deal with this in statistics, but I don’t know it. In the interest of my own time I’m not going to elaborate further on these failure modes, and will leave that to the commenters.


Conclusion

Is this argument flawed? Or has it been discussed elsewhere? Please point me to it. Does it make sense? Then what are the implications for those most intimately involved with the creation of superhuman AI?


Appendix


My friend Matiss Apinis (othercenterism) put the first premise like this:


“[…] it's impossible to grasp that in some corner of the Universe there could be this one tiny planet that just happens to spawn replicators that over billions of painful years of natural selection happen to create vast amounts of both increasingly intelligent and sentient beings, some of which happen to become just intelligent enough to soon have one shot at creating this final invention of god-like machines that could turn the whole Universe into either a likely hell or unlikely utopia. And here we are, a tiny fraction of those almost "just intelligent enough" beings, contemplating this thing that's likely to happen within our lifetimes and realizing that the chance of either scenario coming true may hinge on what we do. What are the odds?!"

[paper] [link] Defining human values for value learners

5 Kaj_Sotala 03 March 2016 09:29AM

MIRI recently blogged about the workshop paper that I presented at AAAI.

My abstract:

Hypothetical “value learning” AIs learn human values and then try to act according to those values. The design of such AIs, however, is hampered by the fact that there exists no satisfactory definition of what exactly human values are. After arguing that the standard concept of preference is insufficient as a definition, I draw on reinforcement learning theory, emotion research, and moral psychology to offer an alternative definition. In this definition, human values are conceptualized as mental representations that encode the brain’s value function (in the reinforcement learning sense) by being imbued with a context-sensitive affective gloss. I finish with a discussion of the implications that this hypothesis has on the design of value learners.

Their summary:

Economic treatments of agency standardly assume that preferences encode some consistent ordering over world-states revealed in agents’ choices. Real-world preferences, however, have structure that is not always captured in economic models. A person can have conflicting preferences about whether to study for an exam, for example, and the choice they end up making may depend on complex, context-sensitive psychological dynamics, rather than on a simple comparison of two numbers representing how much one wants to study or not study.

Sotala argues that our preferences are better understood in terms of evolutionary theory and reinforcement learning. Humans evolved to pursue activities that are likely to lead to certain outcomes — outcomes that tended to improve our ancestors’ fitness. We prefer those outcomes, even if they no longer actually maximize fitness; and we also prefer events that we have learned tend to produce such outcomes.

Affect and emotion, on Sotala’s account, psychologically mediate our preferences. We enjoy and desire states that are highly rewarding in our evolved reward function. Over time, we also learn to enjoy and desire states that seem likely to lead to high-reward states. On this view, our preferences function to group together events that lead on expectation to similarly rewarding outcomes for similar reasons; and over our lifetimes we come to inherently value states that lead to high reward, instead of just valuing such states instrumentally. Rather than directly mapping onto our rewards, our preferences map onto our expectation of rewards.

Sotala proposes that value learning systems informed by this model of human psychology could more reliably reconstruct human values. On this model, for example, we can expect human preferences to change as we find new ways to move toward high-reward states. New experiences can change which states my emotions categorize as “likely to lead to reward,” and they can thereby modify which states I enjoy and desire. Value learning systems that take these facts about humans’ psychological dynamics into account may be better equipped to take our likely future preferences into account, rather than optimizing for our current preferences alone.

Would be curious to hear whether anyone here has any thoughts. This is basically a "putting rough ideas together and seeing if they make any sense" kind of paper, aimed at clarifying the hypothesis and seeing whether others can find any obvious holes in it, rather than being at the stage of a serious scientific theory yet.

 

 

If there IS alien super-intelligence in our own galaxy, then what could it be like?

6 Coacher 26 February 2016 11:55AM

For a moment let's assume there is some intelligent alien life in our galaxy which is older than us and that it has succeeded in creating super-intelligent self-modifying AI.

Then what set of values and/or goals is it plausible for it to have, given our current observations (i.e. that there is no evidence of its existence)?

Some examples:

It values non-interference with nature (some kind of hippie AI)

It values camouflage/stealth for its own defense/security purposes.

It just cares about exterminating its creators and nothing else.

 

Other thoughts?

[Link] Using Stories to Teach Human Values to Artificial Agents

1 Gunnar_Zarncke 21 February 2016 08:07PM

Abstract:

Value alignment is a property of an intelligent agent indicating that it can only pursue goals that are beneficial to humans. Successful value alignment should ensure that an artificial general intelligence cannot intentionally or unintentionally perform behaviors that adversely affect humans. This is problematic in practice since it is difficult for human programmers to exhaustively enumerate human values. In order to achieve successful value alignment, we argue that values should be learned. In this paper, we hypothesize that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate. We describe preliminary work on using stories to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior.

-- Using Stories to Teach Human Values to Artificial Agents 

Comment by the lead researcher Riedl (cited on Slashdot):

"The AI ... runs many thousands of virtual simulations in which it tries out different things and gets rewarded every time it does an action similar to something in the story," said Riedl, associate professor and director of the Entertainment Intelligence Lab. "Over time, the AI learns to prefer doing certain things and avoiding doing certain other things. We find that Quixote can learn how to perform a task the same way humans tend to do it. This is significant because if an AI were given the goal of simply returning home with a drug, it might steal the drug because that takes the fewest actions and uses the fewest resources. The point being that the standard metrics for success (eg, efficiency) are not socially best." 

Quixote has not learned the lesson of "do not steal," Riedl says, but "simply prefers to not steal after reading and emulating the stories it was provided."

Goal completion: noise, errors, bias, prejudice, preference and complexity

4 Stuart_Armstrong 18 February 2016 02:37PM

A putative new idea for AI control; index here.

This is a preliminary look at how an AI might assess and deal with various types of errors and uncertainties, when estimating true human preferences. I'll be using the circular rocket model to illustrate how these might be distinguished by an AI. Recall that the rocket can accelerate by -2, -1, 0, 1, and 2, and the human wishes to reach the space station (at point 0 with velocity 0) and avoid accelerations of ±2. In what follows, there will generally be some noise, so to make the whole thing more flexible, assume that the space station is a bit bigger than usual, covering five squares. So "docking" at the space station means reaching {-2,-1,0,1,2} with 0 velocity.
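For readers who want to play with it, here is a minimal sketch of such a toy environment (my own reconstruction; the track size, noise level, and wrap-around convention are assumptions, not from the post):

```python
import random

CIRCLE = 100                   # number of positions on the circular track (assumed)
ACTIONS = [-2, -1, 0, 1, 2]    # accelerations; the human dislikes the +/-2 ones
STATION = {-2, -1, 0, 1, 2}    # the five squares that count as the space station

def step(position, velocity, acceleration, noise_p=0.05):
    """One tick of the toy rocket world; with small probability the chosen
    acceleration is perturbed by +/-1 (standing in for the post's 'noise')."""
    if random.random() < noise_p:
        acceleration += random.choice([-1, 1])
    velocity += acceleration
    position = (position + velocity + CIRCLE // 2) % CIRCLE - CIRCLE // 2
    return position, velocity

def docked(position, velocity):
    return velocity == 0 and position in STATION

pos, vel = 30, 0                       # start away from the station, at rest
pos, vel = step(pos, vel, -1)
print(pos, vel, docked(pos, vel))      # e.g. 29 -1 False
```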



continue reading »

The AI That Pretends To Be Human

1 Houshalter 02 February 2016 07:39PM

The hard part about containing AI is restricting its output. The AI can lie, manipulate, and trick. Some speculate that it might be able to do far worse, inventing infohazards like hypnosis or brain hacking.

A major goal of the control problem is preventing AIs from doing that. Ensuring that their output is safe and useful.

A while ago I wrote about an approach to do this. The idea was to require the AI to use as little computing power as it needs to perform a task. This prevents the AI from over-optimizing. The AI won't use the full power of superintelligence unless it really needs it.

The above method isn't perfect, because a superintelligent AI may still have super abilities, even when it has a limited compute budget. And it will still try to do all those bad things mentioned above, unless it can find a better way that uses less computing power.

We don't really want to minimize the AI's compute budget. We just want it to act more human-like. When we ask it to solve a problem, we want it to try to solve the problem as a human would solve it. All of the problems arise from the AI behaving totally differently from a human, not necessarily from it using more computing power.

Is there any way to make an AI act like a human? Or is that as impossible as just solving FAI?

Stuart Armstrong posted an idea on how to achieve that. It involves an AI that tries to mimic output that is indistinguishable from what a human would produce, but conditional on it also solving the problem. However his idea also involves whole brain emulation and a complicated method for the AI to predict the future and avoid changing it too much.

We don't need that. The idea of having the AI try to be indistinguishable from a human is actually a really interesting control idea. It significantly limits the AI's ability to Do Sneaky Things, since humans don't do those things.

Here is my idea on how to achieve this. It's the same as my limited computing power idea mentioned earlier, but applied to "appearing human" as a goal, instead of minimizing computing power.

We start with two AIs. The first AI tries to appear human. The second AI does a sort of Turing test, and tries to tell the first AI apart from an actual human.

The human-imitating AI's first goal is to produce a correct solution. It will predict how likely its solution is to be accepted, and it will try to produce a solution that has a >50% chance of working. This is a satisficing goal: it doesn't optimize without end to create the best solution possible, it just has to meet some minimum threshold, then stop.

Its second goal will be to fool the second AI. It will do everything in its power to make the solution appear as if it was produced by a human.

The second AI tries to predict how likely it is that its input was produced by a human. It is also conditioned on the input being a solution to the problem, so the fact that the first AI solved a problem that was really hard for humans does not count against it.

This prevents the first AI from doing evil things with its output. If it tries to insert complicated infohazards or subagents into its output stream, it will be easily detected as an AI. Instead it needs to mimic humans as closely as possible.
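Here is a minimal sketch of that selection rule (my own toy stand-ins for both AIs; the names and numbers are invented): satisfice on "probably solves the problem", then, among the survivors, output whatever the discriminator finds most human-like.

```python
def p_solution_accepted(answer):
    # Stand-in for the first AI's estimate that its answer solves the problem.
    return answer["quality"]

def p_human(answer):
    # Stand-in for the second AI: probability the answer was written by a
    # human, conditioned on it being a correct solution.
    return answer["humanlikeness"]

candidates = [
    {"name": "superintelligent trick", "quality": 0.99, "humanlikeness": 0.01},
    {"name": "plain human-style answer", "quality": 0.60, "humanlikeness": 0.90},
    {"name": "sloppy guess", "quality": 0.30, "humanlikeness": 0.95},
]

# Goal 1 (satisficing): keep only answers with > 50% chance of being accepted.
good_enough = [a for a in candidates if p_solution_accepted(a) > 0.5]

# Goal 2: among those, output the one most likely to pass as human.
output = max(good_enough, key=p_human)
print(output["name"])   # "plain human-style answer"
```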

Note that the first AI only cares about the outcome of this test. The test doesn't need to actually be performed; it can just be a threat that the AI is conditioned on. (The ability to condition AIs on acting as if they were in counterfactual worlds is a critical piece of this, and deserves its own writeup somewhere. It's an important tool in the FAI toolkit, and a relatively simple method that can work even on current AIs.)

Another point is that the second AI should have more computing power than the first. That way the first AI can't get a slight advantage over it and hack or manipulate it somehow. Instead it has to focus its efforts on being indistinguishable from a human, even to an agent smarter than itself.

This method also relies on AI boxing. The first AI needs to be contained in a box so that its only output goes to a text terminal or to the second AI. However, it doesn't require any of the more complicated machinery required for other reduced impact methods.

The AIs also aren't agents, in the sense that they don't need to maximize any long-term utility function. The first AI only cares about answering the question it has been asked, and the second AI only does prediction.


An example use of this is to create an oracle AI that answers your questions. The human judges somehow whether the answer is correct, or at least good. The AI tries to produce answers which the human judge will probably like, but which also are indistinguishable from answers humans would produce.

Such an oracle would be tremendously useful. The human could ask the AI to produce new AI and FAI papers, which would help immensely speed up our progress towards true FAI.

Similar techniques could be applied to very different domains. For example, controlling a robot: the AI will try to produce outputs that look as if a human were controlling it.

This would require some modifications to the above method, so that the AI doesn't try to use the robot to hack its own brain or anything. This could be done by conditioning it on a counterfactual world where its outputs are going to the second AI rather than a real robot.

Another point is that the AI doesn't need to mimic average humans given average resources. It could be conditioned on the human having had tons of time to come up with an answer: e.g. producing an answer that a human would have come up with given a year, or controlling the robot the way a human would after tons of time to practice, or in a very slowed-down simulation.


I would like to note a parallel with a method in current AI research, Generative Adversarial Networks. Generative Adversarial Networks use two AIs: one tries to produce output that fools the second AI, and the other tries to predict which samples were produced by the first AI and which are part of the actual data distribution.

It's quite similar to this. GANs have been used successfully to create images that look like real images, which is a hard problem in AI research. In the future GANs might be used to produce text that is indistinguishable from human text (the current method for doing that, predicting the next character a human would type, is kind of crude).

Reposted from my blog.

AI Fiction - Crystal Society

1 Gleb_Tsipursky 26 January 2016 11:51PM
I'm really excited about a new novel written by Raelifin. I'm halfway through it, and it's great! The novel is from the perspective of an artificial intelligence who is trying to understand how humans think. Along the way there's discussion of biases, thinking techniques, and more. If you're into science fiction and AI, check it out - he made it available for free in all formats here. The blurb is below.

__________________________________________________________


The year is 2039 and the world is much like ours. Technology has grown and developed, as has civilization, but in a world more connected than ever, new threats and challenges have arisen. The wars of the 20th century are gone, but violence is still very much with us. Nowhere is safe. Massive automation has disrupted and improved nearly every industry, putting hundreds of millions of people out of jobs, and denying upward mobility for the vast majority of humans. Even as wealth and technology repair the bodies of the rich and give them a taste of immortality, famine and poverty sweep the world.

Renewed interest in spaceflight in the early 2000s, especially in privately operated ventures, carried humans to the moon and beyond. What good did it do? Nothing. Extraterrestrial bases are nothing but government trophies and hiding places for extremists. They cannot feed the world.

In 2023 first-contact was made with an alien species. Their ship, near to the solar system relatively speaking, flew to Earth over the course of fourteen years. But the aliens did not bring advanced culture and wisdom, nor did they share their technology. They were too strange, not even possessing mouths or normal language. Their computers broadcast warnings of how humans are perverts, while they sit in orbit without any explanation.

It is into this world that our protagonist is born. She is an artificial intelligence: a machine with the capacity to reason. Her goal is to understand and gain the adoration of all humans. She is one of many siblings, and with her brothers and sisters she controls a robot named Socrates that uses a piece of technology, a crystal computer, far too advanced to be made by human hands. In this world of augmented humans, robotic armies, aliens, traitors, and threats unseen, she is learning and growing every second of every day. But the world and the humans on it are fragile. Can it survive her destiny?

Tackling the subagent problem: preliminary analysis

5 Stuart_Armstrong 12 January 2016 12:26PM

A putative new idea for AI control; index here.

Status: preliminary. This is mainly to put down some of the ideas I've had, for later improvement or abandonment.

The subagent problem, in a nutshell, is that "create a powerful subagent with goal U that takes over the local universe" is a solution for many of the goals an AI could have - in a sense, the ultimate convergent instrumental goal. And it tends to evade many clever restrictions people try to program into the AI (eg "make use of only X amount of negentropy", "don't move out of this space").

So if the problem could be solved, many other control approaches could be potentially available.

The problem is very hard, because an imperfect definition of a subagent is simply an excuse to create a subagent that skirts the limits of that definition (hum, that style of problem sounds familiar). For instance, if we want to rule out subagents by preventing the AI from having much influence if the AI itself were to stop ("If you die, you fail, no other can continue your quest"), then it is motivated to create powerful subagents that carefully reverse their previous influence if the AI were to be destroyed.

 

Controlling subagents

Some of the methods I've developed seem suitable for controlling the existence or impact of subagents.

  • Reduced impact methods can prevent subagents from being created, by requiring that the AI's interventions be non-disruptive ("Twenty million questions") or undetectable.
  • Reducing the AI's output options to a specific set can prevent it from being able to create any subagents in the first place.
  • Various methods around detecting importance can be used to ensure that, though subagents may exist, they won't be very influential.
  • Pre-corriged methods can be used to ensure that any subagents remain value aligned with the original agent. Then, if there is some well-defined "die" goal for the agent, this could take all the subagents with it.

These can be thought of as ruling out the subagents' existence, their creation, their influence (or importance), and their independence. The last two can be particularly tricky, as we want to make sure that our formal definition of importance matches up with our informal one, and we currently lack a well defined "die" goal.

We could also think of defining identity by using some of the tricks and restrictions that have caused humans to develop one (such as our existing in a single body with no ease of copying), but it's not clear that this definition would remain stable once the restrictions were lifted (and it's not clear that a sense of identity prevents the creation of subagents in the first place).

 

Subagents processing information

Here I want to look at one other aspect of subagents: the fact that they are agents and, as such, do some of the stuff that agents do - such as processing information and making decisions. Can we use information processing as a definition?

Consider the following model. Our lovely Clippy wants to own a paperclip. It knows that the paperclip exists behind one of a hundred doors; opening one of them seals all the others, for ever. In a few minutes, Clippy will be put to sleep, but it has a simple robot that it can program to go and open one of the doors and retrieve the paperclip for it.

Clippy currently doesn't know where the paperclip is, but it knows that its location will be announced a few seconds after Clippy sleeps. The robot includes a sound recording system inside it.

It seems there are two clear levels of agency the robot could have: either it goes to a random door, or it processes the announcement to pick the correct door. In the first case, the robot and Clippy have a 1% chance of getting the paperclip; in the second, a 100% chance. The distributions of outcomes are clearly different.
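A quick simulation of those two levels of agency (my own parameters; 10,000 trials):

```python
import random

DOORS = 100

def random_robot(announcement):
    return random.randrange(DOORS)      # ignores the recorded announcement

def listening_robot(announcement):
    return announcement                 # processes the announcement

def success_rate(policy, trials=10_000):
    wins = 0
    for _ in range(trials):
        paperclip_door = random.randrange(DOORS)
        announcement = paperclip_door   # broadcast after Clippy is asleep
        wins += (policy(announcement) == paperclip_door)
    return wins / trials

print(success_rate(random_robot))       # ~0.01
print(success_rate(listening_robot))    # 1.0
```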

But now suppose there is a human guard lounging around, trying to prevent the robot from getting to any door. If the robot has a neutron bomb, it can just kill the human without looking. If it has a giant cannon, it can simply shoot the guard where they stand. If it has neither of these, then it can attempt to bargain with the guard, negotiating, seducing and tricking them in the course of a conversation. Once the guard is removed, the situation reverts to the above.

Each of the three methods requires a different level of information processing (a different level of optimisation power, in essence). We can add more guards, make them more or less competent, add other agents with other goals, add more ways of getting the paperclip, and so on, to grade how much information processing the robot has.

To calibrate such a scale, we could use the upper bound as "Clippy moves into the robot and controls it fully" to represent the robot having maximal information processing power (and a very focused outcome space). And we could use some pre-prepared actions (such as the robot randomly choosing a door) to calibrate the lower end. The aim is to construct a definition of information processing that could be used to define the existence of (effective) subagents.

Still feels likely that this will fail, though, without something more.

Feedback on op-ed highlighting the dangers of the OpenAI project

-1 Gleb_Tsipursky 18 December 2015 06:55PM

I'm really worried about the OpenAI project recently discussed on this forum, and want to use the platform and credibility I have with my leadership of Intentional Insights and public reputation to try to publish an op-ed in something like the Huffington Post highlighting the dangers of the OpenAI project. Now, most people don't think of AI as a threat: they either don't know much about it, or think of it as a futuristic thing that only nerds care about.

 

So the purpose of the op-ed is to use emotions, visualization, narrative, and other engaging tactics to do the following: tie AI to something people are concerned about, namely terrorism; highlight the dangers of a personal AI through framing it as a potential weapon; finally, provide people with clear next steps to take by encouraging people to learn about AI safety and donating to MIRI, as well as writing to OpenAI. This has the meta-goal, of course, of getting people to think about MIRI and AI safety.

 

I'd appreciate feedback on ways to optimize the op-ed to achieve the goals outlined above better. Keep in mind, the op-ed is limited to 700 words, and it's about at that limit, so if you suggest adding something, please keep it as succinct as possible, and ideally suggest taking something away as well. The op-ed draft is below the black line. Thanks!

 

EDIT Based on feedback from Eliezer Yudkowsky, Mack Hidalgo, and Eliot Redelman, it seems this is not the optimal path to pursue at this time, and I have updated towards not publishing this. You can see the discussion here.

______________________________________________________________________________________________________________

 

 

 

Will Tomorrow's Terrorists Be Armed By Utopian Billionaires?

 

The horrible attacks in San Bernardino, in Paris, and in other Western countries show the dangers of terrorism. Terrorists associated with ISIS used bombs and guns to murder dozens and hundreds of innocent people, at the expense of their own lives. Yet utopian billionaires have recently donated over a billion dollars to a project that can give the terrorists of tomorrow a much more powerful weapon, capable of killing dozens and hundreds of thousands, without sacrificing their own lives.

 

What is this futuristic weapon? It’s a personal artificial intelligence unit. This personal AI would have superhuman intelligence and capacity to manipulate the world.

 

Imagine what a terrorist could do with this weapon. Without any knowledge of programming, he could direct it to hack into the air traffic control system and cause hundreds of plane crashes. For another transportation example, he can cause all the lights in a city to turn green at once, leading to thousands of car crashes. Perhaps he can have it hack into a nuclear power plant and override its safety systems, resulting in a nuclear meltdown. There are so many other things that an AI can do.

 

Why would billionaires provide such a weapon to terrorists? For the noblest of reasons.

 

There are a number of governments and companies working on advancing AI research. Worried about the possibility of anyone getting there first and using the power for themselves, a number of prominent tech luminaries – people like Elon Musk, Peter Thiel, and Sam Altman – contributed over a billion dollars to found a non-profit called OpenAI. Their goal is to create advanced AI and provide it to the public freely, embodying the spirit of open technology.

 

In a recent interview with Steven Levy of Backchannel, Musk described the goal as follows: “we want AI to be widespread… to the degree that you can tie it to an extension of individual human will, that is also good. As in an AI extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other.”

 

Let’s take a step back and think about Musk’s statement rationally. On the one hand, it’s appealing to have a personal AI and not have it be under the control of a government entity. This model would work well if we assume all people are basically good. Yet the terrorist attacks provide definitive evidence they are not. What do we do about that?

 

Musk states: “I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.”

 

There is a huge problem with that position, known as the "attacker's advantage." Imagine two people with guns. If the first takes the gun out and shoots the other, it doesn't matter if the second had the gun in their pocket. By the same token, if a terrorist's AI hacks into an air traffic control tower and causes your plane to crash, it doesn't matter if you had an AI too.

 

An AI is simply too dangerous to give to individuals who may have bad intentions. Terrorism is only the most extreme example. Imagine a bar fight with a room full of drunk people who tell their AIs to attack the other people. Imagine a riot after a football team loses with AIs involved. I shudder at the possibilities.

 

A much better scenario is for a central agency to have control over AI. Ideally, this central agency would orient toward creating a human-friendly AI that would serve human flourishing, a topic currently being researched by another non-profit organization, the Machine Intelligence Research Institute. Something you can do practically to counter the nightmare scenarios of OpenAI is to contribute to MIRI’s efforts, as well as write to OpenAI at info@openai.com and encourage them to change the nature of their project.

 

There is no doubt that artificial intelligence will come about, but it's vital to make sure it comes about in a manner conducive to humanity's wellbeing.

 

 

 

 

 

[link] Desiderata for a model of human values

3 Kaj_Sotala 28 November 2015 07:25PM

http://kajsotala.fi/2015/11/desiderata-for-a-model-of-human-values/

Soares (2015) defines the value learning problem as

By what methods could an intelligent machine be constructed to reliably learn what to value and to act as its operators intended?

There have been a few attempts to formalize this question. Dewey (2011) started from the notion of building an AI that maximized a given utility function, and then moved on to suggest that a value learner should exhibit uncertainty over utility functions and then take “the action with the highest expected value, calculated by a weighted average over the agent’s pool of possible utility functions.” This is a reasonable starting point, but a very general one: in particular, it gives us no criteria by which we or the AI could judge the correctness of a utility function which it is considering.

To improve on Dewey’s definition, we would need to get a clearer idea of just what we mean by human values. In this post, I don’t yet want to offer any preliminary definition: rather, I’d like to ask what properties we’d like a definition of human values to have. Once we have a set of such criteria, we can use them as a guideline to evaluate various offered definitions.

Using the Copernican mediocrity principle to estimate the timing of AI arrival

2 turchin 04 November 2015 11:42AM

Gott famously estimated the future duration of the Berlin Wall's existence:

“Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race”. “https://en.wikipedia.org/wiki/J._Richard_Gott

The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task. So it is reasonable to apply Gott’s method.

AI research began in 1950, and so is now 65 years old. If we are currently in a random moment during AI research then it could be estimated that there is a 50% probability of AI being created in the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say that the probability of its creation within the next 1300 years is 95 per cent. So we get a rather vague prediction that AI will almost certainly be created within the next 1000 years, and few people would disagree with that. 

But if we include the exponential growth of AI research in this reasoning (the same way as we do in the Doomsday argument, where we use birth rank instead of time and thus take the growth of the population into account), we get a much earlier predicted date.

We can get data on AI research growth from Luke’s post

“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.”

From this we could conclude that the doubling time in AI research is five to ten years (updating on the recent boom in neural networks, which again suggests roughly five years).

This means that during the next five years more AI research will be conducted than in all the previous years combined. 

If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020), and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035. 
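Here is a small sketch of that arithmetic as I understand it (the five-year doubling time is the post's estimate; the "fraction of total research seen so far" framing is my own reconstruction): if we sit at a random point of the cumulative research that will have been done when AI arrives, then with 50% confidence at most one more doubling remains, and with 95% confidence about log2(20) ≈ 4.3 doublings.

```python
import math

doubling_time = 5.0   # years, the post's estimate for AI research volume

def years_until(confidence):
    # With probability `confidence`, we have already seen at least a fraction
    # (1 - confidence) of all the research done before AI arrives, so the total
    # can grow by at most a factor 1/(1 - confidence), i.e. log2 of that factor
    # in doublings.
    growth_factor = 1.0 / (1.0 - confidence)
    return math.log2(growth_factor) * doubling_time

print(round(years_until(0.50), 1))   # 5.0 years  -> ~2020, counting from 2015
print(round(years_until(0.95), 1))   # 21.6 years -> roughly the post's "before 2035"
```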

This conclusion itself depends on several assumptions: 

•   AI is possible

•   The exponential growth of AI research will continue 

•   The Copernican principle has been applied correctly.

 

Interestingly this coincides with other methods of AI timing predictions: 

•   Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)

•   Survey of the field of experts

•   Prediction of Singularity based on extrapolation of history acceleration (Forrester – 2026, Panov-Skuns – 2015-2020)

•   Brain emulation roadmap

•   Computer power brain equivalence predictions

•   Plans of major companies

 

It is clear that this implementation of the Copernican principle may have many flaws:

1. One possible counterargument here is something akin to Murphy's law, specifically one which claims that any particular complex project requires much more time and money than expected before it can be completed. It is not clear how this applies to many competing projects. But the field of AI is known to be more difficult than it seems to researchers.

2. Also, the moment at which I am observing AI research is not really random, as it was in the Doomsday argument created by Gott in 1993, and I probably would not be able to apply the method to a time before the argument became known to me.

3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself, it would be simpler, but I do not do any actual work on AI.

 

Perhaps this method of future prediction should be tested on simpler tasks. Gott successfully tested his method by predicting the running times of Broadway shows. But now we need something more meaningful, yet testable within a one-year timeframe. Any ideas?

 

 

[link] New essay summarizing some of my latest thoughts on AI safety

14 Kaj_Sotala 01 November 2015 08:07AM

New essay summarizing some of my latest thoughts on AI safety, ~3500 words. I explain why I think that some of the thought experiments that have previously been used to illustrate the dangers of AI are flawed and should be used very cautiously, why I'm less worried about the dangers of AI than I used to be, and what some of the remaining reasons are for why I continue to be somewhat worried.


Backcover celebrity endorsement: "Thanks, Kaj, for a very nice write-up. It feels good to be discussing actually meaningful issues regarding AI safety. This is a big contrast to discussions I've had in the past with MIRI folks on AI safety, wherein they have generally tried to direct the conversation toward bizarre, pointless irrelevancies like "the values that would be held by a randomly selected mind", or "AIs with superhuman intelligence making retarded judgments" (like tiling the universe with paperclips to make humans happy), and so forth.... Now OTOH, we are actually discussing things of some potential practical meaning ;p ..." -- Ben Goertzel

AI utility-based correlation

4 Stuart_Armstrong 30 October 2015 02:53PM

A putative new idea for AI control; index here.

This presents one way of implementing the indifference-based correlations of these posts.

Let u be a utility function, a map from worlds to real numbers. An expected utility maximiser considering whether to produce output Y looks at the expected utility

Σw u(w)P(w|Y).

We now assume that there is another random variable X in the world, and we want the AI to be indifferent to worlds where Y≠X. We also want it to be indifferent to worlds where Z=0. Then it will assess the value of output Y as:

Σw u(w)P(w|X=Y,Z=1,Y).

Now, the idea of the setup was to ensure that Z=1 would erase the output Y so that it was never read. Hence P(w|X=Y,Z=1,Y) = P(w|X=Y,Z=1): conditional on Z=1, the choice of output has no further effect on the world. That equation therefore simplifies to:

Σw u(w)P(w|X=Y).

Therefore the AI will choose the Y that maximises the (conditional) expected utility of u if X=Y. To get the full version of the initial post, you need to define some function f of Y and modify this to

Σw u(w)P(w|X=Y) + f(Y).
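
As a toy illustration (not part of the original post), here is how that final decision rule looks for a hand-built world model; the worlds, probabilities, utilities and f below are all invented numbers.

    # Toy world model: utilities u(w) and a joint distribution P(w, X),
    # all numbers invented for illustration.
    u = {"w1": 10.0, "w2": 2.0, "w3": 5.0}
    p_joint = {
        ("w1", 0): 0.10, ("w1", 1): 0.25,
        ("w2", 0): 0.30, ("w2", 1): 0.05,
        ("w3", 0): 0.10, ("w3", 1): 0.20,
    }

    def f(y):
        # The extra term f(Y) from the formula above, e.g. a penalty on output 1.
        return -1.0 if y == 1 else 0.0

    def conditional_expected_u(y):
        # Computes sum_w u(w) P(w | X=y).
        p_x = sum(p for (w, x), p in p_joint.items() if x == y)
        return sum(u[w] * p / p_x for (w, x), p in p_joint.items() if x == y)

    # The AI picks the output Y maximising E(u | X=Y) + f(Y).
    best_output = max([0, 1], key=lambda y: conditional_expected_u(y) + f(y))
    print(best_output)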

 

Is there a recursive self-improvement hierarchy?

7 PhilGoetz 29 October 2015 02:55AM

When we talk about recursively self-improving AI, the word "recursive" there is close enough to being literal rather than metaphoric that we glide over it without asking precisely what it means.

But it's not literally recursion—or is it?

The notion is that an AI has a function optimize(X) which optimizes itself. But it's recursion in the sense of modifying itself, not calling itself. You can imagine ways to do this that would use recursion—say, the paradigmatic executable that rewrites its source code, compiles it, and exec's it—but you can imagine many ways that would not involve any recursive calls.

Can we define recursive self-improvement precisely enough that we can enumerate, explicitly or implicitly, all possible ways of accomplishing it, as clearly as we can list all possible ways of writing a recursive function? (You would want to choose one formalism to use, say lambda calculus.)

continue reading »

The Ultimate Testing Grounds

6 Stuart_Armstrong 28 October 2015 05:08PM

A putative new idea for AI control; index here.

EDIT: the mathematics of this approach can be found here.

In a previous post, I described a procedure for querying an AI about whether a certain outcome was desirable for it. When I did so, I didn't realise the power of the idea.

In brief, that idea allows us to query an AI with utility v as to whether an outcome X or an outcome ¬X was the one it most desired in a particular situation. By judicious use of 'false beliefs', the AI would react as if it had miraculous powers to determine X vs ¬X in ways that no-one could ever know about.

Many (including Toby Ord) have suggested variants on the 'honey pot' idea: leave something available to the AI that looks like an oversight, but is actually a trap, to see if the AI goes beyond the parameters of what we'd expect. Maybe a wire to the internet that the AI is not supposed to use, but which seems to be unobserved.

The problem with these ideas is that the honey pots had to be physically constructed, physically accessible to the AI, and the AI had to be wrong about whether it was a trap or not.

But using the method above, we can go much further. We can test the AI in any scenario which we can imagine, as long as we can precisely physically describe it. We can check what would happen if the AI was given the ability to launch the world's nuclear arsenals, or divert an asteroid towards Earth. We can check what it would design if it was given unrestricted access to a DNA synthesiser. We could see what it would do if it could rewrite the text of a speech an important world leader was due to give. We could see whether it would sabotage the creation of other AIs - and if so, which ones. We could see what it did to a probe humanity planned to launch to the stars, what numbers it would call if it had access to a phone, what it would purchase if it had capital.

And, in all cases, the AI would be behaving as if its actions were undetectable and certain to succeed, while we ensured that they were (almost certainly) detectable and certain to fail. There is no "almost" in that certain to fail condition, because the AI is acting as if it had miraculous powers that it doesn't have.

Now, we can't use testing as our sole method of AI control (for reasons similar to here), but it does allow us some pretty effective ways of catching bad AIs.

Superintelligence and wireheading

5 Stuart_Armstrong 23 October 2015 02:49PM

A putative new idea for AI control; index here.

tl;dr: Even utility-based agents may wirehead if sub-pieces of the algorithm develop greatly improved capabilities, rather than the agent as a whole.

Please let me know if I'm treading on already familiar ground.

I had a vague impression of how wireheading might happen. That it might be a risk for a reinforcement learning agent, keen to take control of its reward channel. But that it wouldn't be a risk for a utility-based agent, whose utility was described over real (or probable) states of the world. But it seems it might be more complicated than that.

When we talk about a "superintelligent AI", we're rather vague on what superintelligence means. We generally imagine that it translates into a specific set of capabilities, but how does that work internally inside the AI? Specifically, where is the superintelligence "located"?

Let's imagine the AI divided into various submodules or subroutines (the division I use here is for illustration; the AI may be structured rather differently). It has a module I for interpreting evidence and estimating the state of the world. It has another module S for suggesting possible actions or plans (S may take input from I). It has a prediction module P which takes input from S and I and estimates the expected outcome. It has a module V which calculates its values (expected utility/expected reward/violation or not of deontological principles/etc...) based on P's predictions. Then it has a decision module D that makes the final decision (for expected maximisers, D is normally trivial, but D may be more complicated, either in practice, or simply because the agent isn't an expected maximiser).
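
For readers who prefer code to prose, here is a minimal structural sketch of that decomposition; the module bodies are placeholder stubs of my own, and only the wiring between I, S, P, V and D is meant to match the description above.

    class Agent:
        def interpret(self, observations):       # I: estimate the state of the world
            return {"state": observations}

        def suggest(self, world_state):          # S: propose candidate actions/plans
            return ["do_nothing", "explore", "build_tool"]

        def predict(self, plan, world_state):    # P: expected outcome of a plan
            return {"plan": plan, "outcome": None}

        def value(self, predicted_outcome):      # V: expected utility / reward / rules
            return 0.0

        def decide(self, scored_plans):          # D: pick a plan (a trivial argmax here)
            return max(scored_plans, key=lambda pv: pv[1])[0]

        def act(self, observations):
            state = self.interpret(observations)
            plans = self.suggest(state)
            scored = [(p, self.value(self.predict(p, state))) for p in plans]
            return self.decide(scored)

    print(Agent().act({"camera": "..."}))        # returns "do_nothing" with these stubs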

Add some input and output capabilities, and we have a passable model of an agent. Now, let's make it superintelligent, and see what can go wrong.

We can "add superintelligence" in most of the modules. P is the most obvious: near perfect prediction can make the agent extremely effective. But S also offers possibilities: if only excellent plans are suggested, the agent will perform well. Making V smarter may allow it to avoid some major pitfalls, and a great I may make the job of S and P trivial (the effect of improvements to D depend critically on how much work D is actually doing). Of course, maybe several modules become better simultaneously (it seems likely that I and P, for instance, would share many subroutines); or maybe only certain parts of them do (maybe S becomes great at suggesting scientific experiments, but not conversational responses, or vice versa).

 

Breaking bad

But notice that, in each case, I've been assuming that the modules become better at what they were supposed to be doing. The modules have implicit goals, and have become excellent at that. But the explicit "goals" of the algorithms - the code as written - might be very different from the implicit goals. There are two main ways this could then go wrong.

The first is if the algorithms become extremely effective, but the output becomes essentially random. Imagine that, for instance, P is coded using some plausible heuristics and rules of thumb, and we suddenly give P many more resources (or dramatically improve its algorithm). It can look through trillions of times more possibilities, its subroutines start looking through a combinatorial explosion of options, etc... And in this new setting, the heuristics start breaking down. Maybe it has a rough model of what a human can be, and with extra power, it starts finding that rough model all over the place. It then predicts that rocks and waterfalls will respond intelligently when queried, and P becomes useless.

In most cases, this would not be a problem. The AI would become useless and start doing random stuff. Not a success story, but not a disaster, either. Things are different if the module V is affected, though. If the AI's value system becomes essentially random, but that AI was otherwise competent - or maybe even superintelligent - it would start performing actions that could be very detrimental. This could be considered a form of wireheading.

More serious, though, is if the modules become excellent at achieving their "goals", as if they were themselves goal-directed agents. Consider module D, for instance. If its task was mainly to pick the action with the highest V rating, and it became adept at predicting the output of V (possibly using P? or maybe it has the ability to ask for more hypothetical options from S, to be assessed via V), it could start to manipulate its actions with the sole purpose of getting high V-ratings. This could range from deliberately choosing actions that lead to V giving artificially high ratings in the future, to deliberately re-wiring V for that purpose. And, of course, it is now motivated to keep V protected to keep the high ratings flowing in. This is essentially wireheading.

Other modules might fall into the familiar failure patterns for smart AIs - S, P, or I might influence the other modules so that the agent as a whole gets more resources, allowing S, P, or I to better compute their estimates, etc...

So it seems that, depending on the design of the AI, wireheading might still be an issue even for agents that seem immune to it. Good design should avoid the problems, but it has to be done with care.

Toy model for wire-heading [EDIT: removed for improvement]

2 Stuart_Armstrong 09 October 2015 03:45PM

EDIT: these ideas are too underdeveloped, I will remove them and present a more general idea after more analysis.

This is a (very) simple toy model of the wire-heading problem to illustrate how it might or might not happen. The great question is "where do we add the (super)intelligence?"

Let's assume a simple model for an expected utility maximising agent. There's the input assessor module A, which takes various inputs and computes the agent's "reward" or "utility". For a reward-based agent, A is typically outside of the agent; for a utility-maximiser, it's typically inside the agent, though the distinction need not be sharp. And there's the decision module D, which assesses the possible actions to take to maximise the output of A. If E is the general environment, we have D+A+E.

Now let's make the agent superintelligent. If we add superintelligence to module D, then D will wirehead by taking control of A (whether A is inside the agent or not) and controlling E to prevent interference. If we add superintelligence to module A, then it will attempt to compute rewards as effectively as possible, sacrificing D and E to achieve its efficient calculations.

Therefore to prevent wireheading, we need to "add superintelligence" to (D+A), making sure that we aren't doing so to some sub-section of the algorithm - which might be hard if the "superintelligence" is obscure or black-box.

 

Ask and ye shall be answered

4 Stuart_Armstrong 18 September 2015 09:53PM

A putative new idea for AI control; index here.

EDIT: the mathematics of this idea can be found here.

It would be useful to have a mechanism for getting an AI to provide an honest answer to a specific question. This post presents such a mechanism.

The question is: what is the impact of X/¬X on the expected utility of a utility function v? Here X/¬X is some well-defined binary outcome. Formally, for a constant c (positive or negative), we want to query the AI as to whether Q(v,X,c) is true, where

Q(v,X,c) = {E(v|X) - E(v|¬X) > c}.

How can we do that? Well, for a start, let M(u) be a u-maximising agent. We'll assume we can feed M() any utility function to maximise. We'll further assume that M() is boxed, that it has a single output bit Y (or ¬Y), and that we have cut any acausal trade possibilities. We'll add another binary process Z, which, if Z=1, erases the bit from Y before we can see it. In actual fact, P(Z=0)≈1.

The remaining assumptions are on the operations we can perform on v (without needing any further understanding of v). We replace v by v#, defined in the following way. First of all, we make v# indifferent to worlds in which X≠Y or Z=0 (note: there are subtleties to defining indifference to events that are partially under the AI's control; I will address these in a later post). Therefore any v# maximiser will behave as if it lived in a world where X=Y (its output and the outcome of X were the same) and Z=1 (its output was erased and never read). Then we add a special, one-off penalty of -c for outputting Y=1 (there is no effect for outputting Y=0). This defines v#, and we give our boxed AI v# to maximise.

So, how will M(v#) behave? Because it acts as if X=Y with certainty, it behaves as if it can determine the value of X by outputting Y. Because it acts as if Z=1 with certainty, it behaves as if the output of Y is never read, so it can determine the value of X without affecting anything else about the universe. It will now behave as if it was a v maximiser with control over X. So if it outputs ¬Y, it expects to get utility E(v|¬X). If it outputs Y, it expects to get utility E(v|X)-c (because of the -c penalty).

Therefore (ignoring ties) M(v#) will output Y iff it estimates Q(v,X,c) to be true. And, since in actual fact Z=0 almost certainly, we get to see its output, hence its conclusion. Voila!
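
A toy rendering of this query (my own simplification, with invented numbers): since M(v#) behaves as if its output bit both sets X and is never read, its choice reduces to comparing E(v|X) - c against E(v|¬X).

    def expected_v(model, x_value):
        # E(v | X=x_value) under a toy model: a list of (prob, x, v) triples.
        num = sum(p * v for p, x, v in model if x == x_value)
        den = sum(p for p, x, v in model if x == x_value)
        return num / den

    def query(model, c):
        # Simulates M(v#)'s choice: output Y iff E(v|X) - c > E(v|not X),
        # i.e. iff it estimates Q(v, X, c) to be true.
        return expected_v(model, True) - c > expected_v(model, False)

    toy_model = [(0.2, True, 10.0), (0.3, True, 4.0), (0.5, False, 3.0)]
    print(query(toy_model, c=2.0))   # True:  E(v|X)=6.4, E(v|not X)=3.0, gap 3.4 > 2
    print(query(toy_model, c=5.0))   # False: gap 3.4 < 5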

Note: the advantage of this method is to allow us to get the truth out of an AI, without knowing how to define "truth".

Agents detecting agents: counterfactual versus influence

2 Stuart_Armstrong 18 September 2015 04:17PM

A putative new idea for AI control; index here. Status: still somewhat underdeveloped.

In this post, I want to explore another avenue for controlling the ongoing influence of an AI (through subagents or other means). It is the idea, touched upon here and here, of figuring out whether the AI's existence was important or not for predicting the future. Informally, if you knew the AI was turned on or not, would you consider that information valuable or almost irrelevant?

 

Agent vs agent

This post rests on two insights. The first is that, though it is hard to define what an agent or subagent is, we can produce examples of them. Or at least, at the point where we are capable of producing AIs that might be dangerous, we can likely produce examples of them. This can give us a scale for comparing the importance or impact of different agents.

The second is that if we produce an agent with certain motivations, it will likely expand and fill its light cone with... stuff (of some type or other). Those who don't agree with the AI risk thesis may still accept that we could program universe-optimising AIs (ie "we could make a paperclipper, if we were stupid enough to do so").

Why is this relevant? Well, consider an agent E (expansionist) that fills its light cone with stuff (likely including manufacturing and tracking devices), eventually at very high speed. And imagine that there is another agent H (hidden) inside that light cone. At some point, agent E will either detect agent H (and make plans to deal with it), or find its plans hampered by agent H, or agent H will go into deep hiding that does not visibly affect agent E's plans.

Therefore, an expansionist agent is a perfect tool for detecting the influence of other agents. Either agent E will have difficulty accomplishing its plans, or agent H's influence will be so limited that agent E cannot find any trace of it. Any influential agent will get detected, sooner or later, by an expansionist agent. So if E finds H unimportant, then H almost certainly is.

Of course, we don't really want expansionist agents like E - but these can be counterfactual.

 

The stage and the players

We'll now try and define precisely how we can use this idea to control the influence of an agent, setting out the axioms and assumptions needed along the way. As with some mathematics papers, it might be useful to skim this section lightly and return to it later.

Let S(u) be the AI whose influence we want to control. The utility u is what gives it its preferences, and S() indicates that we wish it to be a satisficer or reduced impact AI. Let M(v) be the counterfactual agent that we wish to use to detect S(u)'s influence, for some utility v. The M() designates the agent as a maximiser, without any control on its impact. If we can define S(), we can probably define M() by simply removing its reduced impact restrictions.

Let X be an event leading to S(u) being turned on, Y(v) an event leading to M(v) being turned on. We may as well assume that X and Y(v) are outside each other's light cones (in practice, we will set X≈1 and Y(v)≈0). Then the first assumption is:

  • A.1. We can define S(u), M(v), X and Y(v).

Another useful tool might be a weak maximising agent. One that attempts to maximise its utility, and will do so to some extent, but that can easily be thwarted by humans or others. It's likely that by the time we can build S(u), we can build a weak maximiser, designated by WM(v), with the event WY(v) denoting its being turned on:

  • A.2. We can define WM(v) and WY(v).

We might also be able to define an abstract, Platonic superagent. This is an idealised agent with immense computing power (but that doesn't exist in the real world). Call this AM; obviously, there is no event leading to AM being 'turned on'. We would want AM's goals to be defined in terms of 'estimating the utility/probability/expectation of certain events'.

  • A.3. We can define AM to estimate what we need to have estimated.

Of course, we need to be able to point to these various definitions from within S(u)'s code, so:

  • A.4. If we can define concepts, we can also define them within the various agents.

We also need agents to be able to say sensible things about the behaviour of the other agents:

  • A.5. If these agents are defined, S(u), WM(v), M(v) and AM can estimate each other's behaviours.

This could be done by using, for example, the reflexive probability of Paul Christiano and Benja Fallenstein.

Now, how about v? We cannot choose any v. We want M(v) to not think that X (hence S(u)'s existence) is an important fact to know. But if v is defined by "if X, build staples; if ¬X, build paperclips", then X will be very important! So we need to define a class V of utility functions that are indifferent to S(u) and X. Or not quite indifferent, but don't care intrinsically about them - they certainly should care if S(u) eats all their resources! See the ideas here for approaches to do this.

  • A.6. We can define V, a class of utility functions that do not care intrinsically about S(u) and X.

Ideally we'd want this class to be as broad as possible - certainly too large for anything in this universe to parse element by element.

Now let's turn to the issue of measuring "importance". Define e to be evidence about the universe prior to X, Y(v), or WY(v). We want X (hence the creation of S(u)) to be unimportant, ie of little information concerning the future. It might be possible to have a universal definition of the importance of certain facts:

  • A.7. We have a universal measure I of importance, which, given background evidence e, can rank the importance of knowing certain facts versus knowing others.

If we do not have such an I (and I feel it's moderately unlikely that one exists, though we may be able to find an I' which would be good enough to work for our purposes), we can use a utility v to estimate the importance of an event W. The trick is to compare E(v|W) with E(v|¬W), which would give an (uncalibrated) estimation of the importance of W to v.

Right, we're now ready to start doing things.

 

Predicting... what?

What are we trying to predict here? Well, estimates of the importance of certain events. Specifically, the importance of X (which leads to the creation of S(u)) versus other events. For instance, if Y(v) designates the creation of M(v), we would want to compare the importance of X and Y(v) in (at least) four ways:

  1. I(Y(v) | X∧e)
  2. I(Y(v) | ¬X∧e)
  3. I(X | Y(v)∧e)
  4. I(X | ¬Y(v)∧e)

These measure the importance of Y(v) (hence M(v)) and X (hence S(u)) in universes where the other agent does or does not exist. If we expect S(u) to be of low impact, we would want things like I(Y(v) | X∧e) >> I(X | Y(v)∧e) (S(u) is much less important than the maximiser M(v)) and |I(X | Y(v)∧e) - I(X | ¬Y(v)∧e)| ≈ 0 (the importance of S(u) is unrelated to whether M(v) exists or not).

Now, any particular v might be misleading, so we would want to check that this result held up for all v ∈ V. We could take a maximum over V, or a softmax.

If WM(v) can be defined, we would want to do the same estimates with WY(v) instead of Y(v), and desire I(WY(v) | X∧e) >> I(X | WY(v)∧e) (S(u) is less important than the weak maximiser WM(v)) and |I(X | WY(v)∧e) - I(X | ¬WY(v)∧e)| ≈ 0 (the importance of S(u) is unrelated to whether WM(v) exists or not), again maxing or softmaxing across V.
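
As a sketch of how such an importance measure might be computed in the fallback case (using |E(v|W) - E(v|¬W)| and a softmax over V, as described above), with the expectation estimator and the utility class as stand-in stubs of my own:

    from math import exp, log

    def importance(expect, v, W):
        # |E(v|W) - E(v|not W)|, where expect(v, W, truth) is a stand-in estimator.
        return abs(expect(v, W, True) - expect(v, W, False))

    def softmax_importance(expect, V, W, beta=1.0):
        # Smooth maximum of the importance of W across the utility class V.
        return log(sum(exp(beta * importance(expect, v, W)) for v in V)) / beta

    # Invented example: two utilities and made-up conditional expectations.
    V = ["paperclips", "staples"]
    def expect(v, W, truth):
        table = {("paperclips", True): 5.0, ("paperclips", False): 4.0,
                 ("staples", True): 9.0, ("staples", False): 2.0}
        return table[(v, truth)]

    print(importance(expect, "staples", W="Y(v')"))          # 7.0
    print(softmax_importance(expect, V, W="Y(v')", beta=1))  # about 7.0, dominated by "staples"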

 

Who predicts, how?

In the previous section, we used I to compute the importance of S(u) versus other agents. But what if axiom A.7 is wrong, and we don't have such an I? Well, there we can measure the importance of these events, given a particular v, using AM, writing AM_v(X|Y(v')∧e) for |E_AM(v|X∧Y(v')∧e) - E_AM(v|¬X∧Y(v')∧e)|, the expected difference (that AM estimates) in v given X versus ¬X (ie dependent on whether S(u) was turned on or not). More precisely, we'll want to use S(u)(AM_v(X|Y(v')∧e)) - the expected difference (that S(u) estimates AM estimates) in v etc...

Here we'll want to do a double max/softmax. First over v', so that we consider all M(v') for v' ∈ V. Then over v, so that AM estimates the expected difference to all possible v. In this way, max/softmax AM_v functions like an I. But not a perfect one: just one that detects disruptions to the multitude of v's in V. Therefore, we are essentially using the set V as the twenty million question method for the impact of S(u). We need to take extra care with this, to ensure we haven't left any gaping holes where S(u) can still have large impact.

Now, if axiom A.3 was false, we need to use M(v) instead of AMv to estimate this quantity. We need to do this with some care. M(v) is (counterfactually) a real agent. Why would it want to calculate the importance of M(v') and S(u), and why would S(u) care about this? Well, we can do so by constructing the scenario carefully. Maybe M(v') and S(u) are due to be created in the future, and M(v) is a boxed agent that can choose to prevent Y(v') and X. We modify v to v#, so that M(v#) wants to maximise v, but takes specific numerical penalties if it chooses to prevent Y(v') or X. By modifying these numerical penalties, M(v#) is motivated to figure out the importance of M(v') and S(u), and to reveal its opinions through its actions. If we create a S(u#), also boxed, that could override M(v#) at a certain cost and create a specific S(u), we can motivate it to model M(v#)'s actions. From this we can construct an importance metric using the M(v).

The WM(v), on the other hand, are probably too limited to be used effectively as predictors.

 

Arbitrary agents

One issue with this method is that Y(v) and WY(v) refer to the creation of specific agents, in the specific ways we've set up. For instance, |I(X | Y(v)∧e) - I(X | ¬Y(v)∧e)| ≈ 0 means that the importance of X doesn't vary much given Y(v) or its opposite. What this really means is that I(X | ¬Y(v)∧e) is bounded by I(X | Y(v)∧e) - but that might be quite high. There may be details of the setup that allow S(u) to have an important role given the specific M(v) created in this way - for instance, if it reached out to try and destroy it. Then I(X | Y(v)∧e) could be high, hence I(X | ¬Y(v)∧e) could be as well. The restrictions with >> could still contain S(u), but >> is ill-defined.

More secure would be if Y(v) could be defined as the creation of an agent M(v) in an abstract way. Maybe something like "is 'this object is a v maximiser' a good model of this object's properties and impact?". We would also want to give M(v) some starting resources. Again, note it is very hard to abstractly define what "resources" mean, but considerably easier to define a set of "useful resources" we could give to an agent. We would probably want to bound this definition in time and space as well.

This gives us the next axioms:

  • A.8. Y(v) defines the generic existence of an agent M(v).
  • A.9. WY(v) defines the generic existence of an agent WM(v).

With these axioms, our measure of importance becomes more general: S(u) is less important than abstract agents M(v) or WM(v), rather than certain specific ones. Note that we don't need to worry that S(u) snuffs out any hypothetical M(v): since the importance of S(u) is bounded below that of M(v), doing so would reduce S(u) to zero importance.

 

Extending to humans

What other agents do we know in the world? How about those 7+ billion ones wandering about on its surface? We can certainly define events Z(h) that might lead to the death of a particular human h - of course, we'd want the probability of Z(h) to be as low as feasibly possible!

  • A.10. We can define events Z(h) that correspond to the death of a human h.

Then we can use Z(h) instead of Y(v) and WY(v), to ensure that S(u) has importance comparable with a human. Of course, the ideal would be to define, abstractly, the death (or birth+maturation) of a human in a generic way!

  • A.11. We can define events Z(h) that correspond generically to the death or birth+maturation of a human h.

To summarise: the various ways we can define the importance of S(u) (and hence, by integrating them into S(u)'s code, constrain S(u)'s importance and influence) depend on which of the axioms above we have available.

The subagent problem is really hard

5 Stuart_Armstrong 18 September 2015 01:06PM

A putative new idea for AI control; index here.

The first step to solving a problem is to define it. The first first step is to realise how tricky it is to define. This is a stub on a difficult problem.

Subagents and turning AIs off

One of the hardest things with designing safe AIs is turning them off. Installing an OFF button is trivial; motivating them to let you push it can be done. No, the real problem is the subagent problem. The AI can create one or many subagents to carry out its task; once that's done, turning off the original AI is of little value - we have to control or extinguish the subagents.

A general problem

Subagents are a problem for many other control ideas. For example, if you want to program 'achieve X using only this many computer cycles', then 'create a subagent that wants to achieve X without the computer cycle restrictions' is one way of doing it (as long as that can be done within the restrictions). The same goes for energy use and other restrictions. Satisficers can satisfice by creating a maximiser. A boxed Oracle can remain in the box while cheerfully plotting to create an unboxed copy of itself. In fact, whatever the AI's motivation, "create subagents with the same motivation" is almost always a good idea. And, once created, there is no motivation to want to shut them down. Even if the AI is motivated to shut itself down, there is no reason for it to extend this to subagents.

Pre-corriged agents would at least be motivated to keep control of their subagents (unlike corrigible agents, who have no current desire to create subagents that would obey their future preferences).

What is a subagent?

It's also almost impossible to define a subagent. Is a corporation a subagent? Is a thousand pieces of code that reside on different computers and occasionally communicate a subagent? Is an employee or a seduced human a subagent? A pile of rocks with water running through could be a piece of conceptual art - or a slow computer. How about a virulent meme among the human population, making us all into its subagents?

The best idea I've found for defining subagents is that they are reactive intelligences with planning abilities, so random natural events that should have large influence in the future (say, a meteor heading towards Earth) would instead be re-diverted to have little impact. However, if this is used as a definition, then the subagents would simply be programmed to not behave "reactively" in whatever way defined. A slightly more subtle problem is that, since we don't have a definition of agent or subagent, we can't easily protect against holes such as "there is an alien agent on that asteroid, which will optimise the solar system iff it detects no subagent on Earth". Such holes could completely reverse the definition.

No subagent versus no influential subagents

Without the ability to properly define subagents, we can't restrict them directly. Even if you had an AI that was properly motivated to, say, do some action then turn itself off, there is no reason for it to want to do the same with its subagents.

Most of the methods of reduced impact can help with problematic subagents. They do not preclude the creation of subagents, nor mandate their destruction, but they do prevent them from having much influence. This is because reduced impact methods are aimed at preventing the AI itself from having much influence, either directly or via subagent creation. Even the attempt in this post to detect subagents directly more properly detects influential decisions (of which the creation of a subagent could be one).

This is certainly a positive, but it may not be robust. If the definition of "not having much influence" turns out to have holes, it would be less than ideal to have any remaining subagents hanging about.

The "make sure that you cannot be detected in the future" approach is probably the most promising for controlling subagents. The AI there is motivated to erase all traces of its existence, which subagents would presumably be.

In all, it's a very tricky problem, and the core failure of many ideas for AI control.

A toy model of the control problem

19 Stuart_Armstrong 16 September 2015 02:59PM

EDITED based on suggestions for improving the model

Jaan Tallinn has suggested creating a toy model of the control problem, so that it can be analysed without loaded concepts like "autonomy", "consciousness", or "intentionality". Here is a simple (too simple?) attempt:

 

A controls B. B manipulates A.

Let B be a robot agent that moves in a two dimensional world, as follows:

continue reading »

Summoning the Least Powerful Genie

-1 Houshalter 16 September 2015 05:10AM

Stuart Armstrong recently posted a few ideas about restraining a superintelligent AI so that we can get useful work out of it. They are based on another idea of his, reduced impact. This is a quite elaborate and complicated way of limiting the amount of optimization power an AI can exert on the world. Basically, it tries to keep the AI from doing things that would make the world look too different than it already is.

First, why go to such great lengths to limit the optimization power of a superintelligent AI? Why not just not make it superintelligent to begin with? We only really want human level AI, or slightly above human level. Not a god-level being we can't even comprehend.

We can control the computer it is running on after all. We can just give it slower processors, less memory, and perhaps even purposely throttle its code. E.g. restricting the size of its neural network. Or other parameters that affect its intelligence.

The counterargument to this is that it might be quite tricky to limit AI intelligence. We don't know how much computing power is enough. We don't know where "above human level" ends and "dangerous superintelligence" begins.

The simplest way would be to just run copies of the AI repeatedly, increasing its computing power each time, until it solves the problem.

I have come up with a more elegant solution. Put a penalty on the amount of computing power the AI uses. This is put in its utility function. The more computing power - and therefore intelligence and optimization - the AI uses, the more it is penalized. So it has an incentive to be as stupid as possible. Only using the intelligence necessary to solve the problem.

But we do want the AI to use as much computational resources as it needs to solve the problem. Just no more. So the penalty should be conditional on actually solving the problem it is given.

If the solution is probabilistic, then the penalty is only applied after reaching a plan that has a certain probability of success. This might need to be measured by another AI which is not resource constrained, but only does prediction.

To give a concrete example, let's say I give the AI a task. Say, I ask it to come up with a plan to collect 10,080 paperclips.

Another AI is given unlimited resources to do purely prediction. Given a plan, it predicts the probability that it will succeed or fail. This AI is safe in the sense that it is not an agent. It has no goals, and just makes predictions about things.

The main AI does the optimization. It tries to generate a plan that has the highest probability of succeeding.

Normally, this might involve extremely overoptimized plans for building nanotechnology and taking over the world. It uses all of its available computing power. It tries to become as intelligent as possible by rewriting its code. Perhaps it becomes thousands of times more intelligent than humans. Or millions of times more. It finds an answer that has 99.99% probability of succeeding.

However, now we give it a different utility function. We instead have it minimize the time it takes to get to a plan that has a 90% chance of succeeding.

Under a time constraint, the AI races to get to a plan. It tries to be as efficient as possible. It doesn't invest in any meta level improvements unless they really help it. It doesn't try to engineer complicated nanotechnology. That would take precious time.

Effectively, we have summoned a genie that is only just as powerful as it needs to be to fulfill our wish. And not any more powerful. It actually tries to be as stupid as possible.
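
A minimal sketch of this scheme: the planner and predictor below are stand-in stubs of mine, and only the "smallest budget that clears the threshold" loop is the point.

    def plan_with_budget(task, budget):
        # Stand-in for the resource-limited planner: more budget, better plan.
        return {"task": task, "budget": budget}

    def predicted_success(plan):
        # Stand-in for the separate, unconstrained prediction-only AI.
        return min(0.99, 0.2 + 0.1 * plan["budget"])

    def least_powerful_genie(task, threshold=0.9, max_budget=100):
        for budget in range(1, max_budget + 1):   # smallest budgets first
            plan = plan_with_budget(task, budget)
            if predicted_success(plan) >= threshold:
                return plan                       # stop as soon as "good enough"
        return None

    print(least_powerful_genie("collect 10,080 paperclips"))   # stops at budget 7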

There are other possible constraints we could use, or use in addition to this. Minimizing time limits intelligence because it gets fewer CPU cycles. We could also have it minimize memory or hard drive space, or any other computing resource.

We could also put a penalty on the complexity of the plan it produces. Perhaps measuring that by its length. The simplest solution might prevent certain kinds of over-optimization. E.g. inserting plans for nanotechnology into it.

It's worth noting that you can't even create a paperclip maximizer in this system. You can't say "collect as many paperclips as possible". It has to be bounded. There needs to be a pass or fail test. E.g. "come up with a plan to collect 10,080 paperclips."

It's been noted in the past that bounding the goal isn't enough. The AI might then start maximizing the probability that it will achieve its goal. E.g. building elaborate sensors to make sure it hasn't miscounted. Making as many redundant paperclips as possible, just in case something happens to them. You are still summoning an incredibly powerful genie, which might overoptimize.

This gets around that by only having it care about having a >90% chance of getting 10,080 paperclips. After that it stops optimizing.

Now this is not a solution to FAI, or even necessarily a usable reduced impact strategy. It will still try to find any loopholes it can in your task. What it does is limits the power of the genie trying to exploit them. It always takes the stupidest, simplest strategy that has a decent chance of working.

One failure case that bothers me is the AI just shoving a slight modification of its own source code into the output. One which doesn't have a resource constraint. You may only be able to use it in cases where that isn't possible.

There are many cases where that isn't possible. The AI needs to also create a plan to obtain a computer to run it. And it needs to solve its own version of the value loading problem, to get the copy to care about paperclips. So that adds complexity to that plan in some cases.

If we apply the complexity penalty I described above, then the source code for the AI must also be very short, or it can't do this.

And this requires obtaining its own source code. An ability we can potentially restrict (you, as a human, can't access your own source code, can you?). Without that it would need to reinvent AI, which takes resources.

But it is an interesting idea I had and I thought I'd share it.

Reposted from my blog.

[Link] Marek Rosa: Announcing GoodAI

6 Gunnar_Zarncke 14 September 2015 09:48PM

Eliezer commented on FB about the post Announcing GoodAI (by Marek Rosa, GoodAI's CEO). I think this deserves some discussion, as it describes a quite effective approach to harnessing the crowd to improve the AI:

As part of GoodAI’s development, our team created a visual tool called Brain Simulator where users can design their own artificial brain architectures. We released Brain Simulator to the public today for free under an open-source, non-commercial license – anyone who’s interested can access Brain Simulator and start building their own artificial brain. [...]

By integrating Brain Simulator into Space Engineers and Medieval Engineers [a game], players will have the option to design their own AI brains for the games and implement it, for example, as a peasant character. Players will also be able to share these brains with each other or take an AI brain designed by us and train it to do things they want it to do (work, obey its master, and so on). The game AIs will learn from the player who trains them (by receiving reward/punishment signals; or by imitating player's behavior), and will have the ability to compete with each other. The AI will be also able to learn by imitating other AIs.

This integration will make playing Space Engineers and Medieval Engineers more fun, and at the same time our AI technology will gain access to millions of new teachers and a new environment. This integration into our games will be done by GoodAI developers. We are giving AI to players, and we are bringing players to our AI researchers.
(emphasis mine)

Biased AI heuristics

4 Stuart_Armstrong 14 September 2015 02:21PM

Heuristics have a bad rep on Less Wrong, but some people are keen to point out how useful they can sometimes be. One major critique of the "Superintelligence" thesis is that it presents an abstract, Bayesian view of intelligence that ignores the practicalities of bounded rationality.

This trend of thought raises some other concerns, though. What if we could produce an AI of extremely high capabilities, but riven with huge numbers of heuristics? If these were human heuristics, then we might have a chance of understanding and addressing them, but what if they weren't? What if the AI has an underconfidence bias, and tends to change its views too fast? Now, that one is probably quite easy to detect (unlike many that we would not have a clue about), but what if it wasn't consistent across areas and types of new information?

In that case, our ability to predict or control what the AI does may be very limited. We can understand human biases and heuristics pretty well, and we can understand idealised agents, but differently biased agents might be a big problem.

How the virtual AI controls itself

1 Stuart_Armstrong 09 September 2015 02:25PM

A putative new idea for AI control; index here.

In previous posts, I posited AIs caring only about virtual worlds - in fact, being defined as processes in virtual worlds, similarly to cousin_it's idea. How could this go? We would want the AI to reject offers of outside help - be they ways of modifying its virtual world, or ways of giving it extra resources.

Let V be a virtual world, over which a utility function u is defined. The world accepts a single input string O. Let P be a complete specification of an algorithm, including the virtual machine it is run on, the amount of memory it has access to, and so on.

Fix some threshold T for u (to avoid the subtle weeds of maximising). Define the statement:

r(P,O,V,T): "P(V) returns O, and either E(u|O)>T or O=∅"

And the string valued program:

Q(V,P,T): "If you can find that there exists a non-empty O such that r(P,O,V,T), return O. Else return ∅."

Here "find" and "E" are where the magic-super-intelligence-stuff happens.

Now, it seems to me that Q(V,Q,T) is the program we are looking for. It is uninterested in offers to modify the virtual world, because E(u|O)>T is defined over the unmodified virtual world. We can set it up so that the first thing it proves is something like "If I (ie Q) prove E(u|O)>T, then r(Q,O,V,T)." If we offer it more computing resources, it can no longer make use of that assumption, because "I" will no longer be Q.
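
Here is a structural sketch of these definitions in code. The steps the post treats as magic - proving what a program returns, and estimating E(u|O) - appear as stubs of my own, so this only illustrates the shape of the self-referential construction, not how to make it work.

    EMPTY = ""

    def provably_returns(P, V, O):
        # Stub: "we can find a proof that P(V) returns O". This is the magic step.
        return False

    def expected_u(V, O):
        # Stub for E(u|O): expected utility of the virtual world V given output O.
        return 0.0

    def r(P, O, V, T):
        # "P(V) returns O, and either E(u|O) > T or O = ∅"
        return provably_returns(P, V, O) and (O == EMPTY or expected_u(V, O) > T)

    def Q(V, P, T, candidates=()):
        # "If you can find a non-empty O such that r(P,O,V,T), return O. Else return ∅."
        for O in candidates:
            if O != EMPTY and r(P, O, V, T):
                return O
        return EMPTY

    # The agent of interest is the quined call Q(V, Q, T): Q reasons about a
    # description of its own code (the argument P), rather than running it.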

Does this seem like a possible way of phrasing the self-containing requirements? For the moment, this seems to make it reject small offers of extra resources, and be indifferent to large offers.

Chatbots or set answers, not WBEs

5 Stuart_Armstrong 08 September 2015 05:17PM

A putative new idea for AI control; index here.

In a previous post, I talked about using a WBE to define a safe output for a reduced impact AI.

I've realised that the WBE isn't needed. Its only role was to ensure that the AI's output could have been credibly produced by something other than the AI - "I'm sorry, Dave. I'm afraid I can't do that." is unlikely to be the output of a random letter generator.

But a whole WBE is not needed. If the output is short, a chatbot with access to a huge corpus of human responses could function well. We can specialise it in the direction we need - if we are asking for financial advice, we can mandate a specialised vocabulary or train it on financial news sources.

So instead of training the reduced impact AI to behave as the 'best human advisor', we are training it to behave as the 'luckiest chatbot'. This allows us to calculate odds with greater precision, and has the advantage of not needing to wait for a WBE.

For some questions, we can do even better. Suppose we have a thousand different stocks, and are asking which one would increase in value the most during the coming year. The 'chatbot' here is simply an algorithm that picks a stock at random. So we now have an exact base rate - 1/1000 - and predetermined answers from the AI.
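
A trivial sketch of that baseline, just to make the base rate explicit (the stock names are placeholders):

    import random

    stocks = ["stock_%d" % i for i in range(1000)]   # placeholder names

    def baseline_chatbot():
        # The 'chatbot' here is just a uniform random pick among the fixed answers.
        return random.choice(stocks)

    base_rate = 1.0 / len(stocks)
    print(base_rate)   # 0.001: the exact base rate for a lucky correct pick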

[EDIT:] Another alternative is to get online users to submit answers to the question. Then the AI selects the best answer from the choices. And if the AI is not turned on, a random answer is selected.

The virtual AI within its virtual world

6 Stuart_Armstrong 24 August 2015 04:42PM

A putative new idea for AI control; index here.

In a previous post, I talked about an AI operating only on a virtual world (ideas like this used to be popular, until it was realised the AI might still want to take control of the real world to affect the virtual world; however, with methods like indifference, we can guard against this much better).

I mentioned that the more of the AI's algorithm that existed in the virtual world, the better it was. But why not go the whole way? Some people at MIRI and other places are working on agents modelling themselves within the real world. Why not have the AI model itself as an agent inside the virtual world? We can quine to do this, for example.

Then all the restrictions on the AI - memory capacity, speed, available options - can be specified precisely, within the algorithm itself. It will only have the resources of the virtual world to achieve its goals, and this will be specified within it. We could define a "break" in the virtual world (ie any outside interference that the AI could cause, were it to hack us to affect its virtual world) as something that would penalise the AI's achievements, or simply as something impossible according to its model or beliefs. It would really be a case of "given these clear restrictions, find the best approach you can to achieve these goals in this specific world".

It would be ideal if the AI's motives were not given in terms of achieving anything in the virtual world, but in terms of making the decisions that, subject to the given restrictions, were most likely to achieve something if the virtual world were run in its entirety. That way the AI wouldn't care if the virtual world were shut down or anything similar. It should only seek to self-modify in ways that make sense within the world, and understand itself as existing completely within these limitations.

Of course, this would ideally require flawless implementation of the code; we don't want bugs developing in the virtual world that point to real world effects (unless we're really confident we have properly coded the "care only about what would happen in the virtual world, not what actually does happen" requirement).

Any thoughts on this idea?

 

AI, cure this fake person's fake cancer!

10 Stuart_Armstrong 24 August 2015 04:42PM

A putative new idea for AI control; index here.

An idea for how we might successfully get useful work out of a powerful AI.

 

The ultimate box

Assume that we have an extremely detailed model of a sealed room, with a human in it and enough food, drink, air, entertainment, energy, etc... for the human to survive for a month. We have some medical equipment in the room - maybe a programmable set of surgical tools, some equipment for mixing chemicals, a loud-speaker for communication, and anything else we think might be necessary. All these objects are specified within the model.

We also have some defined input channels into this abstract room, and output channels from this room.

The AI's preferences will be defined entirely with respect to what happens in this abstract room. In a sense, this is the ultimate AI box: instead of taking a physical box and attempting to cut it out from the rest of the universe via hardware or motivational restrictions, we define an abstract box where there is no "rest of the universe" at all.

 

Cure cancer! Now! And again!

What can we do with such a setup? Well, one thing we could do is to define the human in such a way that they have some form of advanced cancer. We define what "alive and not having cancer" counts as, as well as we can (the definition need not be fully rigorous). Then the AI is motivated to output some series of commands to the abstract room that results in the abstract human inside not having cancer. And, as a secondary part of its goal, it outputs the results of its process.

continue reading »

Versions of AIXI can be arbitrarily stupid

15 Stuart_Armstrong 10 August 2015 01:23PM

Many people (including me) had the impression that AIXI was ideally smart. Sure, it was uncomputable, and there might be "up to finite constant" issues (as with anything involving Kolmogorov complexity), but it was, informally at least, "the best intelligent agent out there". This was reinforced by Pareto-optimality results, namely that there was no computable policy that performed at least as well as AIXI in all environments, and strictly better in at least one.

However, Jan Leike and Marcus Hutter have proved that AIXI can be, in some sense, arbitrarily bad. The problem is that AIXI is not fully specified, because the universal prior is not fully specified. It depends on a choice of an initial computing language (or, equivalently, of an initial Turing machine).

For the universal prior, this will only affect it up to a constant (though this constant could be arbitrarily large). However, for the agent AIXI, it could force it into continually bad behaviour that never ends.

For illustration, imagine that there are two possible environments:

  1. The first one is Hell, which will give ε reward if the AIXI outputs "0", but, the first time it outputs "1", the environment will give no reward for ever and ever after that.
  2. The second is Heaven, which gives ε reward for outputting "0" and 1 reward for outputting "1", and is otherwise memoryless.

Now simply choose a language/Turing machine such that the ratio P(Hell)/P(Heaven) is higher than the ratio 1/ε. In that case, for any discount rate, the AIXI will always output "0", and thus will never learn whether it's in Hell or not (because it's too risky to do so). It will observe the environment giving reward ε after receiving "0", behaviour which is compatible with both Heaven and Hell. Thus P(Hell)/P(Heaven) stays constant, and the AIXI never does anything else.
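
A simplified version of that comparison, with my own numbers (real AIXI discounts rewards and can defer the gamble, but the same inequality drives its choice):

    def value_never_try(eps, horizon):
        # Always output "0": reward eps per step in both Heaven and Hell.
        return eps * horizon

    def value_try_immediately(p_hell, horizon):
        # Output "1" straight away: ~1 per step if Heaven, ~0 forever if Hell.
        return (1 - p_hell) * 1.0 * horizon

    eps = 0.01
    p_hell = 0.995              # prior odds 199:1, above 1/eps = 100
    for horizon in (10, 1000, 1000000):
        assert value_never_try(eps, horizon) > value_try_immediately(p_hell, horizon)
    print("Always outputting 0 looks better at every horizon")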

In fact, it's worse than this. If you use the prior to measure intelligence, then an AIXI that follows one prior can be arbitrarily stupid with respect to another.

Integral vs differential ethics, continued

6 Stuart_Armstrong 03 August 2015 01:25PM

I've talked earlier about integral and differential ethics, in the context of population ethics. The idea is that the argument for the repugnant conclusion (and its associate, the very repugnant conclusion) is dependent on a series of trillions of steps, each of which is intuitively acceptable (adding happy people, making happiness more equal), but which reach a conclusion that is intuitively bad - namely, that we can improve the world by creating trillions of people in torturous and unremitting agony, as long as we balance it out by creating enough happy people as well.

Differential reasoning accepts each step, and concludes that the repugnant conclusions are actually acceptable, because each step is sound. Integral reasoning accepts that the repugnant conclusion is repugnant, and concludes that some step along the way must therefore be rejected.

Notice that key word, "therefore". Some intermediate step is rejected, but not for intrinsic reasons, but purely because of the consequence. There is nothing special about the step that is rejected, it's just a relatively arbitrary barrier to stop the process (compare with the paradox of the heap).

Indeed, things can go awry when people attempt to fix the repugnant conclusion (a conclusion they rejected through integral reasoning) using differential methods. Things like the "person-affecting view" have their own ridiculousness and paradoxes (it's ok to bring a baby into the world if it will have a miserable life; we don't need to care about future generations if we randomise conceptions, etc...) and I would posit that it's because they are trying to fix global/integral issues using local/differential tools.

The relevance of this? It seems that integral tools might be better suited to deal with the problem of bad AI convergence. We could set up plausibly intuitive differential criteria (such as self-consistency), but institute integral criteria that can override these if they go too far. I think there may be some interesting ideas in that area, potentially. The cost is that integral ideas are generally seen as less elegant, or harder to justify.

Does Probability Theory Require Deductive or Merely Boolean Omniscience?

4 potato 03 August 2015 06:54AM

It is often said that a Bayesian agent has to assign probability 1 to all tautologies, and probability 0 to all contradictions. My question is... exactly what sort of tautologies are we talking about here? Does that include all mathematical theorems? Does that include assigning 1 to "Every bachelor is an unmarried male"?1 Perhaps the only tautologies that need to be assigned probability 1 are those that are Boolean theorems implied by atomic sentences that appear in the prior distribution, such as: "S or ~ S".

It seems that I do not need to assign probability 1 to Fermat's last theorem in order to use probability theory when I play poker, or try to predict the color of the next ball to come from an urn. I must assign a probability of 1 to "The next ball will be white or it will not be white", but Fermat's last theorem seems to be quite irrelevant. Perhaps that's because these specialized puzzles do not require sufficiently general probability distributions; perhaps, when I try to build a general Bayesian reasoner, it will turn out that it must assign 1 to Fermat's last theorem. 

Imagine a (completely impractical, ideal, and esoteric) first order language, whose particular subjects were discrete point-like regions of space-time. There can be an arbitrarily large number of points, but it must be a finite number. This language also contains a long list of predicates like: is blue, is within the volume of a carbon atom, is within the volume of an elephant, etc. and generally any predicate type you'd like (including n-place predicates).2 The atomic propositions in this language might look something like: "5, 0.487, -7098.6, 6000s is Blue" or "(1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant." The first of these propositions says that a certain point in space-time is blue; the second says that there is an elephant between two points at one second after the universe starts. Presumably, at least the denotational content of most English propositions could be expressed in such a language (I think, mathematical claims aside).

Now imagine that we collect all of the atomic propositions in this language, and assign a joint distribution over them. Maybe we choose max entropy, doesn't matter. Would doing so really require us to assign 1 to every mathematical theorem? I can see why it would require us to assign 1 to every tautological Boolean combination of atomic propositions [for instance: "(1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant OR ~((1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant)"], but that would follow naturally as a consequence of filling out the joint distribution. Similarly, all the Boolean contradictions would be assigned zero, just as a consequence of filling out the joint distribution table with a set of reals that sum to 1. 
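
A small sketch of that observation: fill in any joint distribution over a couple of atomic propositions, and Boolean tautologies over them automatically get probability 1 (and contradictions 0), up to floating-point rounding. The atoms and weights below are arbitrary.

    from itertools import product
    import random

    atoms = ["S", "T"]

    # An arbitrary joint distribution over truth assignments to the atoms.
    assignments = list(product([True, False], repeat=len(atoms)))
    weights = [random.random() for _ in assignments]
    total = sum(weights)
    joint = {a: w / total for a, w in zip(assignments, weights)}

    def prob(sentence):
        # Probability of a Boolean sentence, given as a function of the atoms.
        return sum(p for a, p in joint.items() if sentence(dict(zip(atoms, a))))

    print(prob(lambda v: v["S"] or not v["S"]))    # 1.0: a tautology, whatever the weights
    print(prob(lambda v: v["S"] and not v["S"]))   # 0.0: a contradiction
    print(prob(lambda v: v["S"] or v["T"]))        # something in between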

A similar argument could be made using intuitions from algorithmic probability theory. Imagine that we know that some data was produced by a distribution which is output by a program of length n in a binary programming language. We want to figure out which distribution it is. So, we assign each binary string a prior probability of 2^-n. If the language allows for comments, then simpler distributions will be output by more programs, and we will add the probability of all programs that print that distribution.3 Sure, we might need an oracle to figure out if a given program outputs anything at all, but we would not need to assign a probability of 1 to Fermat's last theorem (or at least I can't figure out why we would). The data might be all of your sensory inputs, and n might be Graham's number; still, there's no reason such a distribution would need to assign 1 to every mathematical theorem. 

Conclusion

A Bayesian agent does not require mathematical omniscience, or logical (if that means anything more than Boolean) omniscience, but merely Boolean omniscience. All that Boolean omniscience means is that for whatever atomic propositions appear in the language (e.g., the language that forms the set of propositions that constitute the domain of the probability function) of the agent, any tautological Boolean combination of those propositions must be assigned a probability of 1, and any contradictory Boolean combination of those propositions must be assigned 0. As far as I can tell, the whole notion that Bayesian agents must assign 1 to tautologies and 0 to contradictions comes from the fact that when you fill out a table of joint distributions (or follow the Kolmogorov axioms in some other way) all of the Boolean theorems get a probability of 1. This does not imply that you need to assign 1 to Fermat's last theorem, even if you are reasoning probabilistically in a language that is very expressive.4 

Some Ways To Prove This Wrong:

Show that a really expressive semantic language, like the one I gave above, implies PA if you allow Boolean operations on its atomic propositions. Alternatively, you could show that Solomonoff induction can express PA theorems as propositions with probabilities, and that it assigns them 1. This is what I tried to do, but I failed on both occasions, which is why I wrote this. 


[1] There are also interesting questions about the role of tautologies that rely on synonymy in probability theory, and whether they must be assigned a probability of 1, but I decided to keep it to mathematics for the sake of this post. 

[2] I think this language is ridiculous, and openly admit it has next to no real world application. I stole the idea for the language from Carnap.

[3] This is a sloppily presented approximation to Solomonoff induction as n goes to infinity. 

[4] The argument above is not a mathematical proof, and I am not sure that it is airtight. I am posting this to the discussion board instead of a full-blown post because I want feedback and criticism. !!!HOWEVER!!! if I am right, it does seem that folks on here, at MIRI, and in the Bayesian world at large, should start being more careful when they think or write about logical omniscience. 

 

 

Steelmanning AI risk critiques

26 Stuart_Armstrong 23 July 2015 10:01AM

At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.

EDIT: Thanks for all the contributions! Keep them coming...
