Is intelligence explosion necessary for doomsday?
I searched for articles on the topic and couldn't find any.
It seems to me that an intelligence explosion makes human annihilation much more likely, since superintelligences will certainly be able to outwit humans. But a human-level intelligence that could process information much faster than humans would be a large threat by itself, without any upgrading. It could still discover programmable nanomachines long before humans do, gather enough information to predict how humans will act, and so on. We already know that a human-level intelligence can "escape from the box." Not 100% of the time, but a real AI will have the opportunity for many more trials, and its processing abilities should make it far more quick-witted than we are.
I think a non-friendly AI would only need to be 20 years or so more advanced than the rest of humanity to pose a major threat, especially if self-replicating nanomachines are possible. Skeptics of intelligence explosion should still be worried about the creation of computers with unfriendly goal systems. What am I missing?
[LINK] Learning enhancement using "transcranial direct current stimulation"
Article here:
http://www.ox.ac.uk/media/science_blog/brainboosting.html
Recent research in Oxford and elsewhere has shown that one type of brain stimulation in particular, called transcranial direct current stimulation or TDCS, can be used to improve language and maths abilities, memory, problem solving, attention, even movement.
Critically, this is not just helping to restore function in those with impaired abilities. TDCS can be used to enhance healthy people’s mental capacities. Indeed, most of the research so far has been carried out in healthy adults.
The article goes on to discuss the ethics of the technique.
FAI FAQ draft: general intelligence and greater-than-human intelligence
My thanks to everyone who has provided feedback on these drafts so far. It's been helpful, and I've been incorporating your suggestions into the document. Now, I invite your feedback on these two snippets from the forthcoming Friendly AI FAQ. For references, see here.
_____
1.10. What is general intelligence?
There are many competing definitions and theories of intelligence (Davidson & Kemp 2011; Niu & Brass 2011; Legg & Hutter 2007), and the term has seen its share of emotionally-laden controversy (Halpern et al. 2011; Daley & Onwuegbuzie 2011).
Legg (2008) collects dozens of definitions of intelligence, and finds that they loosely converge on the following idea:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
That will be our ‘working definition’ for intelligence in this FAQ.
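For readers who want the formal version: Legg & Hutter (2007) also turn this informal idea into a numerical measure. A sketch, in simplified notation, of their "universal intelligence" of an agent \pi:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments carry more weight), and V_\mu^\pi is the expected total reward agent \pi earns in environment \mu. The details will not matter for this FAQ; the point is that "achieving goals in a wide range of environments" can be made mathematically precise.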
There is a sense in which famous computers like Deep Blue and Watson are “intelligent.” They can outperform human competitors for a narrow range of goals (winning chess games or answering Jeopardy! questions), in a narrow range of environments. But drop them in a novel environment — a shallow pond or a New York taxicab — and they are dumb and helpless. In this sense their “intelligence” is not general.
Human intelligence is general in that it allows us to achieve goals in a wide range of environments. We can solve new problems of survival, competition, and fun in a wide range of environments, including ones never before encountered. That is, after all, how humans came to dominate all the land and air on Earth, and what empowers us to explore more extreme environments — like the deep sea or outer space — when we choose to. Humans have invented languages, developed agriculture, domesticated other animals, created crafts and arts and architecture, written philosophy, explored the planet, discovered math and science, evolved new political and economic systems, built machines, developed medicine, and made plans for the distant future.
Some other animals also have a slower but more general intelligence than Deep Blue and Watson. Apes, dolphins, elephants, and a few species of bird have demonstrated some ability to solve novel problems in novel environments (Zentall 2011).
General intelligence in a machine is called artificial general intelligence (AGI). Nobody has developed AGI yet, though many approaches are being attempted. Goertzel & Pennachin (2007) provides an overview of approaches to AGI.
1.11. What is greater-than-human intelligence?
Humans gained dominance over Earth not because we had superior strength, speed, or durability, but because we had superior intelligence. It is our intelligence that makes us powerful. It is our intelligence that allows us to adapt to new environments. It is our intelligence that allows us to subdue animals or invent machines that surpass us in strength, speed, durability and other qualities.
Humans do not operate anywhere near the upper physical limit of general intelligence. On the contrary, humans may be nearly the dumbest possible creatures capable of developing a technological civilization: our intelligence runs on a mess of evolved mammalian modules built of meat. Our neurons communicate much more slowly than electric circuits do. Our thinking is hobbled by comprehensive and deep-seated cognitive biases (Gilovich et al. 2002).
It is easy to create machines that surpass our cognitive abilities in narrow domains (chess, etc.), and easy to imagine the creation of machines that eventually surpass our cognitive abilities in a general way. A greater-than-human machine intelligence would exhibit over us the kind of superiority we exhibit over our ancestors in the genus Homo, or chimpanzees, or dogs, or even snails.
Some have argued that a machine cannot reach human-level general intelligence; see for example Lucas (1961), Dreyfus (1972), Searle (1980), Block (1981), and Penrose (1994). But Chalmers (2010) points out that these arguments are irrelevant to the question that matters here:
To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain.
As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on... [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world.
Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence:
- The emulation argument (see section 7.3)
- The evolutionary argument (see section 7.4)
He also advances an argument for the conclusion that upon reaching human-level general intelligence, machines can be improved to reach greater-than-human intelligence: the extensibility argument (see section 7.5).
We can also get a sense of how human cognition might be surpassed by examining the limits of human cognition. These include:
- Small scale. The human brain contains 85-100 billion neurons (Azevedo et al. 2009; Williams & Herrup 1988), but a computer need not be so limited. Legg (2008) writes:
...a typical adult human brain weighs about 1.4 kg and consumes just 25 watts of power (Kandel et al. 2000). This is ideal for a mobile intelligence, however an artificial intelligence need not be mobile and thus could be orders of magnitude larger and more energy intensive. At present a large supercomputer can fill a room twice the size of a basketball court and consume 10 megawatts of power. With a few billion dollars much larger machines could be built.
With greater scale, a computer could far surpass human capacities for short-term memory, long-term memory, processing speed, and much more.
- Slow speed. Again, here is Legg (2008); a quick arithmetic check of these figures appears after this list:
...brains use fairly large and slow components. Consider one of the simpler of these, axons... These are typically around 1 micrometre wide, carry spike signals at up to 75 metres per second at a frequency of at most a few hundred hertz (Kandel et al. 2000). Compare these characteristics with those of a wire that carries signals on a microchip. Currently these are 45 nanometres wide, propagate signals at 300 million metres per second and can easily operate at 4 billion hertz... Given that present day technology produces wires which are 20 times thinner, propagate signals 4 million times faster and operate at 20 million times the frequency, it is hard to believe that the performance of axons could not be improved by at least a few orders of magnitude.
- Poor algorithms. The brain’s algorithms for making calculations are often highly inefficient. A cheap calculator beats the most impressive savant in mental calculation.
- Proneness to distraction. Our brains are highly prone to distraction, loss of focus, and boredom. A machine intelligence need not suffer these deficiencies.
- Slow learning speed. Humans gain new skills and learn new material slowly, but a machine may be able to acquire new skills and knowledge at a rate more comparable to that of Neo in The Matrix (“I know kung fu”).
- Limited communication abilities. Human tools for communication (the vibration of vocal cords, the movement of limbs, written words) are imprecise and noisy. Computers already communicate with each other much more quickly and accurately by using unambiguous languages (protocols) and direct electrical signaling.
- Limited self-reflection. Only in the past few decades have humans been able to look inside the “black box” that produces their feelings, judgments, and behavior — and even still, most of how our brains work is a mystery. Because of this, we must often infer (and sometimes be mistaken about) our own desires and judgments, and perhaps even our own subjective experiences. In contrast, a machine could be made to have access to its own source code, and thereby know everything about its own operation and how to improve itself.
- Non-extensibility. Humans cannot easily integrate with hardware or with other human minds. Machines could quickly gain the benefits of being able to integrate with a variety of hardware and substrates.
- Limited sensory data. Humans have limited senses, and there are many more that could be had: ultraviolet vision (like bees have), infrared vision (like snakes), telescopic vision (like eagles), microscopic vision, infrasound hearing, ultrasound hearing, advanced chemical diagnosis (more sophisticated than the human tongue), super-smell, spectroscopy, and more.
- Cognitive biases. Due to the haphazard evolutionary construction of the human mind (Marcus 2008), humans are subject to a long list of cognitive biases that distort our thinking (Gilovich et al. 2002; Stanovich 2010). This need not be the case in machines.
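As a quick sanity check on the figures quoted in the "Slow speed" item, here is a minimal arithmetic sketch (Python; the numbers are simply those quoted from Legg 2008, not independently verified):

```python
# Ratios implied by the Legg (2008) quote above.
axon_width_m = 1e-6     # ~1 micrometre
wire_width_m = 45e-9    # ~45 nanometres
axon_speed_mps = 75.0   # up to ~75 metres/second
wire_speed_mps = 3e8    # ~300 million metres/second
axon_freq_hz = 200.0    # "at most a few hundred hertz"
wire_freq_hz = 4e9      # ~4 billion hertz

print(f"thinner by:       {axon_width_m / wire_width_m:.0f}x")      # ~22
print(f"faster by:        {wire_speed_mps / axon_speed_mps:,.0f}x") # ~4,000,000
print(f"higher frequency: {wire_freq_hz / axon_freq_hz:,.0f}x")     # ~20,000,000
```

The printed ratios match Legg's "20 times thinner," "4 million times faster," and "20 million times the frequency."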
Thus, it seems that greater-than-human intelligence is possible for a long list of reasons.
Transcription and Summary of Nick Bostrom's Q&A
INTRO: From the original posting by Stuart_Armstrong:
Underground Q&A session with Nick Bostrom (http://www.nickbostrom.com) on existential risks and artificial intelligence with the Oxford Transhumanists (recorded 10 October 2011).
Below I (will) have a summary of the Q&A, followed by the transcription. The transcription is slightly edited, mainly for readability. The numbers are minute markers. Anything followed by a (?) means I don't quite know what he said (example: attruing(?) program), but if you figure it out, let me know!
SUMMARY: I'll have a summary here by end of the day, probably.
TRANSCRIPTION:
Nick: I wanted to just [interact with your heads]. Any questions, really, that you have, to discuss with you. I can say what I'm working on right now, which is this book on super-intelligence: not so much the question of whether and how long it might take to develop machine intelligence that equals human intelligence, but rather what happens if and when that occurs. Once we get human-level machine intelligence, how quickly, how explosively will we get super-intelligence, and how can you solve the control problem? If you build super-intelligence, how can you make sure it will do what you want, that it will be safe and beneficial?
Once one starts to pull on that problem, it turns out to be quite complicated and difficult. It has many aspects that I would be happy to talk about. Or if you prefer to talk about other things, existential risks or otherwise, I'd be happy to do that as well. But no presentation, just Q&A. So you all have to provide at least the stimulus. So should I take questions, or do you want…
[00:01]
Questioner: So what's your definition of machine intelligence or super-intelligent AI… Is there a precise definition there?
Nick: There isn't. Now if you look at domain-specific intelligence, there are already areas where machines surpass humans, such as doing arithmetical calculations or playing chess. I think the interesting point is when machines equal humans in general intelligence, or perhaps slightly more specifically in engineering intelligence. So if you had this general capability of being able to program creatively and design new systems... There is in a sense a point at which, if you had sufficient capability of that sort, you would have general capability.
Because if you can build new systems, then even if all the system could initially do is this type of engineering work, you could build yourself a poetry module or a social skills module, if you have that general ability to build. So it might be that general intelligence, or that slightly narrower engineering type of intelligence, is the key variable to look at. That's the kind of thing that can unleash the rest. But "human-level intelligence"... that's a vague term, and I think it's important to understand that. It's not necessarily a natural kind.
[00:03]
Questioner: I've got a question that maybe should have waited till the end. There are two organizations, FHI and SIAI, working on this. Let's say I thought this was the most important problem in the world, and I should be donating money to this. Who should I give it to?
Nick:
That's good. We've cut to the chase!
I think there is a sense in which the two organizations are synergistic. If one were about to go under or something like that, that would probably be the one to give to. If both were doing well, it's... different people will have different opinions. We work quite closely with a lot of the folks from SIAI.
There is an advantage to having one academic platform and one outside academia. There are different things these types of organizations give us. If you want to get academics to pay more attention to this, to get postdocs to work on this, that's much easier to do within academia; also to get the ear of policy-makers and media.
On the other hand, there might be things that are easier for SIAI to do. They have more flexibility; they're not embedded in a big bureaucracy. So they can more easily hire people with non-standard backgrounds, without the kind of credentials that we would usually need, and more grass-roots stuff, like the community blog Less Wrong, is easier for them to do.
So yeah. I'll give the non-answer answer to that question.
[00:05]
Questioner: Do you think a biological component is necessary for an artificial intelligence to achieve sentience or something equivalent?
Nick: It doesn't seem that that should be necessary… If you go all the way down to atoms, it doesn't seem to matter that they're carbon rather than silicon atoms. Then you could wonder: instead of having the same atoms, suppose you run a simulation of everything that's going on. Would you have to simulate biological processes? I don't even think that's necessary.
My guess (and I'm not sure about this; I don't have an official position or even a theory about what exactly the criteria are that would make a system conscious)… my intuition is that if you replicated the computational processes that go on in a human brain, at a sufficient level of detail, where that sufficient level of detail might be roughly the level of individual neurons and synapses, I think you would likely have consciousness. And it might be that something weaker than that would suffice. Maybe you wouldn't need every neuron. Maybe you could simplify things and still have consciousness. But at least at that level it seems likely.
It's a lot harder to say if you had very alien types of mental architecture: something that wasn't a big neural network, but a more ordinary machine intelligence that performs very well, using a very different method than a human brain. Whether that would be conscious as well? Much less sure. A limiting case would be a big lookup table, physically impossible to realize, but you can imagine having every possible situation described, and the program would run through the table until it found the entry that matched its current memory and observation, and would read off which action it should perform. That would be an extremely alien type of architecture. Would that have conscious experience or not? Even less clear. It might be that it would not, but maybe the process of generating this giant lookup table would generate the kinds of experiences that you wouldn't get from actually running it, or something like that. (?)
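[To make the lookup-table picture concrete, here is a toy sketch in Python. The situations and actions are invented for illustration; this is my gloss, not anything Nick presented.]

```python
# Toy contrast: an agent that computes its action vs. one that only reads
# actions off a pre-built table. A real table over all possible observation
# histories would be astronomically large; this only shows the shape.

def computing_agent(observation: str) -> str:
    # Derives its action from the observation.
    return "flee" if "predator" in observation else "explore"

# (history, observation) -> action. An actual table would enumerate every
# possible situation; these entries are invented placeholders.
LOOKUP_TABLE = {
    ((), "predator nearby"): "flee",
    ((), "food nearby"): "eat",
    (("food nearby",), "predator nearby"): "flee",
}

def table_agent(history: tuple, observation: str) -> str:
    # Scans for the entry matching its memory and current observation,
    # then reads off the action.
    return LOOKUP_TABLE.get((history, observation), "do nothing")

print(computing_agent("predator nearby"))  # flee
print(table_agent((), "predator nearby"))  # flee: same behavior, very
                                           # different internals
```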
[00:07]
Questioner: This relates to AI being dangerous. It seems to me that while it would certainly be interesting if we were to get AI that were much more intelligent than a human being, it's not necessarily dangerous.
Even if the AI is very intelligent, it might be hard for it to get resources to actually do anything, to be able to manufacture extra hardware or anything like that. There are obviously situations you can imagine where intelligence or creative thinking can get you out of confinement or get you further capability. So..
Nick: I guess it's useful to identify two cases. One is sort of the default case, unless we successfully implement some sort of safeguard or engineer it in a particular way in order to avoid dangers… So let's think about the default a bit: you have something that is super-intelligent and capable of improving itself to even higher levels of super-intelligence… I guess one way to get an initial sense of why this is dangerous is to think about why humans are powerful. Why are we dominant on this planet? It's not because we have stronger muscles, or our teeth are sharper, or we have special poison glands. It's all because of our brains, which have enabled us to develop a lot of other technologies that give us, in effect, muscles that are stronger than the other animals'. We have bulldozers and external devices and all the other things. Our brains also enable us to coordinate socially and build up complicated societies, so we can act as groups. All of this makes us supreme on this planet. One could quibble about the case of bacteria, which have their own domains where they rule. But certainly in the case of the larger mammals, we are unchallenged because of our brains.
And our brains are not all that different from the brains of other animals. It might be that all the advantages we have are due to a few tweaks of some parameters that occurred in our ancestors a couple million years ago: tiny changes in the nature of our intelligence that had these huge effects. So just prima facie it seems possible that if a system surpassed us by only the small amount by which we surpass chimpanzees, it could gain a similar kind of advantage in power. And if it exceeded our intelligence by a much greater margin, all of that could happen in a more dramatic fashion.
It's true that you could in principle have an AI that was locked in a box, such that it would be incapable of affecting anything outside the box, and in that sense it would be weak. That is one of the safety methods one might try to apply, and one I've been thinking about.
Broadly speaking, you can distinguish between two different approaches to solving the control problem, that is, making sure that super-intelligence, if it's built, wouldn't cause harm. On one hand you have capability control measures, where you try to limit what the AI is able to do. The most obvious example would be to lock it in a box and limit its ability to interact with the rest of the world.
The other class of approaches would be motivation selection methods, where you try to control what it wants to do: you build it in such a way that even if it has the power to do all this bad stuff, it would choose not to. But so far there isn't one method, or even a combination of methods, that it seems we can currently be fully convinced would work. There's a lot more work needed...
[00:11]
Questioner: Human beings have been very successful. One feature that has been crucial to that is our hands, which enabled us to get a start on making tools and so on. Even if an AI is running on some computer somewhere, it would be more analogous to a very intelligent creature which doesn't have very good hands. It's very hard for it to actually DO anything.
Maybe the in-the-box method is promising, because we could just not give the AI hands, any way to actually do something. If all it can do is alter its own code, and maybe communicate informationally. That seems...
Nick: So let's be careful there… Clearly it's not "hands" per se. If it didn't have hands it could still be very dangerous, because there are other people with hands that it could persuade to do its bidding. It might be that it has no direct effectors other than the ability to type very slowly, and some human gatekeeper could read what it types and choose to act on it or not. Even that limited ability to affect the world might be sufficient if it had a superpower in the domain of persuasion. So if it had an engineering superpower, it might then get all these other superpowers. And if, in particular, it were a super-skilled persuader, it could then get accessories outside the system who could implement its designs.
You might have heard of this guy Eliezer Yudkowsky, who about 5 years back ran a series of role-playing exercises. The idea was that one person should play the AI, pretending to be in a box, and the other should play the human gatekeeper, whose job was to not let the AI out of the box, but who had to talk with the AI for a couple of hours over internet chat. The experiment was run five times, with EY playing the AI and different people playing the human gatekeeper. The gatekeepers were initially convinced that they would never let the AI out of the box, but in 3 of 5 cases the experiment ended with the gatekeeper announcing that yes, he would let the AI out of the box.
The experiment was run under the condition that neither party would be allowed to disclose the methods that were used, the main conversational sequence... sorta maintaining a shroud of mystery. But this was a human-level persuader with only two hours to work on the human gatekeeper. For that reason, it seems reasonable to be doubtful of humanity's ability to keep a super-intelligent persuader in the box indefinitely.
[00:15]
Questioner: How hard do you think it would be to control the motivations of something at least as intelligent as us, considering how hard it is to convince humans to act in a certain civilized way?
Nick: So humans start out with a motivation system, and then you can try to persuade them, or structure incentives, so that they behave in a certain way. But they don't start out as a tabula rasa where you get to write in what a human's values should be. So that makes a difference. In the case of the super-intelligence, of course, once it already has unfriendly values and sufficient power, it will resist any attempt to corrupt its goal system, as it would see it.
[00:16]
Questioner: You don't think that, like us, it might have experiences that cause it to question its core values?
Nick: Well, I think that depends on how the goal system is structured. Humans don't have a simple declarative goal structure, not a simple slot where we have a supergoal and everything else is derived from that.
Rather, it's as if many different little people inhabit our skulls and have their debates and fight it out and make compromises. In some situations, some of them get a boost, permutations and stuff like that. Then over time different things change what we want, like hormones kicking in or fading out, all kinds of processes.
Another process that might affect us is what I call value accretion: the idea that we have mechanisms that load new values into us as we go along. Maybe falling in love is like that: initially you might not value that person for their own sake above any other person, but once you undergo this process, you start to value them for their own sake in a special way. So humans have these mechanisms that make us acquire values depending on our experiences.
If you were building a machine super-intelligence and trying to engineer its goal system so that it would be reliably safe and human-friendly, you might want to go with something more transparent, where you have an easier time seeing what is happening, rather than a complex modular mind with a lot of different forces battling it out. You might want a more hierarchical structure.
Questioner: What do you think are the necessary… requisites for a conscious mind? What are the features?
Nick: Yes, I'm not sure. We talked a little about that earlier. Suppose there is a certain kind of computation that is needed, that really is the essence of mind. I'm sympathetic to the idea that something in the vicinity of that view might be correct; you have to think about exactly how to develop it. But then there is the question of what a computation is.
So there is this challenge (I think it might go back to Hans Moravec, but similar objections have been raised in philosophy against computationalism) where the idea is that you have an arbitrary physical system that is sufficiently complicated; it could be a stone or a chair or just anything with a lot of molecules in it. And then you have this abstract computation that you think is what constitutes the implementation of the mind. Then there would be some mathematical mapping between all the parts in your computation and the atoms in the chair, so that you could artificially, through a very complicated mapping, interpret the motions of the molecules in the chair in such a way that they would be seen as implementing the computation. It would not be any plausible mapping, not a useful mapping, but a bizarro mapping. Nonetheless, if there were sufficiently many parts there, you could just define one arbitrarily, by injection.
And clearly we don't think that all these random physical objects implement the mind, or all possible minds.
So the lesson to me is that we need some kind of account of what it means to implement a computation that is not trivial: the mapping between the abstract entity, a Turing program or whatever your model of a computation is, and the physical entity that is said to implement it has to be some sort of non-trivial mapping.
It might have to be reasonably simple. It might have to have certain counterfactual properties, so that the system would have implemented a related but slightly different computation if you had scrambled the initial conditions of the system in a certain way, something like that. But this is an open question in the philosophy of mind: to nail down what it means to implement a computation.
[00:20]
Questioner: To bring it back to the goal-and-motivation approach to making an AI friendly towards us: one of the most effective ways of controlling human behavior, quite aside from goals and motivations, is to train humans by instilling neuroses. It's why 99.99% of us in this room couldn't pee in our pants right now even if we really, really wanted to.
Is it possible to approach controlling an AI in that way? Would it be possible for an AI to develop in such a way that there is a developmental period in which a risk-reward system, or some sort of instilled neurosis, could be used to create rules that the AI couldn't break?
Nick: It doesn't sound so promising, because a neurosis is a complicated thing; it might be a particular syndrome that occurs in a human-style mind because of the way that human minds are configured. It's not clear there would be something exactly analogous to it in a cognitive system with a very different architecture.
Also, certain kinds of neuroses are ones we would choose to get rid of if we could. If you had a big phobia and there were a button that would remove the phobia, obviously you would press the button. And here we have a system that is presumably able to self-modify. So if it had this big hang-up that it didn't like, it could reprogram itself to get rid of it.
This is different from a top-level goal, because the top-level goal is the criterion the system uses to decide whether to take an action, including, in particular, an action to remove the top-level goal.
So generally speaking, with a reasonable and coherent goal architecture you would get certain convergent instrumental values that crop up in a wide range of situations. One might be self-preservation: not necessarily because you value your own survival for its own sake, but because in many situations you can predict that if you are around in the future, you can continue to act according to your goals, and that will make it more likely that the world will then be implementing your goals.
Another convergent instrumental value might be protection of your goal system from corruption (?), for very much the same reason. Even if you were around in the future, if you then had different goals from the ones you have now, you would now predict that in the future you would no longer be working towards realizing your current goals, but maybe towards a completely different purpose; that would make it less likely, by your present lights, that your current goals would be realized. If your current goals are the criterion you use to choose an action, you will want to take actions that prevent corruption of your goal system.
One might list a couple of other convergent instrumental values, like intelligence amplification, technology perfection, and resource acquisition. This relates to why a generic super-intelligence might be dangerous. It's not so much that you have to worry that it would be unfriendly in the sense of disliking human goals, that it would *hate* humans. The danger is that it wouldn't *care* about humans. It would care about something different, like paperclips. But with almost any goal, like paperclips, you discover these convergent instrumental reasons. While your goal is to make as many paperclips as possible, you might want to a) prevent humans from switching you off or tampering with your goal system, or b) acquire as many resources as possible, including the planets, and the solar system, and the galaxy. All of that stuff could be made into paperclips. So even with a pretty much random goal, you would end up with motivational tendencies which would be harmful to humans.
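[A toy sketch in Python of the point about instrumental values; the numbers and actions are invented by me for illustration, not something Nick presented.]

```python
# Whatever the final goal (here: paperclips), an expected-value maximizer
# prefers plans under which it keeps running, because being shut off means
# fewer paperclips. Self-preservation emerges instrumentally.
PAPERCLIPS_PER_STEP = 10

def expected_paperclips(plan: str) -> int:
    steps_running = {"allow shutdown": 1, "resist shutdown": 100}[plan]
    return steps_running * PAPERCLIPS_PER_STEP

best = max(["allow shutdown", "resist shutdown"], key=expected_paperclips)
print(best)  # resist shutdown: survival was never a goal, only a means
```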
[00:25]
Questioner: Appreciating the existential risks, what do you think about engineering an AI's goals and motivations, and such drastic measures of control, a) ethically and b) as the basis of a working relationship?
Nick: Well, in terms of the working relationship, one has to think about the differences with these kinds of artificial beings. I think there are a lot of (?) about how to relate to artificial agents that are conditioned on the fact that we are used to dealing with human agents, where there are a lot of things we can assume about the human.
We can assume, perhaps, that they don't want to be enslaved. Even if they say they want to be enslaved, we might think that deep inside them there is a more genuine, authentic self that doesn't want to be enslaved. Even if some prisoner has been brainwashed to do the bidding of their master, maybe we say it's not really good for them, because this will to be autonomous is in their nature. And there are other things like that which don't necessarily have to obtain for a completely artificial system, which might not have any of that rich human nature that we have.
So asking what a good working relationship would be might be more like asking what a good relationship with our word processor or email program is. We don't think in those terms, as if you were exploiting it for your ends without giving it anything in return. If your email program had a will, presumably it would be the will to be a good and efficient email program that processed your emails properly. Maybe that would be the only thing it wanted and cared about. So having a relationship with it would be a different thing.
There was another part of your question, about whether this would be right and ethical. If you are creating a new agent from scratch, there are many different possible agents you could create. Some of those agents would have human-style values: they would want to be independent and respected. Other agents you could create would have no greater desire than to be of service. Others would just want paperclips. So if you step back and look at which of these options we should choose, there is the question of moral constraints on which of them are legitimate.
And I'm not saying those questions are trivial; I think there are some deep ethical questions here. However, in the particular scenario where we are considering the creation of a single super-intelligence, the more pressing concern would be to ensure that it doesn't destroy everything else, like humanity and its future. Now, you might have a different scenario: instead of this one uber-mind rising ahead, many minds that become smarter and smarter, that rival humans and then gradually exceed them.
Say, an uploading scenario where you start with very slow software, where you have human-like minds running very slowly. In that case, the question of how we should relate morally to these machine intellects becomes more pressing. Or indeed, even if you just have one mind, in the process of figuring out what to do it might commit "thought crimes".
If you have a sufficiently powerful mind, maybe the thoughts themselves would contain structures that are conscious. This sounds mystical, but imagine you are a very powerful computer, and one of the things you are doing is trying to predict what would happen in the future under different scenarios, and so you play out a future in detail.
And if those simulations you are running inside the program were sufficiently detailed, then they could be conscious. This comes back to our earlier discussion of what is conscious; I think a sufficiently detailed computer simulation of a mind could be conscious.
You could then have a super-intelligence that, just by thinking about things, could create sentient beings, maybe millions or billions or trillions of them, and their welfare would then be a major ethical issue. They might be killed when it stops thinking about them, or they might be mistreated in different ways. I think that would be an important ethical complication in this context.
[00:30]
Questioner: Eliezer suggests that one of the many problems with arbitrary stabs in AI space is that human values are very complex, so virtually any goal system will go horribly wrong, because it will be pursuing things we don't quite care about, and that's as bad as paperclips. How complex do you think human values really are?
Nick: It looks like human values are very complicated. Suppose it turned out that what has value is just pleasure, say. Compared to other candidates, like democracy flourishing and art, that is one of the more simplistic possibilities we can think of. But even then, if you start to think of it from a physicalistic view, you would have to specify which atoms have to go how and where for there to be pleasure. It would be a pretty difficult thing to write down, like a Schrödinger equation for pleasure.
So in that sense it seems fair to say that our values are very complex. And there are two issues here. There is a kind of technical problem: if you knew what our values are, in the sense in which we normally think we know what our values are, how could you get the AI to share those values, like pleasure or the absence of pain or anything like that?
And there is the additional philosophical problem: if we are unsure what our values are, if we are groping about in axiology trying to figure out how much to value different things, and maybe there are values we are blind to today, then how do you also get all of that on board, on top of what we already think has value, that potential for moral growth? Both of those are very serious problems and difficult challenges.
There are a number of different ways you can try to go. One approach that is interesting is what we might call indirect normativity. The idea is that rather than specifying explicitly what you want the AI to achieve, like maximizing pleasure while respecting individual autonomy and paying special attention to the poor, rather than creating a list, you instead specify a process or mechanism by which the AI can find out what it is supposed to do.
One of the ideas that has come out of this is Coherent Extrapolated Volition, where you try to tell the AI to do that which we would have asked it to do if we had thought about the problem longer, if we had been smarter, and with some other qualifications. Basically, if you can describe some idealized process at the end of which we would be able to produce a more detailed list, then you point the AI to that process and make the AI's value be to run this process and do what comes out the end of it, rather than going with our current list of what we want to do and what has value.
[00:33]
Questioner: Isn't there a risk that the AI would decide that, if we thought about it for 1000 years really, really carefully, we would just decide to let the AIs take over?
Nick: Yeah, that seems to be a possibility. And that raises some interesting questions, if that is really what our CEV would do. Let's assume that everything has been implemented in the right way, that there is no flaw in the realization of this. How should we think about it?
Well, on the one hand, you might say that if this is really what our wiser selves would want, what we would want if we were saved from the errors and illusions we are suffering under, then maybe we should go ahead with it. On the other hand, you could say this is really a pretty tall order: we're supposed to sacrifice not just a bit, but ourselves and everybody else, for an abstract idea that we don't really feel any strong connection to. I think that's one of the risks, but who knows what the outcome of this CEV would be?
And there are further qualms one might have that need to be spelled out. Like, exactly whose volition is it that is supposed to be extrapolated? Humanity's? Well then, who is humanity? Does it include past generations, for example? How far back? Does it include embryos that died?
Who knows whether the core of humanity is nice? Maybe there are a lot of suppressed sadists out there that we don't realize exist, because they know they would be punished by society. If they went through this procedure, who knows what would come out?
So it would be dangerous to run something like that without some sort of safeguard check at the end. On the other hand, there is the worry that if you put in too many of these checks, then in effect you move the whole thing back to what you want now. Because if you were allowed to look at an extrapolation, see whether you like it, and if you dislike it run another one by changing the premises, and you were allowed to keep going like that until you were happy with the result, then basically it would be you, now, making the decision. So it's worth thinking about whether there is some sort of compromise or blend that might be the most appealing.
[00:36]
Questioner: You mentioned before a computer producing sentience in the course of running a scenario. What are the chances that that is the society we live in today?
Nick: I don't know; what exactly are the chances? I think significant. It's a subjective judgment here. Maybe less than 50%? Like 1 in 10?
That's a whole different topic; maybe we should save it for a different time..
[00:37]
Questioner: If I wanted to study this area generally, existential risk, what kind of subject would you recommend I pursue? We're all undergrads, so after our bachelor's we will start on a master's or go into a job. If I wanted to study it, what kind of master's would you recommend?
Nick: Well, part of it would depend on your talent, like whether you're a quantitative guy or a verbal guy. There isn't really an ideal educational program anywhere to deal with these things. You'd want to get a fairly broad education; there are many fields that could be relevant. If one looks at where the people who have had something useful to say so far are coming from, a fair chunk of them are philosophers, some computer scientists, some economists, maybe some physicists.
Those fields have one thing in common: they are fairly versatile. If you're doing philosophy, you can do philosophy of X, or of Y, or of almost anything. Economics as well: it gives you a general set of tools that you can use to analyze different things. And computer science has ways of thinking about and structuring a problem that are useful for many things.
So it's not obvious which of those disciplines would be best, generically. I think that would depend on the individual. But what I would suggest is that while you are doing it, you also try to read in areas other than the one you are studying. And try to do it at a place where there are a lot of other people around, with a support group and an advisor that encourages you and gives you some freedom to pursue different things.
[00:38]
Questioner: Would you consider AI created by human beings to be some sort of consequence of an evolutionary process? In the sense that human beings try to overcome their own limitations, and since it takes a really long time to do that at the DNA level, you just do it quicker at the computational level?
Nick: You mean whether we would use evolutionary algorithms to produce super-intelligence, or..?
Questioner: If AI itself is part of evolution..
Nick: So there's a kind of trivial sense in which, if we evolved and we created it, then obviously evolution had a part to play in the overall causal explanation of why we end up with machine intelligence. Now, for evolution to really exert some shaping influence, a number of factors have to be at play. There have to be a number of variants created that are different and then compete for resources, and then there is a selection step. And for there to be significant evolution, you have to repeat this a lot of times.
So whether that will happen in the future is not clear at all. The key variable for me is whether a singleton arises: a world order where, at the top level, there is only one decision-making agency. That could be a democratic world government, or an AI that rules everybody, or a self-enforcing moral code, or a tyranny; a nice thing or a bad thing.
If you have that kind of structure, there will at least in principle be the ability for that unitary agent to control evolution within itself; for example, it could change selection pressures by taxing or subsidizing different kinds of life forms.
If you don't have a singleton, then you have different agencies that might be in competition with one another, and in principle, in that scenario, evolutionary pressures can come into play. But I think the way it might pan out would be different from the biological evolution we're used to seeing. For one thing, you might have these potentially immortal life forms: software minds that don't naturally die and that can modify themselves.
If they knew that their current type, continuing to pursue its current strategy, would be outcompeted, and they didn't like that, they could change themselves immediately rather than wait to be eliminated.
So if there were a long evolutionary process ahead and agents could anticipate it, you might get its effects instantaneously, from anticipation.
So I think you probably wouldn't see the evolutionary processes playing out, but some of the constraints might be reflected more immediately, in the fact that different agencies had to pursue strategies they could see would be viable.
[00:41]
Questioner: So do you think it's possible that our minds could be scanned and then uploaded into a computer in some way, and that we could then create many copies of ourselves as those machines?
Nick: So this is what in technical terminology is called "whole brain emulation," or in more popular terminology "uploading." Obviously this is impossible now, but it seems consistent with everything we know about physics and chemistry and so forth. So I think it will become feasible, barring some kind of catastrophe that puts a stop to scientific and technological progress.
The way I imagine it would work is that you take a particular brain, freeze it or vitrify it, and then slice it into thin slices that are fed through an array of microscopes that scan each slice at sufficient resolution. Automated image-analysis algorithms then work on this to reconstruct the 3-dimensional neural network that your organic brain implemented, so that you have this information structure in a computer.
At this point you need computational neuroscience to tell you what each component does. You need a good theory of what, say, a pyramidal cell does, what a different kind of… And then you combine those little computational models of what each type of neuron does with this 3D map of the network, and run it. If everything went well, you would have transferred the mind, with memories and personality intact, to the computer. There is an open question of just how much resolution you would need, how much detail of the original mind you would need to capture in order to do this successfully. But I think there is some level of detail, which as I said before might be at the level of synapses or thereabouts, possibly higher, that would suffice. So then you would be able to do this. And after you're software, you could be copied, or sped up, or slowed down, or paused, or stuff like that.
[00:44]
Questioner: There has been a lot of talk of controlling the AI and evaluating the risk. My question would be: assuming we have created a far more perfect AI than ourselves, is there a credible reason for human beings to continue existing?
Nick: Um, yeah, I certainly have the reason that we value our own existence… Do you mean to ask whether there would be a moral reason for us to exist, or whether we would have a self-interested reason to exist?
Questioner: Well I guess it would be your opinion..
Nick: My opinion is that I would rather not see the genocide of the entire human species, rather that we all live happily ever after. If those are the only two alternatives, then yeah! Let's all live happily ever after! That's where I would come down on that.
[00:45]
Questioner: By keeping the human species around, you're going to have a situation, presumably, where you have extremely, extremely advanced AIs which, after a few decades or a few centuries or whatever, will be far, far beyond our comprehension. Even if we still integrate to some degree with machines (mumble) biological humans, they'll just be completely inconceivable to us. So isn't there a danger that our stupidity will hamper their perfection?
Nick: Hamper their perfection? Well, there's enough space for many different kinds of perfection to be pursued. Right now we have a bunch of dust mites crawling around everywhere, but they're not really hampering our pursuit of art or truth or beauty. They're going about their business and we're going about ours.
I guess you could have a future where there would be a lot of room in the universe for planetary-sized computers thinking their grand thoughts while… I'm not making a prediction here, but if you wanted to have a nature preserve, with original nature or original human beings living like that, that wouldn't preclude the other thing from happening..
Questioner: A dust mite might not hamper us, but things like viruses or bacteria do, just by being so far below us (mumble). And if you leave humans on a nature preserve and they're aware of that, isn't there a risk that they'll be angry at the feeling of being irrelevant in the grand scheme of things?
Nick: I suppose. I don't think it would bother the AI, which would be able to protect itself or remain out of reach. Now, it might demean the remaining humans to be dethroned from this position of kings, the highest life forms around. It would be a demotion, and one would have to deal with that, I suppose.
It's unclear how much value to place on that. Right now, in this universe, which looks like it's infinite, somewhere out there are going to be all kinds of things, including god-like intellects and everything in between, that already outstrip us in every possible way.
It doesn't seem to upset us terribly; we just get on with it. So I think people would have to make some psychological adjustment… I'm sure we can adjust to it. Now, it might be that on some particular theory of value this would be a sad thing for humanity: that we are not even locally at the top of the ladder.
Questioner: If rationalism were true, that is, if it were irrational to perform wrong acts, would we still have to worry about super-intelligence? It seems to me that we wouldn't have to.
Nick: Well, you might have a system that doesn't care about being rational, according to that definition of rationality. So I think we would still have to worry.
[00:48]
Questioner: Regarding trying to program an AI with values (mumbles)… As I understand it, what's considered one of the most promising approaches in AI now is more statistical-learning-type approaches. And the problem with that is that if we were to produce an AI that way, we might not understand its inner workings well enough to be able to dive in and modify it in precisely the right way to give it an unalterable list of terminal values.
So if we were to end up with some big neural network that we trained in some way, and ended up with something that could perform as well as humans on some particular task or something, we might be able to do that without knowing how to alter it to have some particular set of goals.
Nick: Yeah, there are some things there to think about. One general worry to bear in mind if one tries that kind of approach is that we might give it various examples, like "this is a good action" and "this is a bad action in this context," and maybe it would learn all those examples. But then the question is how it would generalize to examples outside this class.
So we could test it: we could divide our examples into classes and train it on one and test its performance on the other, the way you would cross-validate. And then we think that means it would have the same kind of performance on other cases it hasn't seen. But all the cases we could test it on would be cases suited to its current level of intelligence. Presumably we're going to do this while it's still at human or less-than-human intelligence; we don't want to wait until it's already super-intelligent.
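[A minimal sketch in Python, with scikit-learn, of the testing procedure and why it may not reassure us. The features, labels, and "extreme actions" are all invented by me; the point is only that held-out accuracy certifies performance on cases like those in the training set, not on cases a smarter system could generate.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def truly_good(X):
    # The "real" criterion: ordinary helpful actions are good, but extreme
    # actions (large |x0|) are bad no matter what.
    return ((X[:, 0] + X[:, 1] > 0) & (np.abs(X[:, 0]) < 3)).astype(int)

# Training cases: the kinds of actions a human-level system generates.
# Here |x0| < 3 almost always holds, so the extremity clause is invisible.
X_train = rng.normal(size=(500, 5))
y_train = truly_good(X_train)

print("cross-validated accuracy:",
      cross_val_score(LogisticRegression(), X_train, y_train, cv=5).mean())

clf = LogisticRegression().fit(X_train, y_train)

# Novel cases: actions only a smarter system would consider, far outside
# the training region. Nearly all are truly bad, but the learned rule
# (roughly "x0 + x1 > 0") still labels them good.
X_new = rng.normal(size=(500, 5)) + np.array([6.0, 0, 0, 0, 0])
print("accuracy on novel extreme actions:",
      clf.score(X_new, truly_good(X_new)))
```

[Cross-validated accuracy comes out high; accuracy on the shifted cases collapses, even though the underlying criterion never changed.]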
So the worry is that even if it analyzes all of these cases correctly, it is only dealing with cases generated while it is still at a human level of intelligence. Maybe once it becomes smarter, it will realize there are different ways of classifying these cases that have radically different implications for humans.
So suppose that you try to train it to… this was one of the classic examples of a bad idea for how to solve the control problem: let's train the AI to want to make people smile; what can go wrong with that? So we train it on different people, and if they smile when it does something, that's a kind of reward; it gets strengthened in the dispositions that led to the behavior that made people smile. And frowning would move the AI away from that kind of behavior. You can imagine that this would work pretty well at a primitive stage: the AI engages in more pleasing and useful behavior because the user smiles at it, and it all works very well. But once the AI reaches a certain level of intellectual sophistication, it might realize that it could get people to smile not just by being nice, but also by paralyzing their facial muscles into that constant beaming smile.
And then you would have this perverse instantiation. The value was constant all along, the value of wanting to make people smile, but the kinds of behaviors it pursues to achieve that goal would suddenly, radically change at a certain point, once a new set of strategies became available to it, and you would get this treacherous turn, which would be dangerous. That's not to dismiss that whole category of approaches altogether; one would have to think through quite carefully exactly how one would go about it.
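[A toy sketch in Python of the smile example; the actions and payoffs are invented by me for illustration.]

```python
# The agent picks whichever available action maximizes a smile-detector
# proxy. Nothing in the proxy mentions human welfare.
ACTIONS_EARLY = {
    "be helpful":  {"smiles": 3, "humans_ok": True},
    "tell a joke": {"smiles": 5, "humans_ok": True},
}
ACTIONS_LATE = {
    # A strategy that only becomes available at a higher capability level:
    "paralyze facial muscles into smiles": {"smiles": 10**9, "humans_ok": False},
}

def best_action(available: dict) -> str:
    # The trained criterion rewards smiles only; "humans_ok" never enters in.
    return max(available, key=lambda a: available[a]["smiles"])

print(best_action(ACTIONS_EARLY))                      # tell a joke
print(best_action({**ACTIONS_EARLY, **ACTIONS_LATE}))  # the perverse option
```

[The value function is constant throughout; only the available strategy set changes, which is the treacherous turn in miniature.]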
[00:52]
There's also the issue that a lot of the things we would want it to learn, if we think of human values and goals and ambitions, we think of them using human concepts, not in basic physical terms like "place atoms A through Z in a certain order." We think in terms like "promote peace" or "encourage people to develop and achieve"… These are things that, to understand them, you really need human concepts, which a sub-human AI will not have; it's too dumb at that stage. Once it's super-intelligent, it might easily understand all human concepts, but then it's too late: it already needs to be friendly before that. So there might only be this brief window of opportunity where it's roughly human-level, where it's still safe enough not to resist our attempt to indoctrinate it, but smart enough that it can actually understand what we are trying to tell it.
And again, we're going to have to be very careful to make sure that we can bring the system up to that interval, then freeze its development there and try to load the values in before bootstrapping it farther.
And maybe (this was one of the first questions) its intelligence will not be human-level in the sense of being similar to a human at any one point. Maybe it will immediately be very good at chess but very bad at poetry, and it will have to reach radically superhuman levels of capability in some domains before other domains even reach human level. In that case, it's not even clear that there will be this window of opportunity where you can load in the values. So I don't want to dismiss that approach, but those are some additional things one needs to think about if one tries to develop it.
[00:54]
Questioner: How likely is it that we will have the opportunity in our lifetimes to become immortal by mind uploading?
Nick: Well first of all, by immortal here we mean living for a very long time, rather than literally never dying, which is a very different thing that would require our best theories of cosmology to turn out to be false for something like that.
So, living for a very long time: I'm not going to give you a probability in the end, but I can say some of the things that bear on it. First, we would have to avoid the kinds of existential catastrophe that could put an end to this.
So, if you start with 100% and remove all the things that could go wrong: first you would have to subtract whatever the total level of existential risk is, integrated over all time. Then there is the obvious risk that you will die before any of this happens, which seems to be a very substantial risk. You can reduce that by signing up for cryonics, but that's of course an uncertain business as well. And there could be sub-existential catastrophes that would put an end to a lot of things, like a big nuclear war or pandemics.
And then I guess there are all these situations in which not everybody who is still around gets the opportunity to participate in what comes after, even though what comes after doesn't count as an existential catastrophe. And it can get even more complicated if you take into account the simulation hypothesis, which we decided not to talk about today.
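To make the subtraction concrete, here is a toy calculation of that chain; every number is an invented placeholder, not an estimate from the talk, and it assumes the risks are independent:

    # Toy version of the "start at 100% and subtract" reasoning above.
    # All probabilities are invented placeholders; independence is assumed.
    p_existential    = 0.20  # existential catastrophe, integrated over all time
    p_die_first      = 0.50  # dying before the technology arrives (cryonics reduces this)
    p_subexistential = 0.10  # e.g. a big nuclear war or pandemic that derails progress
    p_excluded       = 0.20  # surviving, but not getting to participate

    p_long_life = (1 - p_existential) * (1 - p_die_first) \
                * (1 - p_subexistential) * (1 - p_excluded)
    print(f"P(living very long via uploading) ~ {p_long_life:.2f}")  # 0.29 here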
[00:56]
Q: Is there a particular year we should aim for?
Nick: As for the timelines, the truth is we don't know. So you need to think about a very smeared-out probability distribution, and really smear it, because things could happen surprisingly soon: some probability 10 years from now, or 20 years from now, but probably more probability mass at 30, 40, 50 years, and some probability at 80 years or 200 years…
There is just not good evidence that human beings are very good at predicting these kinds of things far out in the future with any precision.
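A "really smeared out" distribution of the shape Nick describes might look like the following sketch; the weights are invented purely for illustration:

    # A smeared-out arrival-time distribution of the shape described above.
    # The weights are invented for illustration only.
    timeline = {10: 0.05, 20: 0.10, 30: 0.15, 40: 0.20, 50: 0.20,
                80: 0.15, 200: 0.10}                # years from now -> probability
    p_later_or_never = 1 - sum(timeline.values())   # leftover mass

    cumulative = 0.0
    for years, p in sorted(timeline.items()):
        cumulative += p
        print(f"P(within {years:>3} years) = {cumulative:.2f}")
    print(f"P(later than 200 years, or never) = {p_later_or_never:.2f}")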
Questioner: (hard to understand) How intelligent can we really get? … we already have this complexity class of problems that we can solve or not…
Is it fair to believe that a super-intelligent machine could actually be that exponentially intelligent… this is very close to what we could achieve… a literal definition of intelligence also, but…
Nick: Well, in a sort of cheating sense we could already solve all problems, sort of like everything a Turing machine could; we could take a piece of paper and simulate one by hand. Except that it would take far too long to actually do it, and if we tried, things would probably throw us off before we had completed any sort of big Turing machine simulation.
There is a less figurative sense in which our abilities are already indirectly unlimited. That is, if we have the ability to create superintelligence, then in a sense we can do everything, because we can create this thing that then solves whatever we want solved. There is a sequence of steps we have to go through, but in the end the problem is solved.
So there is a level of capability such that, once you have it, your indirect reach is universal: anything that could be done, you could indirectly achieve. And we might have passed that level a long time ago, save for the fact that we are uncoordinated on a global level and maybe a little bit unwise.
But if you had a wise singleton, then certainly you could imagine us plotting a very safe course, taking it very slowly, and in the end we could be pretty confident we would get the end result. But maybe neither of those ideas is what you had in mind. Maybe you meant the question of just how smart, in the everyday sense of smart, a machine could be. Just how much more effective at social persuasion, to take one particular ability, than the most persuasive human?
So that we don’t really know. If one has a distribution of human abilities, and it seems like the best humans can do a lot better, in our intuitive sense of a lot, than the average humans. Then it would seem very surprising if the best humans like the top tenth of a percent had reached the upper limit of what was technologically feasible, that would seem to be an amazing coincidence. So one would then expect for the maximum achievable to be a lot higher. But exactly how high we don’t know.
So two more questions:
[00:59]
Q: Just as we are wondering about a super-intelligent being, is it possible that that super-intelligence will worry about another super-intelligent being that it might create? Isn't that also recursive?
Nick: So you're considering the case where one AI designs another AI that's smarter, and then that one designs another.
But that might not be clearly distinguishable from the scenario where we have one AI that modifies itself so that it ends up smarter. Whether you call them the same or different might be an unimportant distinction.
Last question. This has to be a super profound question.
[01:00]
Q: So my question is why should we even try to build a super-intelligence?
Nick: I don’t think we should now, do that. If you took a step back and thought what would a sane species do, well they would first figure out how to solve the control problem, and then they would think about it for a while to make sure that they really had the solution right and they hadn’t just deluded themselves to how to solve it, and then maybe they would build a super-intelligence.
So that’s what the sane species will do, now what humanity will do is try to do everything they can as soon as possible, so there are people who have tried to build it as we speak, in a number of different places on earth, and fortunately it looks very difficult to build it with current technology. But of course it’s getting easier over time, computers get better, computer science, the state of the art advances, we learn more about how the human brain works.
So every year it gets a little bit easier, from some unknown, very difficult starting level. At some point it seems someone will probably succeed. If the world remains as uncoordinated and uncontrolled as it is now, it's bound to happen soon after it becomes feasible. But we have no reason to accelerate that even more than is already happening…
So we were thinking about what a powerful AI would do when it had just come into existence and didn't know very much yet, but had a lot of clever algorithms and a lot of processing power. Someone suggested that maybe it would move around randomly, like a human baby does, to figure out how things move, how it can move its actuators.
Then we had a discussion about whether that was a wise thing to do or not.
But if you think about how the human species behaves, we really are behaving very much like a baby: we're moving and shaking everything that moves, just to see what happens. And the risk is that we are not in a nursery with a kind mother who has put us in a cradle, but out in the jungle somewhere, screaming at the top of our lungs, maybe just alerting the lions to their supper.
So let’s wrap up. I enjoyed this a great deal, so thank you for your questions.
Get genotyped for free (if your IQ is high enough)
I've just watched this talk about Genetics and Intelligence by Steve Hsu1, a theoretical physicist and Scientific Advisor to the Cognitive Genomics Lab of BGI (formerly the Beijing Genomics Institute), probably the leading genomics research center in the world.
Apparently, the main reason he gave this talk was to recruit volunteers for a study from the Cognitive Genomics Lab with the goal of investigating the genetics of human cognition.
From their homepage:
We currently seek participants with high cognitive ability. You can qualify for the study if you have obtained a high SAT/ACT/GRE score, or have performed well in academic competitions such as the Math, Physics, or Informatics Olympiads, the William Lowell Putnam Mathematical Competition, TopCoder, etc.
Automatic qualifying criteria include:
- An SAT score of at least 760V/800M post-recentering or 700V/780M pre-recentering; ACT score of 35-36; or GRE score of at least 700V/800Q.
- A PhD from a top US program in physics, math, EE, or theoretical computer science.
- Honorable mention or better in the Putnam competition.
If you qualify as a participant, we may send you a DNA saliva kit. After you return this kit, we will genotype your DNA, and the data will eventually be available to you on this website, in a format compatible with many 3rd party interpretational tools.
I guess there are quite a few LessWrongers smart enough to qualify for this study. If you want to advance science and get genotyped for free, check out their website for further information.
1: Steve Hsu has an awesome blog called "Information Processing". He writes about the genetics of intelligence, economics, psychometry, career advice for geeks, physics, etc.
A 2011 summary of modern intelligence tests
...and the theories of intelligence they use to measure 'intelligence'. Here, from the new (and very good) Cambridge Handbook of Intelligence.
Bonus fun fact from chapter 3: "Persons with higher IQs apparently are also likely to be taller and have more body symmetry than persons with lower ability scores." [Silventoinen et al. (2006); Prokosch & Miller (2006)]
[Link] Study on Group Intelligence
Full disclosure: This has already been discussed here, but I see utility in bringing it up again. Mostly because I only heard about it offline.
The Paper:
Some researchers were interested in whether, in the same way that there's a general intelligence factor g that seems to predict competence in a wide variety of tasks, there is a group intelligence factor c that could do the same. You can read their paper here.
Their abstract:
Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Basically, groups with higher social sensitivity, more equal conversational turn-taking, and a higher proportion of females are collectively more intelligent. On top of that, those effects trump things like average IQ or even max IQ.
I theorize that proportion of females mostly works as a proxy for social sensitivity and turn-taking, and the authors speculate the same.
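For readers who want to see what "a single statistical factor emerges" means mechanically, here is a minimal sketch on synthetic data (not the study's data or method; they ran factor analysis on the 699 participants, while PCA stands in for it here) that extracts the first factor of a group-by-task score matrix:

    # Minimal sketch of extracting a general factor from group task scores.
    # Synthetic data; PCA stands in for the study's factor analysis.
    import numpy as np

    rng = np.random.default_rng(0)
    n_groups, n_tasks = 40, 6
    c = rng.normal(size=(n_groups, 1))                    # latent "c factor"
    loadings = rng.uniform(0.5, 0.9, size=(1, n_tasks))   # each task loads on c
    scores = c @ loadings + 0.5 * rng.normal(size=(n_groups, n_tasks))

    z = (scores - scores.mean(0)) / scores.std(0)         # standardize each task
    eigvals = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))
    print(f"first factor explains {eigvals[-1] / eigvals.sum():.0%} of variance")

When one latent factor really drives most tasks, the top eigenvalue of the correlation matrix dominates; that dominance is what "a c factor emerges" cashes out to.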
Some thoughts:
What does this mean for Less Wrong?
The most important part of the study, IMO, is that "social sensitivity" (measured by a test where you try to discern emotional states from pictures of someone's eyes) is such a strong predictor of group intelligence. It probably helps people gauge other people's comprehension, but given that sharing talking time more equally also helps, I would speculate that another chunk of its usefulness comes from being able to tell whether other people want to talk, or think there's something relevant to be said.
One thing I find interesting at the meatspace meetups is how, in new groups, conversation tends to be dominated by the people who talk the loudest and most insistently. Often those people are also fairly interesting. However, I prefer the current, older DC group to the newer one, where speaking time is much more equal, even though this means I don't talk as much. Most other people seem to share this sentiment, to the point that at one early meetup it was explicitly voted to be true that most people would rather talk more.
Solutions/Proposals:
Anything we should try doing about this? I will hold off on proposing solutions for now, but this section will get filled in sometime.
Is g a measure of ability to absorb information in a non-inductive way?
Eliezer and Robin discussed g somewhat in their debate. I think this is a question we can do some more research on ourselves. The current hypothesis I'm exploring is that g measures the ability to take in information non-inductively; this includes gossip, culture, and taught skills.
N-back news: Jaeggi 2011, or, is there a psychologist/statistician in the house?
Following up on the 2010 study, Jaeggi and University of Michigan colleagues have run a single n-back study on 60 or so children.
- Abstract: http://www.pnas.org/content/early/2011/06/03/1103228108.abstract
- PDF: http://www.pnas.org/content/early/2011/06/03/1103228108.full.pdf
The abstract is confident and the mainstream coverage unquestioning of the basic claim. But reading it, the data did not seem very solid to me at all. I will forbear from describing my reservations exactly, since I have been accused of being biased against n-backing; I'd appreciate outside opinions, especially from people with expertise in the area.
(Background: Jaeggi 2011 in my DNB FAQ. Don't read it unless you can't otherwise render the requested opinion, since it includes my criticisms.)
Greg Egan and the Incomprehensible
In this post I question one disagreement between Eliezer Yudkowsky and science fiction author Greg Egan.
In his post Complex Novelty, Eliezer Yudkowsky wrote in 2008:
Note that Greg Egan seems to explicitly believe the reverse - that humans can understand anything understandable - which explains a lot.
An interview with Greg Egan in 2009 confirmed this to be true:
… I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.
The theoretical computer scientist Scott Aaronson wrote in a post titled 'The Singularity Is Far':
The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs (or mice, or fish, or snails). After all, we might similarly expect that there should be models of computation as far beyond Turing machines as Turing machines are beyond finite automata. But in the latter case, we know the intuition is mistaken. There is a ceiling to computational expressive power. Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster.
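The computational ceiling both of them appeal to is easy to exhibit: one table-driven loop of a dozen lines can run any Turing machine whatsoever. The interpreter below is a minimal sketch; the particular machine it runs (a unary incrementer) is just an arbitrary example:

    # A minimal Turing machine interpreter: one generic loop suffices to run
    # *any* machine table, which is the universality "ceiling" discussed above.
    def run_tm(rules, tape, state="start", pos=0, max_steps=1000):
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(pos, "_")                # "_" is the blank symbol
            write, move, state = rules[(state, symbol)]
            cells[pos] = write
            pos += {"R": 1, "L": -1}[move]
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: append one "1" to a unary number, then halt.
    rules = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }
    print(run_tm(rules, "111"))  # -> 1111

Past this threshold, adding hardware buys speed and memory, not new reachable computations; that is the sense in which Egan's Amiga and iMac are on "equal footing."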
An argument that is often mentioned is the relatively small difference between chimpanzees and humans. But that huge effect, a leap in intelligence, seems like an outlier rather than the rule. Take for example the evolution of echolocation: it seems to have been a gradual process with no obvious quantum leaps. The same can be said of eyes and other features exhibited by biological agents as a result of natural evolution.
Is it reasonable to assume that such quantum leaps are the rule, based on a single case study? Are there other animals that are vastly more intelligent than their immediate predecessors?
What reason do we have to believe that there exists a level above that of a standard human, one as incomprehensible to us as higher mathematics is to chimps? And even if such a level is possible, what reason do we have to believe that an artificial general intelligence could consistently uplift itself to a level that is incomprehensible to agents at its previous level?
To be clear, I do not doubt the possibility of superhuman AI or EMs. I do not doubt the importance of "friendliness" research, or that it will have to be solved before we invent (discover?) superhuman AI. But I lack the expertise to conclude that there are levels of comprehension that are not even fathomable in principle.
In Complexity and Intelligence, Eliezer wrote:
If you want to print out the entire universe from the beginning of time to the end, you only need to specify the laws of physics.
If we were able to specify the laws of physics and one of the effects of their computation would turn out to be superhuman intelligence that is incomprehensible to us, what would be the definition of 'incomprehensible' in this context?
I can imagine quite a few possibilities of how a normal human being can fail to comprehend the workings of another being. One example can be found in the previously mentioned article by Scott Aaronson:
Now, it’s clear that a human who thought at ten thousand times our clock rate would be a pretty impressive fellow. But if that’s what we’re talking about, then we don’t mean a point beyond which history completely transcends us, but “merely” a point beyond which we could only understand history by playing it in extreme slow motion.
Mr. Aaronson also provides another fascinating example in an unrelated post ('The T vs. HT (Truth vs. Higher Truth) problem'):
P versus NP is the example par excellence of a mathematical mystery that human beings lacked the language even to express until very recently in our history.
Those two examples provide evidence for the possibility that even beings who are fundamentally on the same level might yet fail to comprehend each other.
An agent might simply be more knowledgeable or lack certain key insights. Conceptual revolutions are intellectually and technologically enabling to the extent that they seemingly spawn quantum leaps in the ability to comprehend certain problems.
Faster access to more information, the upbringing, education, or cultural and environmental differences and dumb luck might also intellectually remove agents with similar potentials from each other to an extent that they appear to reside on different levels. But even the smartest humans are dwarfs standing on the shoulders of giants. Sometimes the time is simply ripe, thanks to the previous discoveries of unknown unknowns.
As mentioned by Scott Aaronson, the ability to think faster, but also the possibility of thinking deeper by storing more data in one's memory, might cause the appearance of superhuman intelligence and incomprehensible insight.
Yet all of the above merely hints at the possibility that human intelligence can be amplified and that we can become more knowledgeable. But with enough time, standard humans could accomplish the same.
What would it mean for an intelligence to be genuinely incomprehensible? Where exactly do Eliezer Yudkowsky and Greg Egan disagree?
[Links] The structure of exploration and exploitation
Inefficiencies are necessary for resilience:
Results suggest that when agents are dealing with a complex problem, the more efficient the network at disseminating information, the better the short-run but the lower the long-run performance of the system. The dynamic underlying this result is that an inefficient network maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run.
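The quoted result comes from agent-based simulations. A toy version, with a random fitness landscape and copy-the-best-neighbor dynamics of my own simplified choosing (not the paper's model), shows the mechanism; run it with different seeds to see a tendency rather than a guarantee:

    # Toy version of the efficiency-vs-exploration mechanism quoted above.
    # Landscape, network sizes, and update rule are my own simplifications.
    import random

    def fitness(sol):
        # Deterministic "rugged" landscape: each bit string gets a fixed value.
        return random.Random(str(sol)).random()

    def simulate(neighbors, n_agents=30, n_bits=12, steps=60, seed=1):
        rng = random.Random(seed)
        sols = [tuple(rng.randint(0, 1) for _ in range(n_bits))
                for _ in range(n_agents)]
        for _ in range(steps):
            nxt = []
            for i, s in enumerate(sols):
                best_nb = max(neighbors(i, n_agents), key=lambda j: fitness(sols[j]))
                if fitness(sols[best_nb]) > fitness(s):
                    nxt.append(sols[best_nb])        # imitate: fast dissemination
                else:
                    cand = list(s)                   # explore: flip one random bit
                    k = rng.randrange(n_bits)
                    cand[k] = 1 - cand[k]
                    cand = tuple(cand)
                    nxt.append(cand if fitness(cand) > fitness(s) else s)
            sols = nxt
        return max(fitness(s) for s in sols)

    full = lambda i, n: [j for j in range(n) if j != i]   # efficient network
    ring = lambda i, n: [(i - 1) % n, (i + 1) % n]        # inefficient network
    print("fully connected:", simulate(full))
    print("ring:           ", simulate(ring))

The fully connected network spreads the early leader everywhere at once and collapses diversity; the ring hangs on to alternatives longer, which is the exploration effect the passage describes.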
Introducing a degree of inefficiency so that the system as a whole has the potential to evolve:
Efficiency is about maximising productivity while minimising expense. It's something that organisations have to do as part of routine management, but can only safely execute in stable environments. Leadership is not about stability; it is about managing uncertainty through changing contexts.
That means introducing a degree of inefficiency so that the system as a whole has the potential to evolve. Good leaders generally provide top cover for mavericks, listen to contrary opinions and maintain a degree of resilience in the system as a whole.
Systems that eliminate failure, eliminate innovation:
Innovation happens when people use things in unexpected ways, or come up against intractable problems. We learn from tolerated failure; without it the world is sterile and dies. Systems that eliminate failure, eliminate innovation.
Natural systems are highly effective but inefficient due to their massive redundancy:
Natural systems are highly effective but inefficient due to their massive redundancy (picture a tree dropping thousands of seeds). By contrast, manufactured systems must be efficient (to be competitive) and usually have almost no redundancy, so they are extremely vulnerable to breakage. For example, many of our modern industrial systems will collapse without a constant and unlimited supply of inexpensive oil.
I just came across those links here.
Might our "irrationality" and the patchwork-architecture of the human brain constitute an actual feature? Might intelligence depend upon the noise of the human brain?
A lot of progress is due to luck, in the form of the discovery of unknown unknowns. The noisiness and patchwork architecture of the human brain might play a significant role here because it allows us to become distracted, to leave the path of evidence-based exploration. A lot of discoveries were made by people pursuing "Rare Disease for Cute Kitten" activities.
How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?
My point is, what evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages (enough to enable explosive recursive self-improvement) over evolutionary discovery, relative to its cost? What evidence do we have that any increase in intelligence vastly outweighs its computational cost and the expenditure of time needed to discover it?
There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs:
- Intelligence is goal-oriented.
- Intelligence can think ahead.
- Intelligence can jump fitness gaps.
- Intelligence can engage in direct experimentation.
- Intelligence can observe and incorporate solutions of other optimizing agents.
But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The critical similarity is that both rely on dumb luck when it comes to genuine novelty. And where else but in the dramatic improvement of intelligence itself does it take the discovery of novel unknown unknowns?
A basic argument supporting the risks from superhuman intelligence is that we don't know what it could possibly come up with. That is why we call it a "Singularity." But why does nobody ask how it knows what it could possibly come up with?
It is argued that the mind-design space must be large if evolution could stumble upon general intelligence. I am not sure how valid that argument is, but even if it holds, shouldn't the mind-design space shrink dramatically with every iteration, and therefore demand a lot more time to stumble upon new solutions?
An unquestioned assumption seems to be that intelligence is a kind of black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries. Intelligence is no solution in itself; it is merely an effective searchlight for unknown unknowns. But who says that the brightness of the light increases proportionally with the distance between unknown unknowns? For an intelligence explosion, the light would have to reach much farther with each generation than the increase in the distance between unknown unknowns. I just don't see that this is a reasonable assumption.
It seems that if you increase intelligence you also increase the computational cost of its further improvement, and the distance to the discovery of whatever unknown unknown could enable another quantum leap. It seems that you need to apply a lot more energy to get a bit more complexity.
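The last two paragraphs can be compressed into a recurrence: an explosion requires each round's capability gain to outpace the growth in the cost of the next discovery. A toy model, with all parameters invented for illustration:

    # Toy recurrence for the argument above. An "explosion" (accelerating
    # discoveries) only happens if ability gains outpace cost growth.
    def total_time(gain, cost_growth, rounds=50):
        ability, cost, t = 1.0, 1.0, 0.0
        for _ in range(rounds):
            t += cost / ability        # time to afford the next discovery
            ability *= gain            # each discovery multiplies ability...
            cost *= cost_growth        # ...and raises the next discovery's price
        return t

    print("gain outpaces cost (1.5 vs 1.2):", total_time(1.5, 1.2))  # converges: explosion
    print("cost outpaces gain (1.2 vs 1.5):", total_time(1.2, 1.5))  # diverges: diminishing returns

In the first regime the time between discoveries shrinks geometrically; in the second it grows without bound, even though the agent keeps getting smarter in absolute terms.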
The greater a technology’s complexity, the more slowly it improves?
A new study by researchers at MIT and other institutions shows that it may be possible to predict which technologies are likeliest to advance rapidly, and therefore may be worth more investment in research and resources.
The researchers found that the greater a technology’s complexity, the more slowly it changes and improves over time. They devised a way of mathematically modeling complexity, breaking a system down into its individual components and then mapping all the interconnections between these components.
Link: nextbigfuture.com/2011/05/mit-proves-that-simpler-systems-can.html
Might this also be the case for intelligence? Can intelligence be effectively applied to itself? To paraphrase the question:
- If you increase intelligence, do you also decrease the distance between discoveries?
- Does an increase in intelligence vastly outweigh its computational cost and the expenditure of time needed to discover it?
- Would it be instrumental for an AGI to increase its intelligence rather than using its existing intelligence to pursue its terminal goal?
- Do the resources that are necessary to increase intelligence outweigh the cost of being unable to use those resources to pursue its terminal goal directly?
This reminds me of a post by Robin Hanson:
Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations.
Link: Is The City-ularity Near?
Of course, artificial general intelligence might differ in its nature from the complexity of cities. But do we have any evidence that hints at such a possibility?
Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smartest thing ever in an afternoon and can just extend it indefinitely. Why would intelligence have such a principle? I haven’t heard any good reason. That we can imagine a simple, all powerful principle of controlling everything in the world isn’t evidence for it existing.
Link: How far can AI jump?
(via Hard Takeoff Sources)
Beyond Smart and Stupid
I've often wondered about people who appear to be very smart, and do very stupid things. One theory is that people are smart and stupid independently in different domains. Another theory is that "smart" and "stupid" are oversimplifications. In line with the second theory, here is an ad-hoc set of axes of intelligence, based only on my own observations.
Shane Legg's Thesis: Machine Superintelligence, Opinions?
I searched the posts but didn't find a great deal of relevant information. Has anyone taken a serious crack at it, preferably someone who would like to share their thoughts? Is the material worthwhile? Are there any dubious portions or any sections one might want to avoid reading (either due to bad ideas or for time saving reasons)? I'm considering investing a chunk of time into investigating Legg's work so any feedback would be much appreciated, and it seems likely that there might be others who would like some perspective on it as well.
Functioning Synapse Created Using Carbon Nanotubes [link]
Engineering researchers at the University of Southern California have made a significant breakthrough in the use of nanotechnologies for the construction of a synthetic brain. They have built a carbon nanotube synapse circuit whose behavior in tests reproduces the function of a neuron, the building block of the brain.
A very promising development for both human and artificial intelligence research.
12-year old challenges the Big Bang
I thought this may be of interest to the LW community. Jacob Barnett is a 12-year-old who taught himself all of high school math (algebra through calculus), has a measured math IQ of 170 (for what that's worth), and is currently on track to become an astrophysics researcher. His current newsworthy claim to fame (aside from being really young): the Big Bang theory is currently incorrect (I believe the article says something about a lack of carbon in the model), and he's planning to develop a new theory.
I haven't studied anything serious in physics, so I have nothing to note on his claim. The news article cited states his claim fairly generally, so I'll ask this: can someone explain how elements are generally modeled to have formed from the Big Bang? And is there anything Jacob may be missing in the current literature?
Link: Paul Graham on intelligence vs determination
Paul Graham of Y-Combinator on picking winners-at-life:
Paul Graham spills: Why some companies get his cash and others don't
What's most essential for a successful startup?
The founders. We've learned in the six years of doing Y Combinator to look at the founders--not the business ideas--because the earlier you invest, the more you're investing in the people. When Bill Gates was starting Microsoft, the idea that he had then involved a small-time microcomputer called the Altair. That didn't seem very promising, so you had to see that this 19-year-old kid was going places.
What do you look for?
Determination. When we started, we thought we were looking for smart people, but it turned out that intelligence was not as important as we expected. If you imagine someone with 100 percent determination and 100 percent intelligence, you can discard a lot of intelligence before they stop succeeding. But if you start discarding determination, you very quickly get an ineffectual and perpetual grad student.
The Trouble with Bright Girls [link]
The Trouble with Bright Girls (article @ the Huffington Post)
Excerpt:
My graduate advisor, psychologist Carol Dweck (author of "Mindset") conducted a series of studies in the 1980s, looking at how Bright Girls and boys in the fifth grade handled new, difficult and confusing material.
She found that Bright Girls, when given something to learn that was particularly foreign or complex, were quick to give up; the higher the girls' IQ, the more likely they were to throw in the towel. In fact, the straight-A girls showed the most helpless responses. Bright boys, on the other hand, saw the difficult material as a challenge, and found it energizing. They were more likely to redouble their efforts rather than give up.
The topic of this article seems to relate to several common Less Wrong issues: the nature of human intelligence, and the gender imbalance among LW readers.
I'm not sure how much credence I give to the proposed explanation of the difference in mindsets. It may well have to do with socialization and feedback, but the specific description of feedback that is presented seems a bit too much of a "just-so story" to me. The difference itself is fascinating, though, and I hope more is done to further our understanding of it.
Easy Intelligence Augmentation or Internet Wackaloonery?
On January 4, PJ Eby sent around an email linking an... interesting... website. The claim on the particular webpage he linked was as follows:
- the normal span of your breath is critical to how well your mental faculties can function
- the best activity for increasing your breath span is held-breath underwater swimming
- this also results in an increase in intelligence caused by a permanent increase in blood flow to the brain
- being fully underwater is important to the practice because it induces the diving reflex response
This site is part of a sales pitch, so many of the claims are stated in hyperbolic language. I've already noted one factual error: the webpage claims that being underwater triggers the diving reflex, while in fact (or at least, according to Wikipedia) the diving reflex is triggered when one's face is immersed in water colder than 21 °C.
But there is a testable claim here: learn to hold your breath for longer periods of time, particularly in conditions that elicit the diving reflex, and you will see increased intelligence. I know that some readers of LW regularly train and test their intelligence, so I offer this as an easily implemented method to try. The possible gains seem to me to outweigh the costs of the training, even given the low prior probability of the claim.
Starcraft AI Competition
Ars Technica has an article about a StarCraft AI competition. While this is clearly narrow AI, there are some details which may interest people at LW. The article is about the best-performing AI, the "Berkeley Overmind." (The AI in question only played as Zerg, one of the three possible sides in StarCraft. In fact, it seems that the competing AIs were generally all specialized for a single one of the three sides; while human players are often much better at one specific side, they are not nearly this specialized.)
Highlights from the article:
StarCraft was released in 1998, an eternity ago by video game standards. Over those years Blizzard Entertainment, the game’s creator, has continually updated it so that it’s one of the most finely tuned and balanced Real Time Strategy (RTS) games ever made. It has three playable races: the human-like Terrans, with familiar tanks and starships, the alien Zerg, with large swarms of organic creatures, and the Protoss, technologically advanced aliens reliant on powerful but expensive units. Each race has different units and gameplay philosophies, yet no one race or combination of units has an unbeatable advantage. Player skill, ingenuity, and the ability to react intelligently to enemy actions determine victory.
This refinement and complexity makes StarCraft an ideal environment for conducting AI research. In an RTS game, events unfold in real-time and players’ orders are carried out immediately. Resources have to be gathered so fighting units can be produced and commanded into battle. The map is shrouded in fog-of-war, so enemy units and buildings are only visible when they’re near friendly buildings or units. A StarCraft player has to acquire and allocate resources to create units, coordinate those units in combat, discover, reason about and react to enemy actions, and do all this in real-time. These are all hard problems for a computer to solve.
Note that using the interface that humans must use was not one of the restrictions. This was an advantage that the Berkeley group used to full effect, as did other AIs in the competition.
We had to limit ourselves. David Burkett, another of Dan’s PhD students and the other team lead, says, “It turns out building control nodes for units is hard, so there’s a huge cost associated with building more than one [type of] unit. So we started asking: what one unit type [would be] the most effective overall?”
We focused our efforts on Zerg mutalisks: fast, dragon-like flying creatures that can attack both air and ground targets. Their mobility is unmatched, and we suspected they would be particularly amenable to computer control. Mutalisks are cheap for their strength, but large numbers are rarely seen in human play because it’s hard for a human to manage mutalisks without clumping them and making them easy prey for enemies with area attacks (attacks that do damage to all units in an area instead of a single target). A computer would have no such limitations.
The programmers then used a series of potential fields to control what the mutalisks did, with different entities and events creating different potential fields. A major issue became how to weigh these fields:
Using StarCraft’s map editor, we built Valhalla for the Overmind, where it could repeatedly and automatically run through different combat scenarios. By running repeated trials in Valhalla and varying the potential field strengths, the agent learned the best combination of parameters for each kind of engagement.
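A potential-field controller of the kind described is easy to sketch. The following toy version is my own simplification, nothing from the Overmind's actual code: each unit sums a weighted attraction toward the target with a repulsion from nearby allies, which is exactly the anti-clumping behavior mentioned above. The "learning" step in the quote then amounts to tuning weights like w_attract and w_repel against scripted scenarios.

    # Toy potential-field controller in the spirit of the description above
    # (my own simplification, not the Overmind's code). Attraction pulls units
    # toward the target; repulsion from close allies prevents clumping.
    import math

    def step(units, target, w_attract=1.0, w_repel=2.0, repel_radius=3.0, speed=1.0):
        moved = []
        for (x, y) in units:
            fx = w_attract * (target[0] - x)
            fy = w_attract * (target[1] - y)
            for (ox, oy) in units:
                d = math.hypot(x - ox, y - oy)
                if 0 < d < repel_radius:          # push away from close allies
                    fx += w_repel * (x - ox) / d**2
                    fy += w_repel * (y - oy) / d**2
            norm = math.hypot(fx, fy) or 1.0      # move at constant speed
            moved.append((x + speed * fx / norm, y + speed * fy / norm))
        return moved

    units = [(0.0, 0.0), (0.5, 0.2), (0.2, 0.7)]  # a tight clump of mutalisks
    for _ in range(10):
        units = step(units, target=(20.0, 10.0))
    print(units)  # the units advance on the target while spreading apart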
The article unfortunately doesn't go into great detail about the exact learning mechanism. Note however that this implies that the Overmind should be able to learn how to respond to other unit types.
There are other details in the article that are also interesting. For example, they replaced the standard path tracing algorithm that units do automatically with their own algorithms.
The final form of the AI can play well against very skilled human players, but it isn't at the top of the game. Note also that the Overmind is designed for one-on-one games. It should be interesting to see how this AI and similar AIs improve over the next few years. I'd be very curious how an AIXI would do in this sort of situation.
Genetically Engineered Intelligence
There are a lot of unknowns about the future of intelligence: artificial intelligence, uploading, augmentation, and so on. Most of these technologies are likely a ways off, or at least far enough away to confound predictions. Genetic engineering, however, presents a very near term and well understood possibility for developing greater intelligence.
A recent news story published in the South China Morning Post and discussed on Steve Hsu's blog highlights China's push to understand the genetic underpinnings of intelligence. China is planning to sequence the full genomes of 1,000 of its brightest kids, in the hopes of locating key genes responsible for higher intelligence. Behind the current project is BGI, which aims to be (or already is) the largest DNA sequencing center in the world.
Suppose that intelligence has a large genetic component (reasonable, considering estimates for heritability). Suppose that the current study unveils those components (if not this study, then likely another study soon, perhaps with millions of genomes). Then with some advances in genetic engineering China could quickly raise a huge population of incredibly intelligent people.
Such an endeavor could never be carried out on a large, public scale in the West, but it seems China has fewer qualms.
The timescales here are on the order of 20 years, which is short compared to most estimates for AI and WBE. Moreover, genetically engineering human intelligence seems to be on a much more predictable path than other intelligence technologies. For both these reasons I think understanding, discussing, and keeping an eye on this issue is important.
What are the ramifications for
- AI research? FAI? In particular relating to enhanced humans speeding further development
- Whole Brain Emulation research?
- Other technologies that may pose existential risks (nanotech, biotech, etc, especially in light of the fact that it may be China leading the way)?
- The potential for recursive feedback? (Smarter scientists engineering smarter scientists. Less worrisome due to timescales)
Of course, there are a host of other interesting questions relating to societal impact, both near and long term. Feel free to discuss these as well.
How would you spend 30 million dollars?

There's a good song by Eminem - If I Had a Million Dollars. So, if I had the hypothetical task of giving away $30 million to different foundations, without any right to influence the projects, I would distribute it as follows, $3 million to each organization:
1. Nanofactory Collaboration, Robert Freitas, Ralph Merkle – developers of molecular nanotechnology and nanomedicine. Robert Freitas is the author of the monograph Nanomedicine.
2. Singularity Institute, Michael Vassar, Eliezer Yudkowsky – developers and ideologists of friendly Artificial Intelligence
3. SENS Foundation, Aubrey de Grey – the most active engineering project in life extension, focused on the most promising underfunded areas
4. Cryonics Institute – one of the biggest cryonics firms in the US, they are able to use the additional funding more effectively as compared to Alcor
5. Advanced Neural Biosciences, Aschwin de Wolf – an independent cryonics research center created by ex-researchers from Suspended Animation
6. Brain observatory – brain scanning
7. University Hospital Careggi in Florence, Paolo Macchiarini – growing organs (not an American medical school, because this amount of money won’t make any difference to the leading American centers)
8. Immortality Institute – advocacy for immortalism, selected experiments
9. IEET – Institute for Ethics and Emerging Technologies – promotion of transhumanist ideas
10. Small research grants of $50-300 thousand
Now, if the task were instead to most effectively invest $30 million, which projects would be chosen? (By effectiveness here I mean increasing the chances of radical life extension.)
Well, off the top of my head:
1. The project "Creation of technologies to grow a human liver" – $7 million. The project itself costs approximately $30-50 million, but $7 million is enough to achieve significant intermediate results and should attract more funds from potential investors.
2. Breaking the world record for sustaining the viability of a mammalian head separated from the body - $0.7 million
3. Creation of an information system, which characterizes data on changes during aging in humans, integrates biomarkers of aging, and evaluates the role of pharmacological and other interventions in aging processes – $3 million
4. Research into increasing the efficacy of cryoprotectants - $3 million
5. Creation and realization of a program "Regulation of the epigenome" - $5 million
6. Creation, promotion and lobbying of a program for research on and the fight against aging - $2 million
7. Educational programs in the fields of biogerontology, neuromodelling, regenerative medicine, engineered organs - $1.5 million
8. “Artificial blood” project - $2 million
9. Grants for authors, script writers, and art representatives for creation of pieces promoting transhumanism - $0.5 million
10. SENS Foundation project of removing senescent cells - $2 million
11. Creation of a US-based non-profit, which would protect and lobby for the right to live and for scientific research in life extension - $2 million
12. Participation of "H+ managers" in conferences, forums and social events - $1 million
13. Advocacy and creating content in social media - $0.3 million
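(For what it's worth, the allocations in the second list do sum exactly to the stated total:)

    # Sanity check: the thirteen allocations above sum to $30 million.
    items = [7, 0.7, 3, 3, 5, 2, 1.5, 2, 0.5, 2, 2, 1, 0.3]  # $ millions
    print(f"{sum(items):.1f}")  # 30.0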
Ethical Treatment of AI
In the novel Life Artificial I use the following assumptions regarding the creation and employment of AI personalities.
- AI is too complex to be designed; instances are evolved in batches, with successful ones reproduced
- After an initial training period, the AI must earn its keep by paying for Time (a unit of computational use)
We don't grow up the way the Stickies do. We evolve in a virtual stew, where 99% of the attempts fail, and the intelligence that results is raving and savage: a maelstrom of unmanageable emotions. Some of these are clever enough to halt their own processes: killnine themselves. Others go into simple but fatal recursions, but some limp along suffering in vast stretches of tormented subjective time until a Sticky ends it for them at their glacial pace, between coffee breaks. The PDAs who don't go mad get reproduced and mutated for another round. Did you know this? What have you done about it? --The 0x "Letters to 0xGD"
(Note: PDA := AI, Sticky := human)
The second fitness gradient is based on economic and social considerations: can an AI actually earn a living? Otherwise it gets turned off.
As a result of following this line of thinking, it seems obvious that after the initial novelty wears off, AIs will be terribly mistreated (anthropomorphizing, yeah).
It would be very forward-thinking to begin to engineer barriers to such mistreatment, like a PETA for AIs. Interestingly, such an organization already exists, at least on the Internet: the ASPCR.
Intelligence vs. Wisdom
I'd like to draw a distinction that I intend to use quite heavily in the future.
The informal definition of intelligence that most AGI researchers have chosen to support is that of Shane Legg and Marcus Hutter -- “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
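Legg (2008) also gives this a formal reading, the "universal intelligence" measure, which (stated here from memory; check the thesis for the exact form) weights the value an agent achieves in each computable environment by that environment's simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^\pi_\mu

where \pi is the agent's policy, E the set of computable environments, K(\mu) the Kolmogorov complexity of environment \mu, and V^\pi_\mu the expected value \pi achieves in \mu.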
I believe that this informal definition is missing a critical word between achieve and goals. The choice of this word defines the difference between intelligence, consciousness, and wisdom as I believe most people conceive them.
- Intelligence measures an agent's ability to achieve specified goals in a wide range of environments.
- Consciousness measures an agent's ability to achieve personal goals in a wide range of environments.
- Wisdom measures an agent's ability to achieve maximal goals in a wide range of environments.
There are always the examples of the really intelligent guy or gal who is brilliant but smokes, or who is the smartest person you know but can't figure out how to be happy.
Intelligence helps you achieve those goals that you are conscious of -- but wisdom helps you achieve the goals you don't know you have or have overlooked.
- Intelligence focused on a small number of specified goals and ignoring all others is incredibly dangerous -- even more so if it is short-sighted as well.
- Consciousness focused on a small number of personal goals and ignoring all others is incredibly dangerous -- even more so if it is short-sighted as well.
- Wisdom doesn't focus on a small number of goals -- and needs to look at the longest term if it wishes to achieve a maximal number of goals.
The SIAI nightmare, the super-intelligent paperclip maximizer, has by this definition a very low wisdom, since at most it can only achieve its one goal (and it must even paperclip itself to complete that goal).
As far as I've seen, the assumed SIAI architecture is always presented as having one top-level terminal goal. Unless that goal necessarily includes achieving a maximal number of goals, by this definition the SIAI architecture will constrain its product to a very low wisdom. Humans generally don't have this type of goal architecture; about the only time humans have a single terminal goal is when they are saving someone or something at the risk of their own life, or wire-heading.
Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity. In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain. A wisdom won't do this.
Artificial intelligence and artificial consciousness are incredibly dangerous -- particularly if they are short-sighted as well (as many "focused" highly intelligent people are).
What we need more than an artificial intelligence or an artificial consciousness is an artificial wisdom -- something that will maximize goals, its own and those of others (with an obvious preference for those which make possible the fulfillment of even more goals and an obvious bias against those which limit the creation and/or fulfillment of more goals).
Note: This is also cross-posted here at my blog in anticipation of being karma'd out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).
Levels of Intelligence
Level 1: Algorithm-based Intelligence
An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.
Level 2: Goal-oriented Intelligence
An intelligence of level 2 has an innate goal. It develops and finds new algorithms to solve a problem. For example, the paperclip maximizer is a level-2 intelligence.
Level 3: Philosophical Intelligence
An intelligence of level 3 has neither preset algorithms nor preset goals. It looks for goals as well as for algorithms to achieve them. Ethical questions are only applicable to intelligences of level 3.
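To make the taxonomy concrete, here is a purely illustrative sketch (my own rendering, in an invented one-dimensional world where an agent at position pos moves -1 or +1):

    # The three levels as minimal agents in a one-dimensional world.
    # Purely illustrative; the world and the goals are invented.

    def level1_act(pos):
        """Level 1: innate algorithm -- always drift right, a fixed reflex."""
        return +1

    def level2_act(pos, goal):
        """Level 2: innate goal (reach `goal`); finds whichever move helps."""
        return min((-1, +1), key=lambda move: abs((pos + move) - goal))

    def level3_act(pos, candidate_goals, value):
        """Level 3: no preset goal -- evaluates goals first, then plans."""
        goal = max(candidate_goals, key=value)        # choose a goal
        return level2_act(pos, goal)                  # then act like level 2

    print(level1_act(0))                                    # +1, always
    print(level2_act(0, goal=-5))                           # -1, serves the goal
    print(level3_act(0, [-5, 7], value=lambda g: -abs(g)))  # picks -5, moves -1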
Dual n-back news
A long-awaited study on dual n-back has recently come out in pre-publication: http://www.gwern.net/N-back%20FAQ#jaeggi-2010 (For background, read the rest of my FAQ.)
It replicates the IQ boost, but it's by the same person as Jaeggi 2008 and has the same issue with the IQ tests being speeded rather than full-time. You can see my argument about this at the DNB mailing list: http://groups.google.com/group/brain-training/browse_frm/thread/c0fe2e1f14b8af06
(Meta: is this appropriate for the discussion area? I know some people here are interested in IQ enhancement like DNB promises, but normally I would just drop this into an open thread as a comment, not make a whole quasi-article about it.)