This is great news! FAI just got a huge increase in legitimacy - in fact, this is the biggest such boost I can think of.
provided that the machine is docile enough to tell us how to keep it under control
I must have been sleeping through all the other quotations of this. It's the first time I noticed this was a part of the original text.
It was left off: http://singinst.org/summit/overview/whatisthesingularity/
It's left off the Wikipedia entry that references it: http://en.wikipedia.org/wiki/Technological_singularity
And this other random high Google hit: http://www.committedsardine.com/blogpost.cfm?blogID=1771
I guess one upshot is that I pulled up the original article to verify (and no, the comment about Vernor Vinge was not in the original). Scholarship!
Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal.
I would like to know why he doesn't actively work on FAI, or voice his concerns more loudly. I might ask him about it in an e-mail, if nobody else wants to do it instead.
One thing I will note is that I'm not sure why they say AGI has its roots in Solomonoff's induction paper. There is such a huge variety in approaches to AGI... what do they all have to do with that paper?
AIXI is based on Solomonoff, and to the extent that you regard all other AGIs as approximations to AIXI...
Or to look at it another way, Solomonoff's was the first mathematical specification of a system that could, in principle if not in the physical universe, learn anything learnable by a computable system.
I think the interesting feature of Solomonoff induction is that it does no worse than any other object from the same class (lower-semicomputable semimeasures), not just objects from a lower class (computable humans). I'm currently trying to solve a related problem where it's easy to devise an agent that beats all humans, but difficult to devise one that's optimal in its own class.
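For concreteness, here's the standard statement of that dominance property, in the usual notation (this is textbook material, not anything specific to the comment above):

```latex
% Requires amsmath. U is a universal monotone machine; U(p) = x* means
% program p makes U output some string beginning with x.
% Solomonoff's universal prior weights each such program by 2^{-length}:
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
\]
% Dominance: for every lower-semicomputable semimeasure \mu there is a
% constant c_\mu > 0, independent of x, such that
\[
  M(x) \;\geq\; c_\mu \, \mu(x) \qquad \text{for all finite strings } x,
\]
% i.e., M does no worse, up to a multiplicative constant, than any
% member of its own class -- the property discussed above. Solomonoff's
% prediction-error bounds for computable environments follow from this.
```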
I estimate brains spend about 80% of their time doing inductive inference (the rest is evaluation, tree-pruning, etc.). Solomonoff's induction is a general theory of inductive inference. Thus the connection.
A while ago I took a similar (very quick) look at another AI text, Nils Nilsson's history of the field, The Quest for Artificial Intelligence.
Norvig & Russell: Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design - to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.
"Mechanism design"? "Checks and balances"? Do you know what they mean by "Yudkowsky (2008)" and where I can find a copy? I'd like to see this for myself.
OK, so am I misreading Yudkowsky, or are Norvig and Russell misreading Yudkowsky, or am I misreading Norvig and Russell? Because if you take "mechanism design" and "checks and balances" to have the obvious economic and political meanings in the theories of multi-agent systems, then I am pretty sure that Yudkowsky does not claim that this is where the challenge lies.
This is an introductory textbook for students who haven't been exposed to these ideas before. The paragraph makes a lot more sense under that assumption than under the assumption that they are trying to be technically correct down to every term they use.
Perhaps. But considering that we are talking about chapter 26 of a 27 chapter textbook, and that the authors spent 5 pages explaining the concept of "mechanism design" back in section 17.6, and also considering that every American student learns about the political concept of "checks and balances" back in high school, I'm going to stick with the theory that they either misunderstood Yudkowsky, or decided to disagree with him without calling attention to the fact.
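For anyone who hasn't gotten to section 17.6: in the economic sense, mechanism design means choosing the rules of a game so that self-interested agents, each acting on its own utility function, produce the outcome the designer wants. Here's a minimal sketch of the canonical textbook example, a second-price (Vickrey) auction; the bidder names and valuations are made up for illustration:

```python
# Mechanism design in the economic sense: pick the rules of the game so
# that self-interested agents produce the outcome you want. Canonical
# example: a second-price (Vickrey) sealed-bid auction, in which the
# highest bidder wins but pays only the second-highest bid.

def vickrey_auction(bids):
    """bids: dict of bidder -> sealed bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical bidders whose bids equal their true private valuations.
bids = {"alice": 10.0, "bob": 8.0, "carol": 6.0}
print(vickrey_auction(bids))  # ('alice', 8.0): alice wins, pays bob's bid
```

Since your own bid affects only whether you win, not what you pay, bidding anything other than your true valuation can only change the outcome in ways that hurt you; that's why truth-telling is a dominant strategy under these rules.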
ETA: Incidentally, if the authors are inserting their own opinion and disagreeing with Yudkowsky, I tend to agree with them. In my (not yet informed) opinion, Eliezer dismisses the possibility of a multi-agent solution too quickly.
Eliezer dismisses the possibility of a multi-agent solution too quickly.
A multi-machine solution? Is that so very different from one machine with a different internal architecture?
I favor a multi-agent solution which includes both human and machine agents. But, yes, a multi-machine solution may well differ from a unified artificial rational agent. For one thing, the composite will not be itself a rational agent (it may split its charitable contributions between two different charities, for example. :)
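To unpack that parenthetical, here's a toy sketch (my own numbers, purely illustrative): two linear-utility maximizers each concentrate their donations, but their composite splits its giving, which no single agent of the same type would do.

```python
# Toy numbers, purely illustrative. Two expected-utility maximizers with
# linear utilities over donations to charities X and Y: each one, acting
# alone, donates its whole budget to its highest-marginal-utility charity.

def donate(budget, utils_per_dollar):
    """A linear-utility maximizer concentrates its whole budget on the
    charity with the highest utility per dollar."""
    best = max(utils_per_dollar, key=utils_per_dollar.get)
    return {c: (budget if c == best else 0.0) for c in utils_per_dollar}

agent_a = donate(100, {"X": 2.0, "Y": 1.0})  # all $100 to X
agent_b = donate(100, {"X": 1.0, "Y": 2.0})  # all $100 to Y

coalition = {c: agent_a[c] + agent_b[c] for c in ("X", "Y")}
print(coalition)  # {'X': 100.0, 'Y': 100.0}
# No single agent with a fixed linear utility function and a $200 budget
# would ever choose this 50/50 split, so the composite is not itself a
# rational agent of the same kind as its members.
```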
ETA: More to the point, a singleton must self-modify to 'grow' in power and intelligence, and will strive to preserve its utility function (values) in the course of doing so. A coalition, on the other hand, grows in power by creating or enlisting new members. So, for example, rogue AIs can be incorporated into a coalition, whereas a singleton will have to defeat and destroy them. Furthermore, the political balance within a coalition may shift over time, as agents who are willing to delay gratification gain in power, and agents who demand instant gratification lose relative power. And as the political balance shifts, so does the effective composite utility function.
It sounds as though you are thinking about the early days.
ISTM that a single creature could grow in the manner you describe a coalition growing - by assimilation and compromise. It might not naturally favour behaving in that way - but it is possible to make an agent with whatever values you like.
More to the point, if a single creature forms from a global government, or the internet, it will probably start off in a pretty inclusive state. Only the terrorists will be excluded. There is no monopolies and mergers commission at that level, just a hangover from past, fragmented times.
Is there any post on LW giving a LWer guidelines for reading AIMA?
Meaning which chapters are more or less relevant for those whose needs are of an abstract and intellectual kind, not those who need to actually do the AI stuff?
An AIMA for non-AI people thing.
Read the first two chapters (part I). Skim the introductory paragraphs and summary section and read the bibliography section of each chapter in parts II - VII. Read chapters 26 and 27 (part VIII).
Yes, I realize that's basically the introductory, high-level summary, and epilogue material. AIMA is a technical book on AI. If you're not an AI person, then I'm not sure what the point would be in reading it...
Nifty. I've been looking forward to reading AIMA. I get most of my textbooks online these days, but this looks like the sort of thing I'd like to have on my bookshelf next to PT:LOS, "Causality", "Introduction to Algorithms", etc. I just wish it weren't $109...
I would like to share some general thoughts on this topic as it relates to AI and the Singularity.
I am a speculator, and I find that a right decision typically does not exist. A decision is more like a judgement: a selection of the better alternative. Most executives, when making top decisions, need to rely more on opinion than on fact, especially when high amounts of uncertainty are involved.
In many cases, outcomes do not come out as intended.
This bears on the AI singularity matter. In an effort to create a potential hedge of protection for the good of mankind, we consider the idea of creating AI machines that are intended to be human-friendly before any other AI machines are made.
This may be the last invention man will ever make...
Please consider:
These are a few of the many considerations that require analysis. Determining the right questions to ask is another hard part.
This post will not even attempt to solve this problem.
I hope this adds value to the discussion; if not here, then I hope it can be directed to wherever it will contribute most to the decision-making process.
AI: A Modern Approach is by far the dominant textbook in the field. It is used in 1200 universities, and is the 25th most-cited publication in computer science. If you're going to learn AI, this is how you learn it.
Luckily, the concepts of AGI and Friendly AI get pretty good treatment in the 3rd edition, released in 2009.
The Singularity is mentioned in the first chapter on page 12. Both AGI and Friendly AI are also mentioned in the first chapter, on page 27:
Chapter 26 is about the philosophy of AI, and section 26.3 is "The Ethics and Risks of Developing Artificial Intelligence." It lists six risks: (1) people might lose their jobs to automation; (2) people might have too much (or too little) leisure time; (3) people might lose their sense of being unique; (4) AI systems might be used toward undesirable ends; (5) the use of AI systems might result in a loss of accountability; (6) the success of AI might mean the end of the human race.
Each of the first five gets one or two paragraphs, but the final risk, that the success of AI might mean the end of the human race, takes up 3.5 pages. Here's a snippet:
Then they mention Moravec, Kurzweil, and transhumanism, before returning to a more concerned tone about AI. They cover Asimov's three laws of robotics, and then:
It's good this work is getting such mainstream coverage!