
Breaking the vicious cycle

43 XiXiDu 23 November 2014 06:25PM

You may know me as the guy who posts a lot of controversial stuff about LW and MIRI. I don't enjoy doing this and do not want to continue with it. One reason is that the debate is turning into a flame war. Another is that I have noticed it affecting my health negatively (e.g. my high blood pressure; I actually suffered single-sided hearing loss over this xkcd comic on Friday).

This all started in 2010 when I encountered something I perceived to be wrong. But the specifics are irrelevant for this post. The problem is that ever since then there have been various reasons that made me feel forced to continue the controversy. Sometimes it was the urge to clarify what I wrote; at other times I thought it was necessary to respond to a reply I received. What matters is that I couldn't stop. But I believe that stopping is now possible, given my health concerns.

One problem is that I don't want to leave possible misrepresentations behind. And there very likely exist misrepresentations. There are many reasons for this, but I can assure you that I never deliberately lied and that I never deliberately tried to misrepresent anyone. The main reason might be that I feel very easily overwhelmed and never had the ability to force myself to invest the time necessary to do something correctly if I don't really enjoy doing it (for the same reason I probably failed school). Which means that most comments and posts are written in a tearing hurry, akin to a reflexive withdrawal from a painful stimulus.

<tldr>

I hate this fight and want to end it once and for all. I don't expect you to take my word for it. So instead, here is an offer:

I am willing to post counterstatements, endorsed by MIRI, of any length and content[1] at the top of any of my blog posts. You can either post them in the comments below or send me an email (da [at] kruel.co).

</tldr>

I have no idea if MIRI believes this to be worthwhile, but I couldn't think of a better way to resolve this dilemma in a way that everyone can live with happily. I am open to suggestions that don't stress me too much (including suggestions about how to prove that I am trying to be honest).

You obviously don't need to read all my posts. It can also be a general statement.

I am also aware that LW and MIRI are bothered by RationalWiki. As you can easily check from the fossil record, I have at various points tried to correct specific problems. But, for the reasons given above, I have trouble investing the time to go through every sentence to find possible errors and to correct them in such a way that the edit is not reverted and that people who feel offended are satisfied.

[1] There are obviously some caveats regarding the content, such as no nude photos of Yudkowsky ;-)

What do you mean by Pascal's mugging?

4 XiXiDu 20 November 2014 04:38PM

Some people[1] are now using the term Pascal's mugging as a label for any scenario with a large associated payoff and a small or unstable probability estimate, a combination that can trigger the absurdity heuristic.

Consider the scenarios listed below: (a) Do these scenarios have something in common? (b) Are any of these scenarios cases of Pascal's mugging?

(1) Fundamental physical operations -- atomic movements, electron orbits, photon collisions, etc. -- could collectively deserve significant moral weight. The total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them or a tiny probability of them mattering morally will create a large expected moral value. [Source]

(2) Cooling something to a temperature close to absolute zero might be an existential risk. Given our ignorance we cannot rationally give zero probability to this possibility, and probably not even give it less than 1% (since that is about the natural lowest error rate of humans on anything). Anybody saying it is less likely than one in a million is likely very overconfident. [Source]

(3) GMOs might introduce “systemic risk” to the environment. The chance of ecocide, or the destruction of the environment and potentially humans, increases incrementally with each additional transgenic trait introduced into the environment. The downside risks are so hard to predict -- and so potentially bad -- that it is better to be safe than sorry. The benefits, no matter how great, do not merit even a tiny chance of an irreversible, catastrophic outcome. [Source]

(4) Each time you say abracadabra, 3^^^^3 simulations of humanity experience a positive singularity.

If you read up on any of the first three scenarios, by clicking on the provided links, you will notice that there are a bunch of arguments in support of these conjectures. And yet I feel that all three have something important in common with scenario four, which I would call a clear case of Pascal's mugging.
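
To make the structure explicit, here is a minimal sketch (my own illustration, not taken from any of the linked sources) of why naive expected-value reasoning is vulnerable to such scenarios: an astronomical payoff swamps even a negligible probability. Since 3^^^^3 cannot be represented directly, an arbitrary stand-in of 10^100 is used.

```python
def expected_value(probability, payoff):
    """Expected value of a single outcome: probability times payoff."""
    return probability * payoff

# A well-evidenced, mundane intervention: high probability, modest payoff.
mundane = expected_value(0.9, 10)

# A mugging-style scenario: a negligible, unstable probability attached to a
# payoff so large that it swamps every mundane consideration. The number
# 3^^^^3 cannot be represented, so 1e100 serves as a (vastly smaller) stand-in.
mugging = expected_value(1e-30, 1e100)

print(mundane)   # 9.0
print(mugging)   # 1e+70 -- the speculative scenario dominates the calculation
```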

I offer three possibilities of what these and similar scenarios have in common:

  • Probability estimates of the scenario are highly unstable and highly divergent between informed people who have spent a similar amount of resources researching it.
  • The scenario demands that skeptics either falsify it or accept its decision-relevant consequences. The scenario, however, is either unfalsifiable by definition, too vague, or almost impossibly difficult to falsify.
  • There is no or very little direct empirical evidence in support of the scenario.[2]

In any case, I admit that it is possible that I just wanted to bring the first three scenarios to your attention. I stumbled upon each very recently and found them to be highly..."amusing".

 

[1] I am also guilty of doing this. But what exactly is wrong with using the term in that way? What's the highest probability for which the term is still applicable? Can you offer a better term?

[2] One would have to define what exactly counts as "direct empirical evidence". But I think that it is pretty intuitive that there exists a meaningful difference between the risk of an asteroid that has been spotted with telescopes and a risk that is solely supported by a priori arguments.

Computer Science and Programming: Links and Resources

29 XiXiDu 29 May 2012 01:17PM

Updated Version @ LW Wiki: wiki.lesswrong.com/wiki/Programming_resources


How Computers Work

1. CODE The Hidden Language of Computer Hardware and Software

The book intends to show a layman the basic mechanical principles of how computers work, instead of merely summarizing how the different parts relate. The author starts with basic principles of language and logic and then demonstrates how they can be embodied by electrical circuits, and these principles give him an opening to describe in principle how computers work mechanically without requiring very much technical knowledge. Although it is not possible in a medium-sized book for laymen to give a complete technical description of a computer, he describes how and why it is possible that elaborate electronics can act in the ways computers do. In the introduction, he contrasts his own work with those books which "include pictures of trains full of 1s and 0s."

2. The Elements of Computing Systems: Building a Modern Computer from First Principles

Indeed, the best way to understand how computers work is to build one from scratch, and this textbook leads students through twelve chapters and projects that gradually build a basic hardware platform and a modern software hierarchy from the ground up. In the process, the students gain hands-on knowledge of hardware architecture, operating systems, programming languages, compilers, data structures, algorithms, and software engineering. Using this constructive approach, the book exposes a significant body of computer science knowledge and demonstrates how theoretical and applied techniques taught in other courses fit into the overall picture.

3. The Write Great Code Series (A Solid Foundation in Software Engineering for Programmers)

Write Great Code Volume I: Understanding the Machine

This, the first of four volumes, teaches important concepts of machine organization in a language-independent fashion, giving programmers what they need to know to write great code in any language, without the usual overhead of learning assembly language to master this topic. The Write Great Code series will help programmers make wiser choices with respect to programming statements and data types when writing software.

Write Great Code Volume II: Thinking Low-Level, Writing High-Level

...a good question to ask might be "Is there some way to write high-level language code to help the compiler produce high-quality machine code?" The answer to this question is "yes" and Write Great Code, Volume II, will teach you how to write such high-level code. This volume in the Write Great Code series describes how compilers translate statements into machine code so that you can choose appropriate high-level programming language statements to produce executable code that is almost as good as hand-optimized assembly code.

4. The Art of Assembly Language Programming

Assembly is a low-level programming language that's one step above a computer's native machine language. Although assembly language is commonly used for writing device drivers, emulators, and video games, many programmers find its somewhat unfriendly syntax intimidating to learn and use.

Since 1996, Randall Hyde's The Art of Assembly Language has provided a comprehensive, plain-English, and patient introduction to assembly for non-assembly programmers. Hyde's primary teaching tool, High Level Assembler (or HLA), incorporates many of the features found in high-level languages (like C, C++, and Java) to help you quickly grasp basic assembly concepts. HLA lets you write true low-level code while enjoying the benefits of high-level language programming.

5. The Art of Computer Programming

This work is not about computer programming in the narrow sense, but about the algorithms and methods which lie at the heart of most computer systems.

At the end of 1999, these books were named among the best twelve physical-science monographs of the century by American Scientist, along with: Dirac on quantum mechanics, Einstein on relativity, Mandelbrot on fractals, Pauling on the chemical bond, Russell and Whitehead on foundations of mathematics, von Neumann and Morgenstern on game theory, Wiener on cybernetics, Woodward and Hoffmann on orbital symmetry, Feynman on quantum electrodynamics, Smith on the search for structure, and Einstein's collected papers.

An Overview of Computer Programming

1. Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages

Ruby, Io, Prolog, Scala, Erlang, Clojure, Haskell. With Seven Languages in Seven Weeks, by Bruce A. Tate, you'll go beyond the syntax, and beyond the 20-minute tutorial you'll find someplace online. This book has an audacious goal: to present a meaningful exploration of seven languages within a single book. Rather than serve as a complete reference or installation guide, Seven Languages hits what's essential and unique about each language. Moreover, this approach will help teach you how to grok new languages.

For each language, you'll solve a nontrivial problem, using techniques that show off the language's most important features. As the book proceeds, you'll discover the strengths and weaknesses of the languages, while dissecting the process of learning languages quickly--for example, finding the typing and programming models, decision structures, and how you interact with them.

2. Programming Language Pragmatics

The ubiquity of computers in everyday life in the 21st century justifies the centrality of programming languages to computer science education.  Programming languages is the area that connects the theoretical foundations of computer science, the source of problem-solving algorithms, to modern computer architectures on which the corresponding programs produce solutions.  Given the speed with which computing technology advances in this post-Internet era, a computing textbook must present a structure for organizing information about a subject, not just the facts of the subject itself.  In this book, Michael Scott broadly and comprehensively presents the key concepts of programming languages and their implementation, in a manner appropriate for computer science majors. 

3. An Introduction to Functional Programming Through Lambda Calculus

This well-respected text offers an accessible introduction to functional programming concepts and techniques for students of mathematics and computer science. The treatment is as nontechnical as possible, assuming no prior knowledge of mathematics or functional programming. Numerous exercises appear throughout the text, and all problems feature complete solutions.

4. How to Design Programs (An Introduction to Computing and Programming)

This introduction to programming places computer science in the core of a liberal arts education. Unlike other introductory books, it focuses on the program design process. This approach fosters a variety of skills--critical reading, analytical thinking, creative synthesis, and attention to detail--that are important for everyone, not just future computer programmers. The book exposes readers to two fundamentally new ideas. First, it presents program design guidelines that show the reader how to analyze a problem statement; how to formulate concise goals; how to make up examples; how to develop an outline of the solution, based on the analysis; how to finish the program; and how to test. Each step produces a well-defined intermediate product. Second, the book comes with a novel programming environment, the first one explicitly designed for beginners.

5. Structure and Interpretation of Computer Programs

Using a dialect of the Lisp programming language known as Scheme, the book explains core computer science concepts, including abstraction, recursion, interpreters and metalinguistic abstraction, and teaches modular programming.

The book also introduces a practical implementation of the register machine concept, defining and developing an assembler for such a construct, which is used as a virtual machine for the implementation of interpreters and compilers in the book, and as a testbed for illustrating the implementation and effect of modifications to the evaluation mechanism. Working Scheme systems based on the design described in this book are quite common student projects.

Computer Science and Computation

1. The Annotated Turing: A Guided Tour Through Alan Turing's Historic Paper on Computability and the Turing Machine

Mathematician Alan Turing invented an imaginary computer known as the Turing Machine; in an age before computers, he explored the concept of what it meant to be computable, creating the field of computability theory in the process, a foundation of present-day computer programming.

The book expands Turing’s original 36-page paper with additional background chapters and extensive annotations; the author elaborates on and clarifies many of Turing’s statements, making the original difficult-to-read document accessible to present day programmers, computer science majors, math geeks, and others.

2. New Turing Omnibus: 66 Excursions in Computer Science

This text provides a broad introduction to the realm of computers. Updated and expanded, "The New Turing Omnibus" offers 66 concise articles on the major points of interest in computer science theory, technology and applications. New for this edition are: updated information on algorithms, detecting primes, noncomputable functions, and self-replicating computers - plus completely new sections on the Mandelbrot set, genetic algorithms, the Newton-Raphson Method, neural networks that learn, DOS systems for personal computers, and computer viruses.

3. Udacity

Udacity is a private educational organization founded by Sebastian Thrun, David Stavens, and Mike Sokolsky, with the stated goal of democratizing education.

It is the outgrowth of free computer science classes offered in 2011 through Stanford University. As of May 2012 Udacity has six active courses.

The first two courses launched on Udacity both started on 20 February 2012: "CS 101: Building a Search Engine", taught by Dave Evans from the University of Virginia, and "CS 373: Programming a Robotic Car", taught by Thrun. Both courses use Python.

4. Introduction to Artificial Intelligence

A bold experiment in distributed education, "Introduction to Artificial Intelligence" will be offered free and online to students worldwide from October 10th to December 18th 2011. The course will include feedback on progress and a statement of accomplishment. Taught by Sebastian Thrun and Peter Norvig, the curriculum draws from that used in Stanford's introductory Artificial Intelligence course. The instructors will offer similar materials, assignments, and exams.

Artificial Intelligence is the science of making computer software that reasons about the world around it. Humanoid robots, Google Goggles, self-driving cars, even software that suggests music you might like to hear are all examples of AI. In this class, you will learn how to create this software from two of the leaders in the field. Class begins October 10.

Supplementary Resources: Mathematics and Algorithms

1. Concrete Mathematics: A Foundation for Computer Science

This book introduces the mathematics that supports advanced computer programming and the analysis of algorithms. The primary aim of its well-known authors is to provide a solid and relevant base of mathematical skills - the skills needed to solve complex problems, to evaluate horrendous sums, and to discover subtle patterns in data. It is an indispensable text and reference not only for computer scientists - the authors themselves rely heavily on it! - but for serious users of mathematics in virtually every discipline.

2. Algorithms

The textbook Algorithms, 4th Edition by Robert Sedgewick and Kevin Wayne surveys the most important algorithms and data structures in use today.

3. Introduction to Algorithms

Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.

Practice

1. Project Euler

Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve. Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems.
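
As a concrete example of the kind of problem involved, the first problem on the site asks for the sum of all multiples of 3 or 5 below 1000; a few lines of Python (one of many possible solutions, shown here only as an illustration) suffice:

```python
# Project Euler, Problem 1: find the sum of all natural numbers below 1000
# that are multiples of 3 or 5.
print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168
```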

2. The Python Challenge

Python Challenge is a game in which each level can be solved by a bit of (Python) programming.

3. CodeChef Programming Competition

CodeChef is a global programming community. We host contests, trainings and events for programmers around the world. Our goal is to provide a platform for programmers everywhere to meet, compete, and have fun.

4. Write your own programs.

Python

PyScripter

An open-source Python Integrated Development Environment (IDE)

Khan Academy

Introduction to programming and computer science (using Python)

1. Invent Your Own Computer Games with Python

“Invent Your Own Computer Games with Python” is a free book (as in, open source) and a free eBook (as in, no cost to download) that teaches you how to program in the Python programming language. Each chapter gives you the complete source code for a new game, and then teaches the programming concepts from the example.

“Invent with Python” was written to be understandable by kids as young as 10 to 12 years old, although it is great for anyone of any age who has never programmed before.

2. Learn Python The Hard Way

Have you always wanted to learn how to code but never thought you could? Are you looking to build a foundation for more complex coding? Do you want to challenge your brain in a new way? Then Learn Python the Hard Way is the book for you.

3. Python for Software Design: How to Think Like a Computer Scientist

Think Python is an introduction to Python programming for beginners. It starts with basic concepts of programming, and is carefully designed to define all terms when they are first used and to develop each new concept in a logical progression. Larger pieces, like recursion and object-oriented programming, are divided into a sequence of smaller steps and introduced over the course of several chapters.

4. Python Programming: An Introduction to Computer Science

This book is suitable for use in a university-level first course in computing (CS1), as well as the increasingly popular course known as CS0. It is difficult for many students to master basic concepts in computer science and programming. A large portion of the confusion can be blamed on the complexity of the tools and materials that are traditionally used to teach CS1 and CS2. This textbook was written with a single overarching goal: to present the core concepts of computer science as simply as possible without being simplistic.

5. Practical Programming: An Introduction to Computer Science Using Python

Computers are used in every part of science from ecology to particle physics. This introduction to computer science continually reinforces those ties by using real-world science problems as examples. Anyone who has taken a high school science class will be able to follow along as the book introduces the basics of programming, then goes on to show readers how to work with databases, download data from the web automatically, build graphical interfaces, and most importantly, how to think like a professional programmer.

6. The Quick Python Book

The Quick Python Book, Second Edition, is a clear, concise introduction to Python 3, aimed at programmers new to Python. This updated edition includes all the changes in Python 3, itself a significant shift from earlier versions of Python.

The book begins with basic but useful programs that teach the core features of syntax, control flow, and data structures. It then moves to larger applications involving code management, object-oriented programming, web development, and converting code from earlier versions of Python.

Haskell

The Haskell Platform

The Haskell Platform is the easiest way to get started with programming Haskell. It comes with all you need to get up and running. Think of it as "Haskell: batteries included".

1. Haskell in 5 steps

This page will help you get started as quickly as possible.

2. Learn Haskell in 10 minutes

3. A brief introduction to Haskell

4. Programming in Haskell

Haskell is one of the leading languages for teaching functional programming, enabling students to write simpler and cleaner code, and to learn how to structure and reason about programs. This introduction is ideal for beginners: it requires no previous programming experience and all concepts are explained from first principles via carefully chosen examples. Each chapter includes exercises that range from the straightforward to extended projects, plus suggestions for further reading on more advanced topics. The author is a leading Haskell researcher and instructor, well-known for his teaching skills. The presentation is clear and simple, and benefits from having been refined and class-tested over several years. The result is a text that can be used with courses, or for self-learning. Features include freely accessible Powerpoint slides for each chapter, solutions to exercises and examination questions (with solutions) available to instructors, and a downloadable code that's fully compliant with the latest Haskell release.

5. Learn You a Haskell for Great Good!

Learn You a Haskell, the funkiest way to learn Haskell, which is the best functional programming language around. You may have heard of it. This guide is meant for people who have programmed already, but have yet to try functional programming.

6. Real World Haskell

This easy-to-use, fast-moving tutorial introduces you to functional programming with Haskell. You'll learn how to use Haskell in a variety of practical ways, from short scripts to large and demanding applications. Real World Haskell takes you through the basics of functional programming at a brisk pace, and then helps you increase your understanding of Haskell in real-world issues like I/O, performance, dealing with data, concurrency, and more as you move through each chapter.

7. The Haskell Road to Logic, Maths and Programming

The textbook by Doets and van Eijck puts the Haskell programming language systematically to work for presenting a major piece of logic and mathematics. The reader is taken through chapters on basic logic, proof recipes, sets and lists, relations and functions, recursion and co-recursion, the number systems, polynomials and power series, ending with Cantor's infinities. The book uses Haskell for the executable and strongly typed manifestation of various mathematical notions at the level of declarative programming. The book adopts a systematic but relaxed mathematical style (definition, example, exercise, ...); the text is very pleasant to read due to a small amount of anecdotal information, and due to the fact that definitions are fluently integrated in the running text. An important goal of the book is to get the reader acquainted with reasoning about programs. 

Common Lisp

1. Land of Lisp: Learn to Program in Lisp, One Game at a Time!

Lisp has been hailed as the world's most powerful programming language, but its cryptic syntax and academic reputation can be enough to scare off even experienced programmers. Those dark days are finally over—Land of Lisp brings the power of functional programming to the people!

With his brilliantly quirky comics and out-of-this-world games, longtime Lisper Conrad Barski teaches you the mysteries of Common Lisp. You'll start with the basics, like list manipulation, I/O, and recursion, then move on to more complex topics like macros, higher order programming, and domain-specific languages. Then, when your brain overheats, you can kick back with an action-packed comic book interlude!

2. Practical Common Lisp

Practical Common Lisp presents a thorough introduction to Common Lisp, providing you with an overall understanding of the language features and how they work. Over a third of the book is devoted to practical examples such as the core of a spam filter and a web application for browsing MP3s and streaming them via the Shoutcast protocol to any standard MP3 client software (e.g., iTunes, XMMS, or WinAmp). In other "practical" chapters, author Peter Seibel demonstrates how to build a simple but flexible in-memory database, how to parse binary files, and how to build a unit test framework in 26 lines of code.

3. ANSI Common LISP

Teaching users new and more powerful ways of thinking about programs, this two-in-one text contains a tutorial—full of examples—that explains all the essential concepts of Lisp programming, plus an up-to-date summary of ANSI Common Lisp, listing every operator in the language. Informative and fun, it gives users everything they need to start writing programs in Lisp both efficiently and effectively, and highlights such innovative Lisp features as automatic memory management, manifest typing, closures, and more. Dividing material into two parts, the tutorial half of the book covers subject-by-subject the essential core of Common Lisp, and sums up lessons of preceding chapters in two examples of real applications: a backward-chainer, and an embedded language for object-oriented programming. Consisting of three appendices, the summary half of the book gives source code for a selection of widely used Common Lisp operators, with definitions that offer a comprehensive explanation of the language and provide a rich source of real examples; summarizes some differences between ANSI Common Lisp and Common Lisp as it was originally defined in 1984; and contains a concise description of every function, macro, and special operator in ANSI Common Lisp. The book concludes with a section of notes containing clarifications, references, and additional code.

4. Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp

Paradigms of AI Programming is the first text to teach advanced Common Lisp techniques in the context of building major AI systems. By reconstructing authentic, complex AI programs using state-of-the-art Common Lisp, the book teaches students and professionals how to build and debug robust practical programs, while demonstrating superior programming style and important AI concepts. The author strongly emphasizes the practical performance issues involved in writing real working programs of significant size. Chapters on troubleshooting and efficiency are included, along with a discussion of the fundamentals of object-oriented programming and a description of the main CLOS functions. This volume is an excellent text for a course on AI programming, a useful supplement for general AI courses and an indispensable reference for the professional programmer.

5. Let Over Lambda

Let Over Lambda is one of the most hardcore computer programming books out there. Starting with the fundamentals, it describes the most advanced features of the most advanced language: COMMON LISP. The point of this book is to expose you to ideas that you might otherwise never be exposed to.

6. Lisp as the Maxwell’s equations of software

These are Maxwell’s equations. Just four compact equations. With a little work it’s easy to understand the basic elements of the equations – what all the symbols mean, how we can compute all the relevant quantities, and so on. But while it’s easy to understand the elements of the equations, understanding all their consequences is another matter. Inside these equations is all of electromagnetism – everything from antennas to motors to circuits. If you think you understand the consequences of these four equations, then you may leave the room now, and you can come back and ace the exam at the end of semester.

R

RStudio

RStudio™ is a free and open source integrated development environment (IDE) for R. You can run it on your desktop (Windows, Mac, or Linux) or even over the web using RStudio Server.

1. R Videos

2. R Tutorials

3. R Tutorials from Universities Around the World

Here is a list of FREE R tutorials hosted on the official websites of universities around the world.

4. R-bloggers

Here you will find daily news and tutorials about R, contributed by over 300 bloggers.

5. The Art of R Programming: A Tour of Statistical Software Design

R is the world's most popular language for developing statistical software: Archaeologists use it to track the spread of ancient civilizations, drug companies use it to discover which medications are safe and effective, and actuaries use it to assess financial risks and keep economies running smoothly.

The Art of R Programming takes you on a guided tour of software development with R, from basic types and data structures to advanced topics like closures, recursion, and anonymous functions. No statistical knowledge is required, and your programming skills can range from hobbyist to pro.

Along the way, you'll learn about functional and object-oriented programming, running mathematical simulations, and rearranging complex data into simpler, more useful formats.

6. Introduction to Statistical Thinking (With R, Without Calculus)

The target audience for this book is college students who are required to learn statistics, students with little background in mathematics and often no motivation to learn more.

7. Doing Bayesian Data Analysis: A Tutorial with R and BUGS

There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis obtainable to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all scenarios addressed by non-Bayesian textbooks--t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis).

This book is intended for first year graduate students or advanced undergraduates. It provides a bridge between undergraduate training and modern Bayesian methods for data analysis, which is becoming the accepted research standard. Prerequisite is knowledge of algebra and basic calculus. Free software now includes programs in JAGS, which runs on Macintosh, Linux, and Windows.

Question about brains and big numbers

1 XiXiDu 17 April 2012 11:57AM

From time to time I encounter people who claim that our brains are really slow compared to even an average laptop computer and can't process big numbers.

At the risk of revealing my complete lack of knowledge of neural networks and how the brain works, I want to ask whether this is actually true.

It took massive amounts of number crunching to create movies like James Cameron's Avatar. Yet I am able to create more realistic and genuine worlds in front of my mind's eye, on the fly. I can even simulate other agents. For example, I can easily simulate sexual intercourse between me and another human, including tactile and olfactory information.

I am further able to run real-time egocentric world-simulations to extrapolate and predict the behavior of physical systems and other agents. You can do that too. Having a discussion or playing football are two examples.

Yet any computer can outperform me at simple calculations.

But it seems to me, maybe naively so, that most of my human abilities involve massive amounts of number crunching that no desktop computer could do.
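
A rough back-of-the-envelope comparison illustrates why this intuition is not absurd. The figures below are commonly cited order-of-magnitude estimates, used here purely as assumptions for the sake of the calculation:

```python
# Commonly cited order-of-magnitude estimates; treat these as illustrative
# assumptions rather than established facts.
neurons = 1e11              # roughly 100 billion neurons in a human brain
synapses_per_neuron = 1e4   # roughly 10,000 synapses per neuron
signals_per_second = 1e2    # roughly 100 signals per second, as an upper bound

brain_events_per_second = neurons * synapses_per_neuron * signals_per_second
laptop_flops = 1e11         # ~100 GFLOPS, a generous figure for a 2012 laptop

print(f"Brain (rough):  {brain_events_per_second:.0e} synaptic events/s")
print(f"Laptop (rough): {laptop_flops:.0e} FLOPS")
print(f"Ratio:          {brain_events_per_second / laptop_flops:.0e}")
```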

So what's the difference? Can someone point me to some digestible material that I can read up on to dissolve possible confusions I have with respect to my question?

Skoll World Forum: Catastrophic Risk and Threats to the Global Commons

3 XiXiDu 05 April 2012 09:44AM

More: Skoll Global Threats Fund | To Safeguard Humanity from Global Threats

The panel surfaced a number of issues that contribute to our inability to date to make serious strides on global challenges, including income inequality, failure of governance and lack of leadership. It also explored some deeper issues around psyche and society – people’s inability to convert information to wisdom, the loss of sense of self, the challenges of hyperconnectivity, and questions about economic models and motivations that have long underpinned concepts of growth and wellbeing. The session was filmed, and we’ll make that link public once the file is available. In the meantime, here are some of the more memorable quotes (which may not be verbatim, but this is how I wrote them down):

“When people say something is impossible, that just means it’s hard.”

“Inequality is becoming an existential threat.”

“We’re at a crossroads.  We can make progress against these big issues or we can kill ourselves.”

“We need inclusive globalization, to give everyone a stake in the future.”

“Fatalism is our most deadly adversary.”

“What we’re lacking is not IQ, but wisdom.”

“We need to tap into the timeless to solve the urgent.”

What we mean by global threats

Global threats have the potential to kill or debilitate very large numbers of people or cause significant economic or social dislocation or paralysis throughout the world. Global threats cannot be solved by any one country; they require some sort of a collective response. Global threats are often non-linear, and are likely to become exponentially more difficult to manage if we don’t begin making serious strides in the right direction in the next 5-10 years.

More on existential risks: wiki.lesswrong.com/wiki/Existential_risk

Organisations

A list of organisations and charities concerned with existential risk research.

Resources

A Primer On Risks From AI

15 XiXiDu 24 March 2012 02:32PM

The Power of Algorithms

Evolutionary processes are the most evident example of the power of simple algorithms [1][2][3][4][5].
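
As a toy illustration of what "the power of simple algorithms" means here (my own sketch in the style of Dawkins' "weasel" program, not taken from the linked references), the following few lines evolve a target string using nothing but random mutation and selection:

```python
import random
import string

TARGET = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    """Number of characters that already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Copy the candidate, changing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from random noise; each generation keep the fittest of 100 mutated
# offspring (or the parent itself). Cumulative selection reaches the target
# after a modest number of generations, where blind chance essentially never would.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
    generation += 1

print(f"Reached the target in {generation} generations.")
```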

The field of evolutionary biology has gathered a vast amount of evidence [6] that established evolution as the process that explains the local decrease in entropy [7], the complexity of life.

Since it can be conclusively shown that all life is an effect of an evolutionary process, it is implicit that everything we do not understand about living beings is also an effect of evolution.

We might not understand the nature of intelligence [8] and consciousness [9] but we do know that they are the result of an optimization process that is neither intelligent nor conscious.

Therefore we know that it is possible for a physical optimization process to culminate in the creation of more advanced processes that feature superior qualities.

One of these qualities is the human ability to observe and improve the optimization process that created us. The most obvious example is science [10].

Science can be thought of as a civilization-level self-improvement method. It allows us to work together in a systematic and efficient way and to accelerate the rate at which further improvements are made.

The Automation of Science

We know that optimization processes that can create improved versions of themselves are possible, even without an explicit understanding of their own workings, as exemplified by natural selection.

We know that optimization processes can lead to self-reinforcing improvements, as exemplified by the adaptation of the scientific method [11] as an improved evolutionary process and successor of natural selection.

This raises questions about the continuation of this self-reinforcing feedback cycle and its possible implications.

One possibility is to automate science [12][13] and apply it to itself and its improvement.

But science is a tool, and its bottleneck is its users: humans, the biased [14] effect of the blind idiot god that is evolution.

Therefore the next logical step is to use science to figure out how to replace humans with a better version of themselves: artificial general intelligence.

Artificial general intelligence, that can recursively optimize itself [15], is the logical endpoint of various converging and self-reinforcing feedback cycles.

Risks from AI

Will we be able to build an artificial general intelligence? Yes, sooner or later.

Even the unintelligent, unconscious and aimless process of natural selection was capable of creating goal-oriented, intelligent and conscious agents that can think ahead, jump fitness gaps and improve upon the process that created them to engage in prediction and direct experimentation.

The question is, what are the possible implications of the invention of an artificial, fully autonomous, intelligent and goal-oriented optimization process?

One good bet is that such an agent will recursively improve its most versatile, and therefore instrumentally useful, resource. It will improve its general intelligence, that is, its cross-domain optimization power.

Since it is unlikely that human intelligence is the optimum, the positive feedback effect that results from using intelligence amplification to amplify intelligence is likely to lead to a level of intelligence that is generally more capable than the human level.

Humans are unlikely to be the most efficient thinkers because evolution is mindless and has no goals. Evolution did not actively try to create the smartest thing possible.

Evolution is, further, not limitlessly creative: each step of an evolutionary design must increase the fitness of its host. This makes it probable that there are artificial mind designs that can do what no product of natural selection could accomplish, since an intelligent artificer does not rely on the incremental fitness of each step in the development process.

It is actually possible that human general intelligence is the bare minimum, because the human level of intelligence might have been sufficient to both survive and reproduce, so that no further evolutionary pressure existed to select for even higher levels of general intelligence.

The implications of this possibility might be the creation of an intelligent agent that is more capable than humans in every sense. Maybe because it directly employs superior approximations of our best formal methods, which tell us how to update based on evidence and how to choose between various actions. Or maybe it will simply think faster. It doesn’t matter.

What matters is that a superior intellect is probable and that it will be better than us at discovering knowledge and inventing new technology, technology that will make it even more powerful and likely invincible.

And that is the problem. We might be unable to control such a superior being, just as a group of chimpanzees is unable to stop a company from clearing its forest [16].

But even if such a being is only slightly more capable than us, we might find ourselves at its mercy nonetheless.

Human history provides us with many examples [17][18][19] that make it abundantly clear that even the slightest advance can enable one group to dominate others.

What happens is that the dominant group imposes its values on the others. This in turn raises the question of what values an artificial general intelligence might have and the implications of those values for us.

Due to our evolutionary origins, our struggle for survival and the necessity to cooperate with other agents, we are equipped with many values and a concern for the welfare of others [20].

The information theoretic complexity [21][22] of our values is very high. This means that it is highly unlikely for similar values to automatically arise in agents that are the product of intelligent design, agents that never underwent the millions of years of competition with other agents that equipped humans with altruism and general compassion.

But that does not mean that an artificial intelligence won’t have any goals [23][24]. It just means that those goals will be simple and their realization remorseless [25].

An artificial general intelligence will do whatever is implied by its initial design. And we will be helpless to stop it from achieving its goals. Goals that won’t automatically respect our values [26].

A likely implication is the total extinction of all of humanity [27].

Further Reading

References

[1] Genetic Algorithms and Evolutionary Computation, talkorigins.org/faqs/genalg/genalg.html
[2] Fixing software bugs in 10 minutes or less using evolutionary computation, genetic-programming.org/hc2009/1-Forrest/Forrest-Presentation.pdf
[3] Automatically Finding Patches Using Genetic Programming, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Patches.pdf
[4] A Genetic Programming Approach to Automated Software Repair, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Repair.pdf
[5] GenProg: A Generic Method for Automatic Software Repair, virginia.edu/~weimer/p/weimer-tse2012-genprog.pdf
[6] 29+ Evidences for Macroevolution (The Scientific Case for Common Descent), talkorigins.org/faqs/comdesc/
[7] Thermodynamics, Evolution and Creationism, talkorigins.org/faqs/thermo.html
[8] A Collection of Definitions of Intelligence, vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf
[9] plato.stanford.edu/entries/consciousness/
[10] en.wikipedia.org/wiki/Science
[11] en.wikipedia.org/wiki/Scientific_method
[12] The Automation of Science, sciencemag.org/content/324/5923/85.abstract
[13] Computer Program Self-Discovers Laws of Physics, wired.com/wiredscience/2009/04/newtonai/
[14] List of cognitive biases, en.wikipedia.org/wiki/List_of_cognitive_biases
[15] Intelligence explosion, wiki.lesswrong.com/wiki/Intelligence_explosion
[16] 1% with Neil deGrasse Tyson, youtu.be/9nR9XEqrCvw
[17] Mongol military tactics and organization, en.wikipedia.org/wiki/Mongol_military_tactics_and_organization
[18] Wars of Alexander the Great, en.wikipedia.org/wiki/Wars_of_Alexander_the_Great
[19] Spanish colonization of the Americas, en.wikipedia.org/wiki/Spanish_colonization_of_the_Americas
[20] A Quantitative Test of Hamilton's Rule for the Evolution of Altruism, plosbiology.org/article/info:doi/10.1371/journal.pbio.1000615
[21] Algorithmic information theory, scholarpedia.org/article/Algorithmic_information_theory
[22] Algorithmic probability, scholarpedia.org/article/Algorithmic_probability
[23] The Nature of Self-Improving Artificial Intelligence, selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
[24] The Basic AI Drives, selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
[25] Paperclip maximizer, wiki.lesswrong.com/wiki/Paperclip_maximizer
[26] Friendly artificial intelligence, wiki.lesswrong.com/wiki/Friendly_artificial_intelligence
[27] Existential Risk, existential-risk.org

Reply to Yvain on 'The Futility of Intelligence'

-5 XiXiDu 17 March 2012 01:28PM

This is a reply to a comment by Yvain and everyone who might have misunderstood what problem I tried to highlight.

Here is the problem. You can't estimate the probability and magnitude of the advantage an AI will have if you are using something that is as vague as the concept of 'intelligence'.

Here is a case that bears some similarity and might shed light on what I am trying to explain:

At his recent keynote speech at the New York Television Festival, former Star Trek writer and creator of the re-imagined Battlestar Galactica Ron Moore revealed the secret formula to writing for Trek.

He described how the writers would just insert "tech" into the scripts whenever they needed to resolve a story or plot line, then they'd have consultants fill in the appropriate words (aka technobabble) later.

"It became the solution to so many plot lines and so many stories," Moore said. "It was so mechanical that we had science consultants who would just come up with the words for us and we'd just write 'tech' in the script. You know, Picard would say 'Commander La Forge, tech the tech to the warp drive.' I'm serious. If you look at those scripts, you'll see that."

Moore then went on to describe how a typical script might read before the science consultants did their thing:

La Forge: "Captain, the tech is overteching."

Picard: "Well, route the auxiliary tech to the tech, Mr. La Forge."

La Forge: "No, Captain. Captain, I've tried to tech the tech, and it won't work."

Picard: "Well, then we're doomed."

"And then Data pops up and says, 'Captain, there is a theory that if you tech the other tech ... '" Moore said. "It's a rhythm and it's a structure, and the words are meaningless. It's not about anything except just sort of going through this dance of how they tech their way out of it."

The use of 'intelligence' is as misleading and dishonest in evaluating risks from AI as the use of 'tech' in Star Trek.

It is true that 'intelligence', just as 'technology', has some explanatory power. Just like 'emergence' has some explanatory power. As in "the morality of an act is an emergent phenomenon of a physical system: it refers to the physical relations among the components of that system". But it does not help to evaluate the morality of an act or to predict whether a given physical system will exhibit moral properties.

What are YOU doing against risks from AI?

-5 XiXiDu 17 March 2012 11:56AM

This is directed at those who agree with SIAI but are not doing everything they can to support their mission.

Why are you not doing more?

Comments in which people proclaim that they have contributed money to SIAI are upvoted 50 times or more. 180 people voted for 'unfriendly AI' as the most fearsome risk.

If you are one of those people and are not fully committed to the cause, I am asking you, why are you not doing more?

The Futility of Intelligence

-2 XiXiDu 15 March 2012 02:25PM

The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?

I name artificial intelligence or thinking machines - usually defined as the study of systems whose high-level behaviors arise from "thinking" or the interaction of many low-level elements.  (R. J. Sternberg quoted in a paper by Shane Legg:  “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that allows for infinitely many degrees of intelligence to fit every phenomenon in our universe above the level of individual quarks, which is part of the problem.  Imagine pointing to a chess computer and saying "It's not a stone!"  Does that feel like an explanation?  No?  Then neither should saying "It's a thinking machine!"

It's the noun "intelligence" that I protest, rather than to "evoke a dynamic state sequence from a machine by computing an algorithm".  There's nothing wrong with saying "X computes algorithm Y", where Y is some specific, detailed flowchart that represents an algorithm or process.  "Thinking about" is another legitimate phrase that means exactly the same thing:  The machine is thinking about a problem, according to an specific algorithm. The machine is thinking about how to put elements of a list in a certain order, according to the a specific algorithm called quicksort.

Now suppose I should say that a problem is explained by "thinking" or that the order of elements in a list is the result of a "thinking machine", and claim that as my explanation.

The phrase "evoke a dynamic state sequence from a machine by computing an algorithm" is acceptable, just like "thinking about" or "is caused by" are acceptable, if the phrase precedes some specification to be judged on its own merits.

However, this is not the way "intelligence" is commonly used. "Intelligence" is commonly used as an explanation in its own right.

I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its "advantage" is "intelligence"?  You can make no new predictions.  You do not know anything about the behavior of real-world artificial general intelligence that you did not know before.  It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed.  The hypothesis has no moving parts - there's no detailed internal model to manipulate.  Those who proffer the hypothesis of "intelligence" confess their ignorance of the internals, and take pride in it; they contrast the science of "artificial general intelligence" to other sciences merely mundane.

And even after the answer of "How? Intelligence!" is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.

A fun exercise is to eliminate the explanation "intelligence" from any sentence in which it appears, and see if the sentence says anything different:

  • Before: The AI is going to take over the world by using its superhuman intelligence to invent nanotechnology.
  • After:  The AI is going to take over the world by inventing nanotechnology.
  • Before: A friendly AI is going to use its superhuman intelligence to extrapolate the coherent volition of humanity.
  • After:  A friendly AI is going to extrapolate the coherent volition of humanity.
  • Even better: A friendly AI is a powerful algorithm. We can successfully extrapolate some aspects of the volition of individual humans using [FILL IN DETAILS] procedure, without any global societal variables, showing that we understand how to extrapolate the volition of humanity in theory and that it converges rather than diverges, that our wishes cohere rather than interfere.

Another fun exercise is to replace "intelligence" with "magic", the explanation that people had to use before the idea of an intelligence explosion was invented:

  • Before:  The AI is going to use its superior intelligence to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
  • After:  The AI is going to use magic to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
  • Before:  Superhuman intelligence is able to use the internet to gain physical manipulators and expand its computational capabilities.
  • After:  Superhuman magic is able to use the internet to gain physical manipulators and expand its computational capabilities.

Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?

"Intelligence" has become very popular, just as saying "magic" used to be very popular. "Intelligence" has the same deep appeal to human psychology, for the same reason. "Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using intelligence , and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.

Risks from AI and Charitable Giving

2 XiXiDu 13 March 2012 01:54PM

If you’re interested in being on the right side of disputes, you will refute your opponents' arguments. But if you're interested in producing truth, you will fix your opponents' arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse.

-- Black Belt Bayesian

This is an informal post meant as a reply to a post by user:utilitymonster, 'What is the best compact formalization of the argument for AI risk from fast takeoff?'

I hope to find the mental strength to put more effort into it in the future to improve it. But since nobody else seems to be willing to take a critical look at the overall topic, I feel that doing what I can is better than doing nothing.

Please review the categories 'Further Reading' and 'Notes and References'.


Abstract

In this post I just want to take a look at a few premises (P#) that need to be true simultaneously to make the SIAI a worthwhile charity from the point of view of someone trying to do as much good as possible by contributing money. I am going to show that the case of risks from AI is strongly conjunctive, that without a concrete and grounded understanding of AGI an abstract analysis of the issues is going to be very shaky, and that therefore SIAI is likely to be a bad choice as a charity. In other words, that which speaks in favor of SIAI consists mainly of highly specific, conjunctive, non-evidence-backed speculations on possible bad outcomes.

Requirements for an Intelligence Explosion

P1 Fast, and therefore dangerous, recursive self-improvement is logically possible.

It took almost four hundred years to prove Fermat’s Last Theorem. The final proof is over a hundred pages long. Over a hundred pages! And we are not talking about something like an artificial general intelligence that can magically make itself smart enough to prove such theorems and many more that no human being would be capable of proving. Fermat’s Last Theorem simply states “no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two.”

Even artificial intelligence researchers admit that "there could be non-linear complexity constraints meaning that even theoretically optimal algorithms experience strongly diminishing intelligence returns for additional compute power." [1] We just don't know.
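
To make concrete how much hinges on this unknown, here is a toy numerical sketch (my own illustration, not taken from the cited source): the same self-improvement loop produces wildly different trajectories depending on whether returns on additional optimization power are diminishing, proportional, or compounding.

```python
def run(gain, steps=30, capability=1.0):
    """Iterate a toy self-improvement loop; `gain` maps current capability
    to the improvement achieved in one step."""
    for _ in range(steps):
        capability += gain(capability)
    return capability

diminishing = run(lambda c: 0.1 * c ** 0.5)    # strongly diminishing returns
proportional = run(lambda c: 0.1 * c)          # returns proportional to capability
compounding = run(lambda c: 0.1 * c ** 1.5)    # increasing returns

print(f"diminishing:  {diminishing:.1f}")      # stays within an order of magnitude
print(f"proportional: {proportional:.1f}")     # steady exponential growth
print(f"compounding:  {compounding:.3g}")      # explodes by many orders of magnitude
```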

Other possible problems include the impossibility of a stable utility function and a reflective decision theory, the intractability of real-world expected utility maximization, and the fact that expected utility maximizers stumble over Pascal's mugging, among other things [2].

For an AI to be capable of recursive self-improvement it also has to guarantee that its goals will be preserved when it improves itself. It is still questionable whether it is possible to conclusively prove that improvements to an agent's intelligence or decision procedures maximize expected utility. If this isn't possible, it won't be rational, or even possible, to undergo explosive self-improvement.

P1.b The fast computation of a simple algorithm is sufficient to outsmart and overpower humanity.

Imagine a group of 100 world-renowned scientists and military strategists.

  • The group is analogous to the initial resources of an AI.
  • The knowledge that the group has is analogous to what an AI could come up with by simply "thinking" about it given its current resources.

Could such a group easily wipe out the Roman empire if it were beamed back in time?

  • The Roman empire is analogous to our society today.

Even if you gave all of them a machine gun, the Romans would quickly adapt and the people from the future would run out of ammunition.

  • Machine guns are analogous to the supercomputer it runs on.

Consider that it takes a whole technological civilization to produce a modern smartphone.

You can't just say "with more processing power you can do more different things", that would be analogous to saying that "100 people" from today could just build more "machine guns". But they can't! They can't use all their knowledge and magic from the future to defeat the Roman empire.

A lot of assumptions have to turn out to be correct to make humans discover simple algorithms overnight that can then be improved to self-improve explosively.

You can also compare this to the idea of a Babylonian mathematician discovering modern science and physics given that he would be uploaded into a supercomputer (a possibility that is in and of itself already highly speculative). It assumes that he could brute-force conceptual revolutions.

Even if he was given a detailed explanation of how his mind works and the resources to understand it, self-improving to achieve superhuman intelligence assumes that throwing resources at the problem of intelligence will magically allow him to pull improved algorithms from solution space as if they were signposted.

But unknown unknowns are not signposted. It's rather like finding a needle in a haystack. Evolution is great at doing that, but the idea that one could speed up evolution considerably is yet another assumption about technological feasibility and real-world resources.

That conceptual revolutions are just a matter of computational resources is pure speculation.

If one were to speed up the whole Babylonian world and accelerate cultural evolution, one would obviously arrive at some insights more quickly. But how much more quickly? How many insights depend on experiments, which yield the necessary empirical evidence and which can't be sped up considerably? And what is the return? Is the payoff proportional to the resources that are necessary?

If you were going to speed up a chimp brain a million times, would it quickly reach human-level intelligence? If not, why then would it be different for a human-level intelligence trying to reach transhuman intelligence? It seems like a nice idea when formulated in English, but would it work?

Being able to state that an AI could use some magic to take over the earth does not make it a serious possibility.

Magic has to be discovered, adapted and manufactured first. It doesn't just emerge out of nowhere from the computation of certain algorithms. It emerges from a society of agents with various different goals and heuristics like "Treating Rare Diseases in Cute Kittens". It is an evolutionary process that relies on massive amounts of real-world feedback and empirical experimentation. Assuming that all that can happen because some simple algorithm is being computed is like believing it will emerge 'out of nowhere'; it is magical thinking.

Unknown unknowns are not signposted. [3]

If people like Benoît B. Mandelbrot had never decided to research fractals, then many modern movies wouldn't be possible, as they rely on fractal landscape algorithms. Yet at the time Benoît B. Mandelbrot conducted his research, it was not foreseeable that his work would have any real-world applications.

Important discoveries are made because many routes with low or no expected utility are explored at the same time [4]. Doing so efficiently takes random mutation, a whole society of minds, a lot of feedback and empirical experimentation.

"Treating rare diseases in cute kittens" might or might not provide genuine insights and open up new avenues for further research. As long as you don't try it you won't know.

The idea that a rigid consequentialist with simple values can think up insights and conceptual revolutions simply because it is instrumentally useful to do so is implausible.

Complex values are the cornerstone of diversity, which in turn enables creativity and drives the exploration of various conflicting routes. A singleton with a stable utility-function lacks the feedback provided by a society of minds and its cultural evolution.

You need to have various different agents with different utility-functions around to get the necessary diversity that can give rise to enough selection pressure. A "singleton" won't be able to predict the actions of new and improved versions of itself by just running sandboxed simulations. Not just because of logical uncertainty but also because it is computationally intractable to predict the real-world payoff of changes to its decision procedures.

You need complex values to give rise to the necessary drives to function in a complex world. You can't just tell an AI to protect itself. What would that even mean? What changes are illegitimate? What constitutes "self"? Those are all unsolved problems that are simply assumed to be solvable when talking about risks from AI.

An AI with simple values will simply lack the creativity, due to a lack of drives, to pursue the huge spectrum of research that a society of humans pursues. It may be able to solve some well-defined, narrow problems, but it will be unable to make use of the broad range of synergetic effects of cultural evolution. Cultural evolution is a result of the interaction of a wide range of utility-functions.

Yet even if we assume that there is one complete theory of general intelligence which, once discovered, only requires that more resources be thrown at it: such an AI might be able to incorporate all human knowledge, adapt it and find new patterns. But would it really be vastly superior to human society and its expert systems?

Can intelligence itself be improved apart from solving well-defined problems and making more accurate predictions on well-defined classes of problems? The discovery of unknown unknowns does not seem to be subject to any heuristic other than natural selection. And without well-defined goals, terms like "optimization" have no meaning.

P2 Fast, and therefore dangerous, recursive self-improvement is physically possible.

Even if it could be proven that explosive recursive self-improvement is logically possible, e.g. that there are no complexity constraints, the question remains if it is physically possible.

Our best theories about intelligence are highly abstract and their relation to real world human-level general intelligence is often wildly speculative [5][6].

P3 Fast, and therefore dangerous, recursive self-improvement is economically feasible.

To exemplify the problem, take the science-fictional idea of using antimatter as the explosive for weapons. It is physically possible to produce antimatter and use it for large-scale destruction. An equivalent of the Hiroshima atomic bomb would take only half a gram of antimatter. But it would take 2 billion years to produce that amount of antimatter [7].
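As a rough sanity check on the half-gram figure (a back-of-envelope estimate using E = mc², not taken from the CERN source): half a gram of antimatter annihilates with half a gram of ordinary matter, converting about one gram of mass into energy,

$$E = mc^2 \approx 10^{-3}\,\mathrm{kg} \times \left(3 \times 10^{8}\,\mathrm{m/s}\right)^2 \approx 9 \times 10^{13}\,\mathrm{J} \approx 21\ \mathrm{kilotons\ of\ TNT},$$

which is indeed in the range of the Hiroshima bomb (roughly 15 kilotons).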

We simply don’t know if intelligence is instrumental or quickly hits diminishing returns [8].

P3.b AGI is able to create (or acquire) resources, empowering technologies, or civilizational support [9].

We are already at a point where we have to build billion dollar chip manufacturing facilities to run our mobile phones. We need to build huge particle accelerators to obtain new insights into the nature of reality.

An AI would either have to rely on the help of a whole technological civilization or be in control of advanced nanotech assemblers.

And if an AI was to acquire the necessary resources on its own, its plan for world-domination would have to go unnoticed. This would require the workings of the AI to be opaque to its creators yet comprehensible to itself.

But an AI capable of efficient recursive self improvement must be able to

  1. comprehend its own workings
  2. predict how improvements, respectively improved versions of itself, are going to act to ensure that its values are preserved

So if the AI can do that, why wouldn't humans be able to use the same algorithms to predict what the initial AI is going to do? And if the AI can't do that, how is it going to maximize expected utility if it is unable to predict what it is going to do?

Any AI capable of efficient self-modification must be able to grasp its own workings and make predictions about improvements to various algorithms and its overall decision procedure. If an AI can do that, why would the humans who build it be unable to notice any malicious intentions? Why wouldn't the humans who created it be able to use the same algorithms that the AI uses to predict what it will do? And if humans are unable to predict what the AI will do, how is the AI able to predict what improved versions of itself will do?

And even if an AI were somehow able to acquire large amounts of money, it would not be easy to use that money. You can't "just" set up huge companies under fake identities, or through a straw man, to create revolutionary technologies easily. Running companies with real people takes a lot of real-world knowledge, interactions and feedback. But most importantly, it takes a lot of time. An AI could not simply create a new Intel or Apple over a few years without its creators noticing anything.

The goals of an AI will be under scrutiny at any time. It seems very implausible that scientists, a company or the military are going to create an AI and then just let it run without bothering about its plans. An artificial agent is not a black box, like humans are, where one is only able to guess its real intentions.

A plan for world domination seems like something that can't be concealed from its creators. Lying is no option if your algorithms are open to inspection.

P4 Dangerous recursive self-improvement is the default outcome of the creation of artificial general intelligence.

Complex goals need complex optimization parameters (the design specifications of the subject of the optimization process against which it will measure its success of self-improvement).

Even the creation of paperclips is a much more complex goal than telling an AI to compute as many decimal digits of Pi as possible.

For an AGI that was designed to design paperclips to pose an existential risk, its creators would have to be capable enough to enable it to take over the universe on its own, yet forget, or fail, to define time, space and energy bounds as part of its optimization parameters. Therefore, given the large number of restrictions that are inevitably part of any advanced general intelligence (AGI), the nonhazardous subset of all possible outcomes might be much larger than the subset in which the AGI works perfectly yet fails to stop before it could wreak havoc.

And even given a rational utility maximizer, it is possible to maximize paperclips in a lot of different ways. How it does so depends fundamentally on its utility-function and on how precisely that function was defined.

If there are no constraints in the form of design and goal parameters then it can maximize paperclips in all sorts of ways that don't demand recursive self-improvement.

"Utility" does only become well-defined if we precisely define what it means to maximize it. Just maximizing paperclips doesn't define how quickly and how economically it is supposed to happen.

The problem is that "utility" has to be defined. To maximize expected utility does not imply certain actions, efficiency and economic behavior, or the drive to protect yourself. You can also rationally maximize paperclips without protecting yourself if it is not part of your goal parameters.

You can also assign utility to maximizing paperclips only for as long as nothing turns you off, without caring about whether you are turned off. If an AI is not explicitly programmed to care about that, then it won't.

Without well-defined goals in the form of a precise utility-function, it might be impossible to maximize expected "utility". Concepts like "efficient", "economic" or "self-protection" all have a meaning that is inseparable from an agent's terminal goals. If you just tell it to maximize paperclips, then this can be realized in an infinite number of ways that would all be rational given imprecise design and goal parameters. Undergoing explosive recursive self-improvement, taking over the universe and filling it with paperclips is just one outcome. Why would an arbitrary mind pulled from mind-design space care to do that? Why not just wait for paperclips to arise due to random fluctuations out of a state of chaos? That wouldn't be irrational. To have an AI take over the universe as fast as possible you would have to explicitly design it to do so.
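To make the point about goal specification concrete, here is a minimal sketch (purely illustrative; the function names, target and budget numbers are made up, not an actual AGI design) of how two differently specified "paperclip utilities" reward the same plans very differently:

```python
# Illustrative sketch: two ways to score a paperclip maximizer's plan.
# All names and numbers are hypothetical; this is not an actual AGI design.

def unbounded_utility(paperclips_made: int) -> float:
    """More paperclips is always better: under this specification,
    converting all available matter looks like the 'rational' plan."""
    return float(paperclips_made)

def bounded_utility(paperclips_made: int, resources_used: float,
                    target: int = 1_000_000, resource_budget: float = 1.0) -> float:
    """The same goal with explicit design parameters: hit a target number
    of paperclips and penalize any resource use beyond a fixed budget."""
    shortfall = max(0, target - paperclips_made)
    overuse = max(0.0, resources_used - resource_budget)
    return -shortfall - 1e9 * overuse  # blowing the budget is never worth it

# Under the bounded specification, "take over the universe" scores far worse
# than "make a million paperclips in the factory", even for a perfect maximizer.
print(bounded_utility(1_000_000, 0.5))   # 0.0  (goal met within budget)
print(bounded_utility(10**30, 10**6))    # hugely negative (budget blown)
```

The design choice is the whole point: which of the two behaviours counts as "maximizing utility" is determined entirely by which specification the designers wrote down.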

But for the sake of a thought experiment assume that the default case was recursive self-improvement. Now imagine that a company like Apple wanted to build an AI that could answer every question (an Oracle).

If Apple was going to build an Oracle it would anticipate that other people would also want to ask it questions. Therefore it can't just waste all resources on looking for an inconsistency arising from the Peano axioms when asked to solve 1+1. It would not devote additional resources on answering those questions that are already known to be correct with a high probability. It wouldn't be economically useful to take over the universe to answer simple questions.

Nor would it be rational to look for an inconsistency arising from the Peano axioms while solving 1+1. To answer questions an Oracle needs a good amount of general intelligence. And concluding that being asked to solve 1+1 implies looking for an inconsistency arising from the Peano axioms does not seem reasonable. It also does not seem reasonable to suspect that humans want the answers to their questions to approach infinite certainty. Why would someone build such an Oracle in the first place?

A reasonable Oracle would quickly yield good solutions by trying to find, within a reasonable time, answers that are with high probability just 2–3% away from the optimal solution. I don't think anyone would build an answering machine that throws the whole universe at the first sub-problem it encounters.
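A crude sketch of what such a resource-bounded answering strategy could look like (the function names, the 3% tolerance, the time budget and the random-search strategy are all illustrative assumptions, not a proposal for how an actual Oracle would work):

```python
import random
import time

def bounded_answer(objective, candidates, tolerance=0.03, time_budget=1.0, upper_bound=1.0):
    """Toy anytime search: keep the best candidate answer found so far and stop
    as soon as it is estimated to be within `tolerance` of the assumed optimum
    `upper_bound`, or when the time budget runs out -- rather than spending
    arbitrary resources chasing certainty."""
    deadline = time.time() + time_budget
    best, best_score = None, float("-inf")
    while time.time() < deadline:
        candidate = random.choice(candidates)
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if upper_bound - best_score <= tolerance * upper_bound:
            break  # good enough: a ~97% answer now beats a perfect answer never
    return best, best_score

# Example: find x close to 0.7 without demanding an exact answer.
answer, score = bounded_answer(lambda x: 1 - abs(x - 0.7), [i / 100 for i in range(101)])
```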

P5 The human development of artificial general intelligence will take place quickly.

What evidence do we have that there is some principle that, once discovered, allows us to grow superhuman intelligence overnight?

If the development of AGI takes place slowly, a gradual and controllable development, we might be able to learn from small-scale mistakes, or have enough time to develop friendly AI, while having to face other existential risks.

This might for example be the case if intelligence cannot be captured by a discrete algorithm, or is modular, and therefore never allows us to reach a point where we can suddenly build the smartest thing ever, which then just extends itself indefinitely.

Therefore the probability of an AI undergoing explosive recursive self-improvement, P(FOOM), is the probability of the conjunction of its premises:

P(FOOM) = P(P1∧P2∧P3∧P4∧P5)

Of course, there are many more premises that need to be true in order to enable an AI to go FOOM, e.g. that each level of intelligence can effectively handle its own complexity, or that most AGI designs can somehow self-modify their way up to massive superhuman intelligence. But I believe that the above points are enough to show that the case for a hard takeoff is not disjunctive, but rather strongly conjunctive.
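To illustrate how strongly conjunctive claims behave, here is a minimal calculation (the premise probabilities are made up purely for illustration, and multiplying them assumes the premises are independent, which is itself a simplification):

```python
# Hypothetical premise probabilities, chosen only to show how a conjunction
# shrinks; they are not estimates endorsed by anyone.
premises = {
    "P1 logically possible":    0.9,
    "P2 physically possible":   0.8,
    "P3 economically feasible": 0.7,
    "P4 default outcome":       0.6,
    "P5 development is fast":   0.5,
}

p_foom = 1.0
for name, p in premises.items():
    p_foom *= p

print(f"P(FOOM) = {p_foom:.3f}")  # ~0.151: each additional premise multiplies the doubt
```

Even with individually generous probabilities, the product is already well below any single premise; adding further required premises only pushes it lower.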

Requirements for SIAI to constitute an optimal charity

In this section I will assume the truth of all premises in the previous section.

P6 SIAI can solve friendly AI.

Say you believe that unfriendly AI will wipe us out with a probability of 60%, and that there is another existential risk that will wipe us out with a probability of 10% even if unfriendly AI turns out to be no risk, or in all possible worlds where it comes later. Both risks have the same utility x (if we don't assume that an unfriendly AI could also wipe out aliens etc.). Thus .6x > .1x. But if the ratio of the probability of solving friendly AI, A, to the probability of solving the second risk, B, satisfies A ≤ (1/6)B, then the expected utility of working on friendly AI is at best equal to that of mitigating the other existential risk, because .6Ax ≤ .1Bx.
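Spelled out as a worked inequality (just restating the figures above, nothing new):

$$0.6\,A\,x \;\le\; 0.6 \cdot \tfrac{1}{6}\,B\,x \;=\; 0.1\,B\,x \quad \text{whenever } A \le \tfrac{1}{6}B,$$

so under that assumption the expected value of a donation aimed at friendly AI is at most that of a donation aimed at the other risk, despite the larger raw probability of the AI risk itself.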

Consider that one order of magnitude more utility could easily be outweighed or trumped by an underestimation of the complexity of friendly AI.

So how hard is it to solve friendly AI?

Take, for example, Pascal's mugging: if you can't solve it, then you need to implement a hack that is largely based on human intuition. Therefore, in order to estimate the possibility of solving friendly AI, one needs to account for the difficulty of solving all of its sub-problems.

Consider that we don't even know "how one would start to research the problem of getting a hypothetical AGI to recognize humans as distinguished beings." [10]

P7 SIAI does not increase risks from AI.

By trying to solve friendly AI, SIAI has to think about a lot of issues related to AI in general and might have to solve problems that will make it easier to create artificial general intelligence.

It is far from clear that SIAI is able to protect its findings against intrusion, betrayal, or industrial espionage.

P8 SIAI does not increase negative utility.

There are several possibilities by which SIAI could actually cause a direct increase in negative utility.

1) Friendly AI is incredibly hard and complex. Complex systems can fail in complex ways. Agents that are an effect of evolution have complex values. To satisfy complex values you need to meet complex circumstances. Therefore any attempt at friendly AI, which is incredibly complex, is likely to fail in unforeseeable ways. A half-baked, not quite friendly, AI might create a living hell for the rest of time, increasing negative utility dramatically [11].

2) Humans are not provably friendly. Given the power to shape the universe, SIAI might fail to act altruistically and deliberately implement an AI with selfish motives or horrible strategies [12].

P9 It makes sense to support SIAI at this time [13].

Therefore the probability of SIAI being a worthwhile charity, P(CHARITY), is the probability of the conjunction of its premises:

P(CHARITY) = P(P6∧P7∧P8∧P9)

As before, there are many more premises that need to be true in order for SIAI to be the best choice for someone who wants to maximize doing good by contributing money to a charity.

Further Reading

The following posts and resources elaborate on many of the above points and hint at a lot of additional problems.

Notes and References

[1] Q&A with Shane Legg on risks from AI

[2] http://lukeprog.com/SaveTheWorld.html

[3] "In many ways, this is a book about hindsight. Pythagoras could not have imagined the uses to which his equation would be put (if, indeed, he ever came up with the equation himself in the first place). The same applies to almost all of the equations in this book. They were studied/discovered/developed by mathematicians and mathematical physicists who were investigating subjects that fascinated them deeply, not because they imagined that two hundred years later the work would lead to electric light bulbs or GPS or the internet, but rather because they were genuinely curious."

17 Equations that changed the world

[4] Here is my list of "really stupid, frivolous academic pursuits" that have led to major scientific breakthroughs.

  • Studying monkey social behaviors and eating habits led to insights into HIV (Radiolab: Patient Zero)
  • Research into how algae move toward light paved the way for optogenetics: using light to control brain cells (Nature 2010 Method of the Year).
  • Black hole research gave us WiFi (ICRAR award)
  • Optometry informs architecture and saved lives on 9/11 (APA Monitor)
  • Certain groups HATE SETI, but SETI's development of cloud-computing service SETI@HOME paved the way for citizen science and recent breakthroughs in protein folding (Popular Science)
  • Astronomers provide insights into medical imaging (TEDxBoston: Michelle Borkin)
  • Basic physics experiments and the Fibonacci sequence help us understand plant growth and neuron development

http://blog.ketyov.com/2012/02/basic-science-is-about-creating.html

[5] "AIXI is often quoted as a proof of concept that it is possible for a simple algorithm to improve itself to such an extent that it could in principle reach superhuman intelligence. AIXI proves that there is a general theory of intelligence. But there is a minor problem, AIXI is as far from real world human-level general intelligence as an abstract notion of a Turing machine with an infinite tape is from a supercomputer with the computational capacity of the human brain. An abstract notion of intelligence doesn’t get you anywhere in terms of real-world general intelligence. Just as you won’t be able to upload yourself to a non-biological substrate because you showed that in some abstract sense you can simulate every physical process."

Alexander Kruel, Why an Intelligence Explosion might be a Low-Priority Global Risk

[6] "…please bear in mind that the relation of Solomonoff induction and “Universal AI” to real-world general intelligence of any kind is also rather wildly speculative… This stuff is beautiful math, but does it really have anything to do with real-world intelligence? These theories have little to say about human intelligence, and they’re not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on “scaling them down” to make them realistic; so far this only works for very simple toy problems, and it’s hard to see how to extend the approach broadly to yield anything near human-level AGI). And it’s not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts."

Ben Goertzel, 'Are Prediction and Reward Relevant to Superintelligences?'

[7] http://public.web.cern.ch/public/en/spotlight/SpotlightAandD-en.html

[8] "If any increase in intelligence is vastly outweighed by its computational cost and the expenditure of time needed to discover it then it might not be instrumental for a perfectly rational agent (such as an artificial general intelligence), as imagined by game theorists, to increase its intelligence as opposed to using its existing intelligence to pursue its terminal goals directly or to invest its given resources to acquire other means of self-improvement, e.g. more efficient sensors."

Alexander Kruel, Why an Intelligence Explosion might be a Low-Priority Global Risk

[9] Section 'Necessary resources for an intelligence explosion', Why an Intelligence Explosion might be a Low-Priority Global Risk, Alexander Kruel

[10] http://lesswrong.com/lw/3aa/friendly_ai_research_and_taskification/

[11] http://lesswrong.com/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/5ylx

[12] http://lesswrong.com/lw/8c3/qa_with_new_executive_director_of_singularity/5y77

[13] "I think that if you're aiming to develop knowledge that won't be useful until very very far in the future, you're probably wasting your time, if for no other reason than this: by the time your knowledge is relevant, someone will probably have developed a tool (such as a narrow AI) so much more efficient in generating this knowledge that it renders your work moot."

Holden Karnofsky in a conversation with Jaan Tallinn
