
Superintelligence Reading Group 3: AI and Uploads

6 KatjaGrace 30 September 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the second section in the reading guide, AI & Whole Brain Emulation. This is about two possible routes to the development of superintelligence: the route of developing intelligent algorithms by hand, and the route of replicating a human brain in great detail.

This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Artificial intelligence” and “Whole brain emulation” from Chapter 2 (p22-36)


Summary

Intro

  1. Superintelligence is defined as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'
  2. There are several plausible routes to the arrival of a superintelligence: artificial intelligence, whole brain emulation, biological cognition, brain-computer interfaces, and networks and organizations. 
  3. The existence of multiple possible paths to superintelligence makes it more likely that we will get there somehow. 
AI
  1. A human-level artificial intelligence would probably have learning, uncertainty, and concept formation as central features.
  2. Evolution produced human-level intelligence. This means it is possible, but it is unclear how much it says about the effort required.
  3. Humans could perhaps develop human-level artificial intelligence by just replicating a similar evolutionary process virtually. A quick calculation suggests this would be too expensive to be feasible for a century, though the process might be made more efficient (a back-of-envelope sketch follows this list).
  4. Human-level AI might be developed by copying the human brain to various degrees. If the copying is very close, the resulting agent would be a 'whole brain emulation', which we'll discuss shortly. If the copying is only of a few key insights about brains, the resulting AI might be very unlike humans.
  5. AI might iteratively improve itself from a meagre beginning. We'll examine this idea later. Some definitions for discussing this:
    1. 'Seed AI': a modest AI which can bootstrap into an impressive AI by improving its own architecture.
    2. 'Recursive self-improvement': the envisaged process of AI (perhaps a seed AI) iteratively improving itself.
    3. 'Intelligence explosion': a hypothesized event in which an AI rapidly improves from 'relatively modest' to superhuman level (usually imagined to be as a result of recursive self-improvement).
  6. The possibility of an intelligence explosion suggests we might have modest AI, then suddenly and surprisingly have super-human AI.
  7. An AI mind might generally be very different from a human mind. 
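
The "quick calculation" in point 3 can be made concrete. Below is a hedged back-of-envelope sketch in Python; the figures are illustrative assumptions in the spirit of the book's estimate (simulate every neuron over the history of nervous-system evolution), not numbers quoted from the text:

    # Rough cost of re-running the evolution of nervous systems.
    # All figures below are illustrative assumptions, not numbers from the book.
    SECONDS_PER_YEAR = 3.2e7

    years_of_evolution = 1e9     # assumed span of nervous-system evolution
    neurons_alive = 1e25         # assumed neurons alive at any one time (insects dominate)
    ops_per_neuron_second = 1e3  # assumed cost of simulating one neuron for one second

    total_ops = years_of_evolution * SECONDS_PER_YEAR * neurons_alive * ops_per_neuron_second
    print(f"total operations: {total_ops:.0e}")  # ~3e+44

    # Compare with a hypothetical exaflop (1e18 ops/s) machine running for a year:
    ops_per_machine_year = 1e18 * SECONDS_PER_YEAR
    print(f"machine-years needed: {total_ops / ops_per_machine_year:.0e}")  # ~1e+19

On these assumptions brute-force re-evolution is hopeless on any foreseeable hardware; the live question, which the chapter raises, is how much cheaper a cleverly targeted search could be than blind replay.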

Whole brain emulation

  1. Whole brain emulation (WBE or 'uploading') involves scanning a human brain in a lot of detail, then making a computer model of the relevant structures in the brain.
  2. Three steps are needed for uploading: sufficiently detailed scanning, ability to process the scans into a model of the brain, and enough hardware to run the model. These correspond to three required technologies: scanning, translation (or interpreting images into models), and simulation (or hardware). These technologies appear attainable through incremental progress, by very roughly mid-century.
  3. This process might produce something much like the original person, in terms of mental characteristics. However, the copies could also be of lower fidelity: for instance, they might be humanlike rather than copies of specific humans, or they may be humanlike only in being able to do some tasks humans do, while being alien in other regards.

Notes

  1. What routes to human-level AI do people think are most likely?
    Bostrom and Müller's survey asked participants to compare various methods for producing synthetic and biologically inspired AI. They asked, 'In your opinion, what are the research approaches that might contribute the most to the development of such HLMI?' Respondents selected from a list and could choose more than one option. They report that the responses were very similar for the different groups surveyed, except that whole brain emulation got 0% in the TOP100 group (100 most cited authors in AI) but 46% in the AGI group (participants at Artificial General Intelligence conferences). Note that they are only asking about synthetic AI and brain emulations, not the other paths to superintelligence we will discuss next week.
  2. How different might AI minds be?
    Omohundro suggests advanced AIs will tend to have important instrumental goals in common, such as the desire to accumulate resources and the desire to not be killed. 
  3. Anthropic reasoning 
    ‘We must avoid the error of inferring, from the fact that intelligent life evolved on Earth, that the evolutionary processes involved had a reasonably high prior probability of producing intelligence’ (p27) 

    Whether such inferences are valid is a topic of contention. For a book-length overview of the question, see Bostrom’s Anthropic Bias. I’ve written shorter (Ch 2) and even shorter summaries, which link to other relevant material. The Doomsday Argument and Sleeping Beauty Problem are closely related.
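
    A toy Bayesian model may help make the quoted point concrete. The numbers below are arbitrary illustrations: a naive update treats "intelligence evolved on Earth" as ordinary evidence, while the anthropic worry is that observers see this outcome no matter which hypothesis is true.

        # Toy observation selection effect (all numbers arbitrary).
        prior_easy = 0.5       # prior that evolution finds intelligence easily
        p_evolve_easy = 0.9    # chance intelligence evolves, if it is easy
        p_evolve_hard = 1e-6   # chance intelligence evolves, if it is hard

        # Naive Bayesian update on "intelligence evolved on Earth":
        joint_easy = prior_easy * p_evolve_easy
        joint_hard = (1 - prior_easy) * p_evolve_hard
        print(joint_easy / (joint_easy + joint_hard))  # ~0.999999: looks decisive

        # But only worlds where intelligence evolved contain anyone to run this
        # update, so every observer sees the same datum under both hypotheses.
        # On that (contested) anthropic account, the likelihood conditional on
        # observership is 1 either way, and the posterior stays at the prior.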

  4. More detail on the brain emulation scheme
    Whole Brain Emulation: A Roadmap is an extensive source on this, written in 2008. If that's a bit too much detail, Anders Sandberg (an author of the Roadmap) summarises in an entertaining (and much shorter) talk. More recently, Anders tried to predict when whole brain emulation would be feasible with a statistical model. Randal Koene and Ken Hayworth both recently spoke to Luke Muehlhauser about the Roadmap and what research projects would help with brain emulation now.
  5. Levels of detail
    As you may predict, the feasibility of brain emulation is not universally agreed upon. One contentious point is the degree of detail needed to emulate a human brain. For instance, you might just need the connections between neurons and some basic neuron models, or you might need to model the states of different membranes, or the concentrations of neurotransmitters. The Whole Brain Emulation Roadmap lists some possible levels of detail in figure 2 (the yellow ones were considered most plausible). Physicist Richard Jones argues that simulation of the molecular level would be needed, and that the project is infeasible.
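
    To make "some basic neuron models" concrete: the cheapest commonly used level of detail is something like a leaky integrate-and-fire neuron, which tracks a single membrane-potential variable per cell and ignores channel and neurotransmitter chemistry entirely. Here is a minimal sketch with standard textbook parameters; nothing in it is specific to the Roadmap:

        # Leaky integrate-and-fire: dV/dt = (-(V - V_rest) + R*I) / tau,
        # with a spike and a reset whenever V crosses the threshold.
        def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.070,
                         v_thresh=-0.050, v_reset=-0.070, resistance=1e8):
            v, spike_times = v_rest, []
            for step, i_in in enumerate(input_current):
                v += dt * (-(v - v_rest) + resistance * i_in) / tau
                if v >= v_thresh:
                    spike_times.append(step * dt)  # record the spike time
                    v = v_reset
            return spike_times

        # One second of constant 0.3 nA input gives a regular spike train:
        print(simulate_lif([0.3e-9] * 10000)[:3])  # spikes roughly every 22 ms

    Each step up in fidelity (Hodgkin-Huxley channel dynamics, multi-compartment morphology, molecular detail) multiplies the state and computation per neuron, which is what the disagreement between the Roadmap authors and Jones is about.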

  6. Other problems with whole brain emulation
    Sandberg considers many potential impediments here.

  7. Order matters for brain emulation technologies (scanning, hardware, and modeling)
    Bostrom points out that this order matters for how much warning we receive that brain emulations are about to arrive (p35). Order might also matter a lot to the social implications of brain emulations. Robin Hanson discusses this briefly here and in this talk (starting at 30:50); this paper also discusses the issue.

  8. What would happen after brain emulations were developed?
    We will look more at this in Chapter 11 (weeks 17-19) as well as perhaps earlier, including what a brain emulation society might look like, how brain emulations might lead to superintelligence, and whether any of this is good.

  9. Scanning (p30-36)
    ‘With a scanning tunneling microscope it is possible to ‘see’ individual atoms, which is a far higher resolution than needed...microscopy technology would need not just sufficient resolution but also sufficient throughput.’

    Here are some atoms, neurons, and neuronal activity in a living larval zebrafish, and videos of various neural events.


    Array tomography of mouse somatosensory cortex from Smithlab.

    A molecule made from eight cesium and eight iodine atoms (from here).
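
    The resolution-versus-throughput point in the quote above can be put in numbers. A hedged back-of-envelope sketch, using an electron-microscopy-scale voxel; all figures are round illustrative assumptions, not numbers from the book:

        # Why throughput, not resolution, is the bottleneck (assumed round numbers).
        brain_volume = 1.4e-3        # m^3, roughly 1.4 litres
        voxel = 5e-9 * 5e-9 * 50e-9  # m^3, an EM-scale voxel (illustrative)
        n_voxels = brain_volume / voxel
        print(f"voxels to image: {n_voxels:.0e}")  # ~1e+21

        voxels_per_second = 1e7      # assumed rate for a single microscope
        years = n_voxels / voxels_per_second / 3.2e7
        print(f"single-microscope years: {years:.0e}")  # ~4e+06

    Hence the emphasis on parallelism: at that rate one would need millions of instruments, or per-instrument speeds millions of times higher, to finish within years rather than geological time.
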
  10. Efforts to map connections between neurons
    Here is a five-minute video about recent efforts, with many nice pictures. If you enjoy coloring in, you can take part in a gamified project to help map the brain's neural connections! Or you can just look at the pictures they made.

  11. The C. elegans connectome (p34-35)
    As Bostrom mentions, we already know how all of C. elegans' neurons are connected. Here's a picture of it (via Sebastian Seung):
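
    At this scale (roughly three hundred neurons and a few thousand connections) a connectome is just a small directed graph. Here is a sketch of how one might load and query it, assuming a plain edge-list file; the file name and format are hypothetical:

        from collections import defaultdict

        # Hypothetical edge-list format: "pre_neuron post_neuron weight" per line.
        def load_connectome(path):
            graph = defaultdict(dict)
            with open(path) as f:
                for line in f:
                    pre, post, weight = line.split()
                    graph[pre][post] = float(weight)  # synapse count or strength
            return graph

        graph = load_connectome("c_elegans_edges.txt")  # hypothetical file name
        out_degree = {pre: len(posts) for pre, posts in graph.items()}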


In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some taken from Luke Muehlhauser's list:

  1. Produce a better - or merely somewhat independent - estimate of how much computing power it would take to rerun evolution artificially. (p25-6)
  2. How powerful is evolution for finding things like human-level intelligence? (You'll probably need a better metric than 'power'). What are its strengths and weaknesses compared to human researchers?
  3. Conduct a more thorough investigation into the approaches to AI that are likely to lead to human-level intelligence, for instance by interviewing AI researchers in more depth about their opinions on the question.
  4. Measure relevant progress in neuroscience, so that trends can be extrapolated to neuroscience-inspired AI. Finding good metrics seems to be hard here.
  5. e.g. How is microscopy progressing? It’s harder to get a relevant measure than you might think, because (as noted p31-33) high enough resolution is already feasible, yet throughput is low and there are other complications. 
  6. Randal Koene suggests a number of technical research projects that would forward whole brain emulation (fifth question).
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about other paths to the development of superintelligence: biological cognition, brain-computer interfaces, and organizations. To prepare, read “Biological Cognition” and the rest of Chapter 2. The discussion will go live at 6pm Pacific time next Monday, 6 October. Sign up to be notified here.

[Link] Forty Days

9 GLaDOS 29 September 2014 12:29PM

A post from Gregory Cochran's and Henry Harpending's excellent blog West Hunter.

One of the many interesting aspects of how the US dealt with the AIDS epidemic is what we didn’t do – in particular, quarantine.  Probably you need a decent test before quarantine is practical, but we had ELISA by 1985 and a better Western Blot test by 1987.

There was popular support for a quarantine.

But the public health experts generally opined that such a quarantine would not work.

Of course, they were wrong. Cuba instituted a rigorous quarantine. They mandated antiviral treatment for pregnant women and mandated C-sections for those that were HIV-positive. People positive for any venereal disease were tested for HIV as well. HIV-infected people had to provide the names of all sexual partners for the past six months.

Compulsory quarantining was relaxed in 1994, but all those testing positive have to go to a sanatorium for 8 weeks of thorough education on the disease.  People who leave after 8 weeks and engage in unsafe sex undergo permanent quarantine.

Cuba did pretty well:  the per-capita death toll was 35 times lower than in the US.

Cuba had some advantages:  the epidemic hit them at least five years later than it did the US (first observed Cuban case in 1986, first noticed cases in the US in 1981).  That meant they were readier when they encountered the virus.  You’d think that because of the epidemic’s late start in Cuba, there would have been a shorter interval without the effective protease inhibitors (which arrived in 1995 in the US) – but they don’t seem to have arrived in Cuba until 2001, so the interval was about the same.

If we had adopted the same strategy as Cuba, it would not have been as effective, largely because of that time lag.  However, it surely would have prevented at least half of the ~600,000 AIDS deaths in the US.  Probably well over half.

I still see people stating that of course quarantine would not have worked: fairly often from dimwitted people with a Masters in Public Health.

My favorite comment was from a libertarian friend who said that although quarantine  certainly would have worked, better to sacrifice a few hundred thousand than validate the idea that the Feds can sometimes tell you what to do with good effect.

The commenter Ron Pavellas adds:

I was working as the CEO of a large hospital in California during the 1980s (I have MPH as my degree, by the way). I was outraged when the Public Health officials decided to not treat the HI-Virus as an STD for the purposes of case-finding, as is routinely and effectively done with syphilis, gonorrhea, etc. In other words, they decided to NOT perform classic epidemiology, thus sullying the whole field of Public Health. It was not politically correct to potentially ‘out’ individuals engaging in the kind of behavior which spreads the disease. No one has recently been concerned with the potential ‘outing’ of those who contract other STDs, due in large part to the confidential methods used and maintained over many decades. (Remember the Wassermann Test that was required before you got married?) As is pointed out in this article, lives were needlessly lost and untold suffering needlessly ensued.

The Wasserman Test.

Open thread, Sept. 29 - Oct.5, 2014

4 polymathwannabe 29 September 2014 01:28PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Tweets Thread

2 2ZctE 29 September 2014 04:17AM

Rationality Twitter is fun. Twitter's format can promote good insight porn/humor density. It might be worth capturing and voting on some of the good tweets here, because they're easy to miss and can end up seemingly buried forever. I mean for this to have a somewhat wider scope than the quotes thread. If you liked a tweet a lot for any reason this is the place for it.

Decision theories as heuristics

12 owencb 28 September 2014 02:36PM

Main claims:

  1. A lot of discussion of decision theories is really analysing them as decision-making heuristics for boundedly rational agents.
  2. Understanding decision-making heuristics is really useful.
  3. The quality of dialogue would be improved if it was recognised when they were being discussed as heuristics.

Epistemic status: I’ve had a “something smells” reaction to a lot of discussion of decision theory. This is my attempt to crystallise out what I was unhappy with. It seems correct to me at present, but I haven’t spent too much time trying to find problems with it, and it seems quite possible that I’ve missed something important. Also possible is that this just recapitulates material in a post somewhere I’ve not read.

Existing discussion is often about heuristics

Newcomb’s problem traditionally contrasts the decisions made by Causal Decision Theory (CDT) and Evidential Decision Theory (EDT). The story goes that CDT reasons that there is no causal link between a decision made now and the contents of the boxes, and therefore two-boxes. Meanwhile EDT looks at the evidence of past participants and chooses to one-box in order to get a high probability of being rich.

I claim that both of these stories are applications of the rules as simple heuristics to the most salient features of the case. As such they are robust to variation in the fine specification of the case, so we can have a conversation about them. If we want to apply them with more sophistication then the answers do become sensitive to the exact specification of the scenario, and it’s not obvious that either has to give the same answer the simple version produces.

First consider CDT. It has a high belief that there is no causal link between choosing to one- or two-box and Omega's previous decision. But in practice, how high is this belief? If it doesn't understand exactly how Omega works, it might assign some probability to the possibility of a causal link, and this could be enough to tip the decision towards one-boxing.
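
To see how little residual causal credence it takes to flip the verdict, here is a toy expected-value calculation with the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box); the probabilities are illustrative assumptions:

    # CDT-as-heuristic with residual credence that choice causally fixes the boxes.
    p_causal = 0.01  # assumed credence that one-boxing causes the opaque box to be full
    q_full = 0.5     # credence the opaque box is already full, if contents are fixed

    ev_one_box = p_causal * 1_000_000 + (1 - p_causal) * q_full * 1_000_000
    ev_two_box = p_causal * 1_000 + (1 - p_causal) * (q_full * 1_000_000 + 1_000)
    print(ev_one_box, ev_two_box)  # 505000.0 vs 496000.0: one-boxing wins

    # Two-boxing only wins when p_causal is below 1000/1000000 = 0.001, so even
    # a sliver of causal uncertainty flips the simple CDT verdict.

On these numbers even a 1% credence in a causal link makes the causalist heuristic one-box; the textbook two-boxing verdict belongs to the simple version of the heuristic, not necessarily to an agent with realistic uncertainty.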

On the other hand EDT should properly be able to consider many sources of evidence besides the ones about past successes of Omega’s predictions. In particular it could assess all of the evidence that normally leads us to believe that there is no backwards-causation in our universe. According to how strong this evidence is, and how strong the evidence that Omega’s decision really is locked in, it could conceivably two-box.

Note that I’m not asking here for a more careful specification of the set-up. Rather I’m claiming that a more careful specification could matter -- and so to the extent that people are happy to discuss it without providing lots more details they’re discussing the virtues of CDT and EDT as heuristics for decision-making rather than as an ultimate normative matter (even if they’re not thinking of their discussion that way).

Similarly, So8res had a recent post which discussed Newcomblike problems faced by people; these are very clear examples when the decision theories are viewed as heuristics. If you allow the decision-maker to think carefully through all the unconscious signals sent by her decisions, it’s less clear that there’s anything Newcomblike.

Understanding decision-making heuristics is valuable

In claiming that a lot of the discussion is about heuristics, I’m not making an attack. We are all boundedly rational agents, and this will very likely be true of any artificial intelligence as well. So our decisions must perforce be made by heuristics. While it can be useful to study what an idealised method would look like (in order to work out how to approximate it), it’s certainly useful to study heuristics and determine what their relative strengths and weaknesses are.

In some cases we have good enough understanding of everything in the scenario that our heuristics can essentially reproduce the idealised method. When the scenario contains other agents which are as complicated as ourselves or more so, it seems like this has to fail.

We should acknowledge when we’re talking about heuristics

By separating discussion of the decision-theories-as-heuristics from decision-theories-as-idealised-decision-processes, we should improve the quality of dialogue in both parts. The discussion of the ideal would be less confused by examples of applications of the heuristics. The discussion of the heuristics could become more relevant by allowing people to talk about features which are only relevant for heuristics.

For example, it is relevant if one decision theory tends to need a more detailed description of the scenario to produce good answers. It’s relevant if one is less computationally tractable. And we can start to formulate and discuss hypotheses such as “CDT is the best decision-procedure when the scenario doesn’t involve other agents, or only other agents so simple that we can model them well. Updateless Decision Theory is the best decision-procedure when the scenario involves other agents too complex to model well”.

In addition, I suspect that it would help to reduce disagreements about the subject. Many disagreements in many domains are caused by people talking past each other. Discussion of heuristics without labelling it as such seems like it could generate lots of misunderstandings.

Request for feedback on a paper about (machine) ethics

5 Caspar42 28 September 2014 12:03PM

I have written a paper on ethics with special concentration on machine ethics and formality with the following abstract:

Most ethical systems are formulated in a very intuitive, imprecise manner. Therefore, they cannot be studied mathematically; in particular, they cannot be used to make machines behave ethically. In this paper we use this machine-ethics perspective to identify preference utilitarianism as the most promising approach to formal ethics. We then go on to propose a simple, mathematically precise formalization of preference utilitarianism in very general cellular automata. Even though our formalization is incomputable, we argue that it can function as a basis for discussing practical ethical questions using knowledge gained from different scientific areas.

Here are some further elements of the paper (things the paper uses or the paper is about):

  • (machine) ethics
  • (in)computability
  • artificial life in cellular automata
  • Bayesian statistics
  • Solomonoff's a priori probability
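
For readers who haven't met cellular automata: a minimal example of the kind of substrate the abstract refers to is an elementary one-dimensional automaton such as Rule 110 (which is known to be Turing-complete). This is generic background, a sketch unrelated to the paper's actual formalization:

    # Elementary cellular automaton (Rule 110): each cell's next state is a fixed
    # function of its left neighbour, itself, and its right neighbour.
    def step(cells, rule=110):
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 31 + [1] + [0] * 31  # a single live cell in the middle
    for _ in range(16):
        print("".join(".#"[c] for c in row))
        row = step(row)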

As I propose a formal ethical system, things get mathy at some point, but the first and by far most important formula is relatively simple; the rest can then be skipped, so this should be no problem for the average LWer.

I already discussed the paper with a few fellow students, as well as with Brian Tomasik and a (computer science) professor of mine; they recommended that I try to publish it, and I received some very helpful feedback. But because this would be my first attempt to publish something, I could still use more help, both with the content itself and with scientific writing in English (which, as you may have guessed, is not my first language), before I submit the paper; Brian recommended using LW's discussion board for this. I would also be thankful for recommendations on which journal is appropriate for the paper.

I would like to send a draft to those interested via PM. This way I can also make sure that I don't use up all potential reviewers on the current version.

DISCLAIMER: I am not a moral realist. Also, as mentioned in the abstract, the proposed ethical system is incomputable and can therefore be argued to have infinite Kolmogorov complexity. So it does not really conflict with LW consensus (including Complexity of value).

Meetup : Sydney Rationality Dojo - Urge Propagation

1 luminosity 28 September 2014 01:56AM

Discussion article for the meetup : Sydney Rationality Dojo - Urge Propagation

WHEN: 05 October 2014 03:00:00PM (+1000)

WHERE: Humanist House, 10 Shepherd St Chippendale

We'll be examining how to connect your desire for goals or outcomes to specific emotional urges to perform the actions to bring about that outcome.

After the session is over, there will also be an optional group dinner.


Meetup : October Rationality Dojo - Non-Violent Communication

1 MelbourneLW 28 September 2014 12:31AM

Discussion article for the meetup : October Rationality Dojo - Non-Violent Communication

WHEN: 05 October 2014 03:30:00PM (+0800)

WHERE: Ross House Association, 247-251 Flinders Lane, Melbourne

[ATTN: Please remember the new location for the dojos: the Jenny Florence Room, Level 3, Ross House at 247 Flinders Lane, Melbourne. 3:30pm start / arrival - formal dojo activities will commence at 4:00pm.]

The Less Wrong Sunday Rationality Dojos are crafted to be serious self-improvement sessions for those committed to the Art of Rationality and personal growth. Each month a community member will run a session involving a presentation of content, discussion, and exercises. Continuing the succession of immensely successful dojos, Chris will run a session on Non-Violent Communication.

As always, we will review the personal goals we committed to at the previous Dojo (I will have done X by the next Dojo). Our goals are now being recorded via Google Forms here - https://docs.google.com/forms/d/1MCHH4MpbW0SI_2JyMSDlKnnGP4A0qxojQEZoMZIdopk/viewform, and Melbourne Less Wrong organisers have access to the form results if you wish to review the goals you set last month.

This month, we are also seeking 2-3 lightning talks from members. Speakers will be limited to 5 minutes with room for questions. We will be asking for talks from attendees present, but if you already have a talk topic in mind, please contact Louise at lvalmoria@gmail.com. The Dojo is likely to run for 2-3 hours, after which some people will get dinner together.

If you have any trouble finding the venue or getting in, call Louise on 0419 192 367.

If you would like to present at a future Dojo or suggest a topic, please fill it in on the Rationality Dojo Roster: http://is.gd/dojoroster

To organise similar events, please send an email to melbournelw@gmail.com


The Future of Humanity Institute could make use of your money

42 danieldewey 26 September 2014 10:53PM

Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.

Academic research is generally funded through grants, but because the FHI is researching important but unusual problems, and because this research is multi-disciplinary, we've found it difficult to attract funding from the usual grant bodies. This has meant that we’ve had to prioritise a certain number of projects that are not perfect for existential risk reduction, but that allow us to attract funding from interested institutions.

With more assets, we could both liberate our long-term researchers to do more "pure Xrisk" research, and hire or commission new experts when needed to look into particular issues (such as synthetic biology, the future of politics, and the likelihood of recovery after a civilization collapse).

We are not in any immediate funding crunch, nor are we arguing that the FHI would be a better donation target than MIRI, CSER, or the FLI. But any donations would be both gratefully received and put to effective use. If you'd like to, you can donate to FHI here. Thank you!

Assessing oneself

13 polymer 26 September 2014 06:03PM

I'm sorry if this is the wrong place for this, but I'm kind of trying to find a turning point in my life.

I've been told repeatedly that I have a talent for math, or science (by qualified people). And I seem to be intelligent enough to understand large parts of math and physics. But I don't know if I'm intelligent enough to make a meaningful contribution to math or physics.

Lately I've been particularly sad, since my scores on the quantitative general GRE and, potentially, the Math subject test aren't "outstanding". They are certainly okay (official 78th percentile and unofficial 68th percentile, respectively), but that is "barely qualified" for a top-50 math program.

Given that I think these scores are likely correlated with my IQ (they seem to roughly predict my GPA so far: 3.5, as a math and physics major), I worry that I'm getting clues that maybe I should "give up".

This would be painful for me to accept if true, I care very deeply about inference and nature. It would be nice if I could have a job in this, but the standard career path seems to be telling me "maybe?"

When do you throw in the towel? How do you measure your own intelligence? I've already "given up" once before and tried programming, but the average actual problem was too easy relative to the intellectual work (memorizing technical fluff). And other engineering disciplines seem similar. Is there a compromise somewhere, or do I just need to grow up?

classes:

For what it's worth, the classes I've taken include Real and Complex Analysis, Algebra, Differential Geometry, Quantum Mechanics, Mechanics, and others. Most of my GPA is burned by Algebra and 3rd-term Quantum specifically. But part of my worry is that somebody who is going to do well would never get burned by courses like this. I'm not really sure, though. It seems like one should fail sometimes, but rarely on standard assessments.

Edit:

Thank you all for your thoughts, you are a very warm community. I'll give more specific thoughts tomorrow. For what it's worth, I'll be 24 next month.
