
eli_sennesh comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM




Comment author: jacob_cannell 28 July 2015 10:29:58PM *  3 points [-]

So, to sum up, your plan is to create an arbitrarily safe VM, and use it to run brain-emulation-style de novo AIs

No. I said:

Stop thinking of AGI as some weird mathy program. Instead think of brain emulations - and then you have obvious answers to all of these questions.

I used brain emulations as an analogy to aid your understanding. Because unless you have deep knowledge of machine learning and computational neuroscience, there are huge inferential distances to cross.

Human beings are not just general optimizers.

Yes we are. I have made a detailed, extensive, heavily cited, and well-reviewed case that human minds are just that.

All of our understanding about the future of AGI is based ultimately on our models of the brain and AI in general. I am claiming that the MIRI viewpoint is based on an outdated model of the brain, and a poor understanding of the limits of computation and intelligence.

I will summarize one last time. I will then stop repeating myself, because any time spent arguing this is better spent preparing another detailed article rather than a short comment.

There is extensive uncertainty concerning how the brain works and what types of future AI are possible in practice. In situations of such uncertainty, any sane probabilistic reasoner should come up with a multimodal distribution that spreads belief across several major clusters. If your understanding of AI comes mainly from reading LW, you are probably biased beyond hope. I'm sorry, but this is true. You are stuck in a box and don't even know it.

Here are the main key questions that lead to different belief clusters:

  • Are the brain's algorithms for intelligence complex or simple?
  • And related - are human minds mainly software or mainly hardware?
  • At the practical computational level, does the brain implement said algorithms efficiently or not?

If the human mind is built out of a complex mess of hardware-specific circuits, and the brain is far from efficient, then there is little to learn from the brain. This is Yudkowsky/MIRI's position. This viewpoint leads to a focus on pure math and avoidance of anything brain-like (such as neural nets). In this viewpoint hard takeoff is likely, AI is predicted to be nothing like human minds, etc.

If you believe that the human mind is complex, messy hardware, but the brain is efficient, then you get Hanson's viewpoint, where the future is dominated by brain emulations. The brain ems win over brain-inspired AI because scanning real brain circuitry is easier than figuring out how it works.

Now what if the brain's algorithms are not complex, and the brain is efficient? Then you get my viewpoint cluster.
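The two key questions above carve out a small space of belief clusters. As a purely illustrative sketch (the labels and mapping below paraphrase this comment; nothing here comes from any library or source), the positions can be laid out as a 2x2 lookup:

```python
# Hypothetical sketch of the 2x2 belief-cluster space described above.
# Keys: (brain's algorithms simple?, brain's implementation efficient?)

clusters = {
    (False, False): "Yudkowsky/MIRI: pure math, hard takeoff, AI unlike human minds",
    (False, True):  "Hanson: brain emulations win; scanning beats reverse-engineering",
    (True,  True):  "Cannell: practical AGI will be brain-like (neuromorphic)",
    (True,  False): "(a cluster not occupied by any major position in this debate)",
}

def viewpoint(algorithms_simple: bool, brain_efficient: bool) -> str:
    """Look up the viewpoint cluster implied by answers to the two key questions."""
    return clusters[(algorithms_simple, brain_efficient)]

print(viewpoint(False, False))  # the MIRI cluster
print(viewpoint(True, True))    # the author's cluster
```

The point of the table is that each position follows mechanically from its answers to the empirical questions, which is why the questions themselves are the crux.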

These questions are empirical - and they can be answered today. In fact, I realized all this years ago and spent a huge amount of time learning more about the future of computer hardware, the limits of computation, machine learning, and computational neuroscience.

Yudkowsky, Hanson, and to some extent Bostrom were all heavily inspired by the highly influential evolved modularity hypothesis in ev psych from Tooby and Cosmides. In this viewpoint, the brain is complex, and most of our algorithmic content is hardware-based rather than software-based. I have argued that this viewpoint has been tested empirically and is now disproven. The brain is built out of relatively simple universal learning algorithms. It will be essentially impossible to build practical AGI that is very different from the brain (remember, AGI is defined as software which can do everything the brain does).

Bostrom/Yudkowsky have also argued that the brain is very far from efficient. For example, from "True Sources of Disagreement":

Human neurons run at less than a millionth the speed of transistors, transmit spikes at less than a millionth the speed of light, and dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature. Physically speaking, it ought to be possible to run a brain at a million times the speed without shrinking it, cooling it, or invoking reversible computing or quantum computing.

The first two statements are true, the third statement is problematic, and the thrust of the conclusion is incorrect. The minimum realistic energy for a brain-like circuit is probably close to what the brain actually uses:

  • the Landauer bound depends on speed and reliability. The 10^-21 J/bit bound only applies to a signal of infinitely low frequency. For realistic fast, reliable signals, the bound is 100 times higher: around 10^-19 J/bit.
  • the Landauer bound applies to single one-bit ops. The fundamental bound for a 32-bit flop is around 10^5 or 10^6 times higher. Moore's Law is ending and we are actually close to these bounds already. Synapses perform analog ops which have lower cost than a 32-bit flop, but still a much higher cost than a single-bit op.
  • most of the energy consumption in any advanced computer comes from wire dissipation, not switch dissipation. Signaling in the brain uses roughly 0.5x10^-14 J/bit/mm (5 fJ/bit/mm) [2], which appears to be within an order of magnitude or two of optimal, and is perhaps one order of magnitude more efficient than current computers. Wire signal energy in computers is not improving significantly. For example, for 40nm tech in 2010, the wire energy is 240 fJ/bit/mm, and is predicted to be around 150 to 115 fJ/bit/mm by 2017 [3]. The practical limit is perhaps around 1 fJ/bit/mm, but that would probably require much lower speeds.
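The corrections above can be tallied with simple arithmetic. The following is a rough sketch using only the figures quoted in the bullets (treat every number as an order-of-magnitude estimate, not a physical calculation):

```python
import math

# Rough tally of the corrections in the bullets above; all figures are
# order-of-magnitude estimates taken from the comment, not measured constants.

naive_bound = 1e-21          # J/bit: idealized Landauer figure in the quoted claim
reliability_factor = 100     # fast, reliable signaling: ~100x the idealized bound
flop_factor = 1e5            # a 32-bit flop: ~10^5-10^6 single-bit ops (lower end)

practical_flop_bound = naive_bound * reliability_factor * flop_factor
print(f"practical 32-bit flop bound: ~{practical_flop_bound:.0e} J")   # ~1e-14 J

# Wire-energy comparison from the third bullet:
brain_wire = 5.0             # fJ/bit/mm, brain signaling
chip_wire_2010 = 240.0       # fJ/bit/mm, 40nm technology circa 2010
print(f"brain wiring advantage: ~{chip_wire_2010 / brain_wire:.0f}x")  # ~48x

# Total correction vs the naive 10^-21 J/bit comparison; analog synaptic ops
# cost somewhat less than a full 32-bit flop, which is how the comment's
# "~6 orders of magnitude" nets out slightly below this figure.
print(f"~{math.log10(practical_flop_bound / naive_bound):.0f} orders of magnitude")
```

The factors multiply, so a two-orders error in the per-bit bound and a five-orders error in the per-op granularity compound directly.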

These errors add up to around 6 orders of magnitude or so. The brain is near the limits of energy efficiency for what it does in terms of irreversible computation. No practical machine we build in the near future is going to be many orders of magnitude more efficient than the brain. Yes, eventually reversible and quantum computing could perhaps result in large improvements, but those technologies are far off and will come long after neuromorphic AGI.

Comment author: [deleted] 03 August 2015 04:01:35AM 0 points [-]

Yes we are. I have made a detailed, extensive, heavily cited, and well-reviewed case that human minds are just that.

That isn't quite correct. We do have hard-wiring that raises and lowers the from-the-inside importance of specific features present in our learning data. That is, we have a nontrivial inductive bias which not all possible minds will have, even if we start by assuming that all minds are semi-modular universal learners.