Eliezer Yudkowsky has written about the idea of a "codic cortex"; that is, a specialized mental module for modelling the behavior of executable code.
And something like that would be really useful! For instance, there's fundamentally no good reason to introduce implementation bugs when writing code, or to fail to notice them when reading it. The techniques for proving code correct are well known; but in practice, actually applying them is so expensive for a human programmer (in terms of productivity) that it's usually more efficient to skip them and find and fix bugs after the fact.
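To make that cost concrete, here's a minimal sketch (my own illustration, not from the original discussion) of the kind of contract a correctness proof formalizes, written as runtime assertions rather than a machine-checked proof:

```python
# Illustrative only: the precondition, postcondition, and loop invariant
# below are the ingredients of a Hoare-style correctness proof, here
# merely checked at runtime rather than proved statically.

def sum_upto(n: int) -> int:
    """Return 0 + 1 + ... + n, with its correctness contract made explicit."""
    assert n >= 0                      # precondition
    total = 0
    for i in range(n + 1):
        total += i
        # loop invariant: after processing i, total == 0 + 1 + ... + i
        assert total == i * (i + 1) // 2
    assert total == n * (n + 1) // 2   # postcondition
    return total
```

Writing and maintaining annotations like these for every function, let alone discharging them in a proof assistant, is exactly the productivity cost being pointed at here.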
This is despite the fact that humans already have (in absolute terms) very good models of how the code they write works; if we didn't, we couldn't do nontrivial programming at all. But those models are sloppy, and some details are easy for humans to miss. If we instead built the models from a formal and precise analysis of the code at hand, we'd get much better predictions out of them.
A lot of programming language/environment development has been concerned with having the compiler or runtime handle certain things (translating high-level structures to machine code, garbage collection, type safety, etc.) so that the programmer doesn't need to worry about getting them right - both so they don't end up getting them wrong, and so that having to painstakingly get them right doesn't drain their productivity. But such approaches usually come with some performance cost, and in the end they're crutches to deal with the fact that humans are no good at programming. None of them would be necessary for an intelligence that had a decent specialized module to handle code modelling.
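One small, concrete instance of this trade-off (my own example, not from the post): a bounds-checked runtime like Python's turns an out-of-range access into a loud error instead of silent memory corruption, at the cost of a check on every access that a sufficiently reliable programmer would never need.

```python
# Illustrative only: the runtime catches a mistake that, in unchecked C,
# could silently read whatever happens to sit past the end of the array.

xs = [10, 20, 30]
try:
    print(xs[5])
except IndexError as e:
    print(f"caught by the runtime: {e}")  # list index out of range
```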
When programming, I frequently find myself kind of guessing at what code will work, pumping some input through the function I've written and checking whether the output is consistent with my expectations.
I do not always bother to figure out whether a loop should start at zero or one before I just try it.
Yes, this can cause problems, but that process seems to run counter to what is said here.
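For what it's worth, that guess-and-check loop might look something like this in miniature (the function and test value are my own hypothetical, not the commenter's):

```python
# Toy version of the workflow described above: unsure whether the loop
# should start at 0 or 1, just try one version and compare the output
# against a case small enough to check by hand.

def sum_first_n(n: int) -> int:
    total = 0
    for i in range(1, n + 1):   # the first guess might have been range(n)
        total += i
    return total

print(sum_first_n(4))  # expect 1 + 2 + 3 + 4 == 10
```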
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.
This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily the part being cited for the specific claim).
Reading: Chapter 3 (p52-61)
Summary
Notes
In-depth investigations
If you are particularly interested in these topics and want to do further research, here are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group, though, is the discussion, which happens in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.