Size of the smallest recursively self-improving AI?

4 alexflint 30 March 2011 11:31PM

For no reason in particular I'm wondering about the size of the smallest program that would constitute a starting point of a recursively self-improving AI.

The analysis of FOOM as a self-amplifying process would seem to indicate that in principle one could get it started from a relatively modest starting point -- perhaps just a few bytes of the right code could begin the process. Or could it? I wonder whether any other considerations give tighter lower-bounds.

One consideration is that FOOM hasn't already happened -- at least not here on Earth. If the smallest FOOM seed were very small (a few hundred bytes, say) then we would expect evolution to have stumbled on it at some point. Although evolution is under no specific pressure to produce a FOOM, over the last few billion years it has probably produced all the interesting computations up to some modest level of complexity, and if a FOOM seed were among them then we would see the results around us.

Then there is the more speculative analysis of what minimal expertise the algorithm constituting the FOOM seed would actually need.

Then there is the fact that any algorithm that naively enumerates some space of algorithms qualifies in some sense as a FOOM seed, since it will eventually hit on some recursively self-improving AI. But that could take gigayears, so it is really not FOOM in the usual sense.
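The enumeration point can be made concrete with a toy dovetailer (everything here is hypothetical scaffolding, not a real FOOM seed): interleave execution across an ever-growing set of candidate programs so that non-halting candidates never block the search. Run long enough, such a loop visits every program in the space, which is why it technically "qualifies" while being useless in practice.

```python
# Toy dovetailing enumerator, Levin-search style. Programs are modelled
# as bitstrings; `run_step` and `is_goal` are placeholders for whatever
# interpreter and success test one has in mind.
from itertools import count, product

ALPHABET = "01"

def programs():
    """Enumerate all bitstrings in length order (stand-ins for programs)."""
    for n in count(1):
        for bits in product(ALPHABET, repeat=n):
            yield "".join(bits)

def dovetail(run_step, is_goal, max_rounds=100):
    """Run ever more programs for ever more steps, interleaved.

    run_step(prog, steps) -> result after `steps` steps (or None if still running).
    is_goal(result)       -> True if the program's output qualifies.
    """
    gen = programs()
    active = []
    for steps in range(1, max_rounds + 1):
        active.append(next(gen))          # admit one new program per round
        for prog in active:
            result = run_step(prog, steps)
            if result is not None and is_goal(result):
                return prog
    return None
```

The interleaving is the important part: each round gives every admitted program one more step, so no single non-halting candidate can stall the enumeration, but the total work grows quadratically in the number of rounds, which is the sense in which "gigayears" is the operative word.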

I wonder also whether the fact that mainstream AI hasn't yet produced FOOM could lower-bound the complexity of doing so.

Note that here I'm referring to recursively self-improving AI in general -- I'd be interested if the answers to these questions change substantially for the special case of friendly AIs.

Anyway, just idle thoughts, do add yours.

Audio from Eliezer's talk at the Oxford Transhumanists

8 alexflint 29 March 2011 09:31PM

In January we hosted Eliezer at an Oxford Transhumanists meeting. He spoke about why AI is such an incredibly consequential consideration, over and above other technologies. This will not be new material for regular lesswrong readers. The recordings from Eliezer's talk, along with previous talks, are available at http://groupspaces.com/oxfordtranshumanists/pages/past-talks.

Rationality for Turing machines

2 alexflint 23 March 2011 12:38AM

What can we say about rational behaviour among realizable algorithms, i.e. Turing machines? What limits can we put on their performance? What exactly does Cox's theorem have to say?

Cox's theorem tells us that optimally rational beliefs imply Bayesian reasoning. But such results are independent of any specific model of computation. Can we rigorously apply machinery like Cox's theorem if we state the problem in terms of optimal behaviour among general recursive algorithms -- i.e. realizable algorithms?

To make that a little more concrete: take the problem of writing an algorithm (designing a Turing machine) that processes sensor inputs and writes actions as output. As a concrete Turing machine, the algorithm consists of discrete steps at which it either writes an output symbol (corresponding to some action) or does not (which we can interpret as some null action). There is a second Turing machine corresponding to the external world that responds to each action by changing its state, which feeds back into the sensor inputs of our algorithm. We seek the algorithm maximizing the expectation of some pre-specified utility function U, defined over states of the world. To define this rigorously we need to nail down how utility is summed over time, perhaps something along the lines of Legg and Hutter's formalism would be appropriate. Or perhaps we can just terminate the process after some pre-specified number of steps.
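The coupling described above can be sketched in a few lines (all function names here are assumptions for illustration, not part of any standard formalism): the agent and the world are step functions exchanging symbols for a fixed horizon, and the agent is scored by summing a utility function over the resulting world states.

```python
# Minimal sketch of the agent/environment interaction loop with a
# pre-specified horizon, as in the finite-termination variant above.
NULL_ACTION = None  # the "null action" emitted when the agent does not act

def interact(agent_step, env_step, utility, agent_state, world_state, horizon):
    """Run the sensor/action loop for `horizon` steps and sum utility.

    agent_step(state, percept) -> (new_state, action or NULL_ACTION)
    env_step(state, action)    -> (new_state, percept)
    utility(world_state)       -> utility accrued in that world state
    """
    total = 0.0
    percept = None
    for _ in range(horizon):
        agent_state, action = agent_step(agent_state, percept)
        world_state, percept = env_step(world_state, action)
        total += utility(world_state)
    return total
```

The design question in the post is then: over all computable choices of `agent_step`, which maximizes the expectation of this sum, given that each call to `agent_step` is itself a bounded computation?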

So it seems to me that Cox's theorem establishes that there must be a Bayes-optimal decision at any point in time, and a "Bayes oracle" that always outputs this decision would be optimal among all algorithms. But such an oracle is certainly not directly realizable as a Turing machine, since at each time step a Turing machine simply changes state, moves the tape, and writes a symbol, whereas a full Bayesian update could require arbitrary amounts of computation. And there is no option of not acting, only of outputting the null action.

One approach would be to make a Bayes-optimal decision about how much computation to spend on each update, taking opportunity costs into account. But while this seems intuitively reasonable, performing that meta-level computation has an opportunity cost of its own, so the question is really how sure we can be that this is the best design choice.

Nevertheless, it seems quite reasonable to write Bayesian algorithms, and indeed to expect them to perform optimally on average. But can we formalize and prove a result along these lines? Does someone know of existing work in this direction? Perhaps Cox's theorem or something similar applies in some direct way that I haven't perceived?

Enjoying musical fashion: why not?

2 alexflint 21 February 2011 04:22PM

I just downloaded the latest Radiohead album, and I love it.

Thinking back, I started listening to Radiohead years ago when I found out that some of the cool kids in school were into it. With all the hype about the new album, the status/fashion processors in my brain are going to ensure that I enjoy listening to it. I would probably fail a double-blind test against a bunch of imitation bands' fake "new Radiohead albums".

But I'm really enjoying listening to the album, and that doesn't seem like a bad or contradictory thing at all, even in light of the statements above. If, hypothetically, I were enjoying it for purely non-fashion reasons, then presumably that enjoyment could also be traced back through a causal chain to facts about brain development, evolutionary psychology, or whatever. But we would have no problem accepting that enjoyment as A Good Thing, since explaining enjoyment does not diminish it. And so it seems in this case.

Automated theorem proving by learning from examples

3 alexflint 16 February 2011 01:38PM

Does anyone know of work that attempts to build a theorem prover by learning-from-examples? I'm imagining extracting a large corpus of theorems from back issues of mathematical journals, then applying unsupervised structure discovery techniques from machine learning to discover recurring patterns.

Perhaps a model of the "set of theorems that humans tend to produce" would be helpful in proving new theorems.

The unsupervised-structure-discovery bit does seem within the realm of current machine learning.
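As a toy illustration of what the unsupervised step might look like (the corpus format and whitespace tokenization here are placeholder assumptions, far cruder than what real journal text would need), one could start by counting token n-grams that recur across theorem statements:

```python
# Crude stand-in for "discovering recurring patterns": count which token
# n-grams appear in more than one theorem statement in the corpus.
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token windows of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def recurring_patterns(corpus, n=3, min_count=2):
    """Return n-grams occurring in at least `min_count` distinct theorems."""
    counts = Counter()
    for theorem in corpus:
        tokens = theorem.split()
        counts.update(set(ngrams(tokens, n)))   # count once per theorem
    return {g: c for g, c in counts.items() if c >= min_count}
```

On a corpus like `["for all x , x = x", "for all x , x + 0 = x"]` this surfaces shared fragments such as `("for", "all", "x")`. A serious attempt would of course want structured representations of the theorems and richer latent-variable models, but the same count-what-recurs idea underlies much unsupervised structure discovery.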

Any references to related work?

Not owning our beliefs

6 alexflint 15 February 2011 03:02PM

Julian Baggini argues that we might be more willing to judge our beliefs objectively if we avoid thinking of them as "our own". I hadn't thought before about explicitly distancing myself from my beliefs in this sense.

This Sunday: Oxford Rationality Meetup

7 alexflint 28 January 2011 03:06PM

This Sunday there will be a rationality meetup in Oxford.

Where: Entrance to Exeter College, Turl Street (map here)
When: 5-7pm, Sunday January 30th

We'll be discussing artificial intelligence and existential risks. We plan to split into two groups to ensure we have something for everyone. This looks like it'll be a really interesting discussion.

Also, Anna Salamon from the Singularity Institute will be joining us! She's leaving the UK soon, so this is a great chance to discuss some ideas with her.

We also have a new game to play at the beginning ;)

My number is 07595983672. See you there!

Eliezer to speak in Oxford, 8pm Jan 25th

8 alexflint 17 January 2011 11:58PM

Next Tuesday Eliezer will be giving a talk in conjunction with the Oxford Transhumanists entitled "Smarter-Than-Human Artificial Intelligence: Predictably the Most Important Thing in the History of Ever". The talk is open to everybody, and it would be great to meet any of you who are in the area and can come down. The talk will be followed by drinks at the Turf Tavern, and hopefully we will be able to drag Eliezer with us for further discussion.

When: 8pm, Tuesday January 25th
Where: Saskatchewan Room, Exeter College, Turl Street, OX1 3DP (there will be ushers to guide you to the room from the college entrance)
RSVP: http://www.facebook.com/event.php?eid=182377671794308

Eliezer doesn't often speak in this part of the world so don't miss it!

I want to learn economics

8 alexflint 13 January 2011 11:02PM

I would like to learn more about economics but I don't know where to start. Can lesswrong suggest specific areas of economics that are particularly useful for understanding and optimising the world? Specific suggestions such as reading lists and resources would also be much appreciated.

Stanford historian on the singularity

4 alexflint 06 November 2010 10:01AM

Ian Morris on "why the west rules", which seems to be a provocative title for an interesting book on historical geographical trends and their projection into the future: http://www.youtube.com/watch?v=tvkHiL-H2io. He starts talking about the future at minute 27 and basically concludes that a singularity scenario is one of two possibilities for the 21st century, the other being collapse. Nothing new, but encouraging to see this increasingly in the mainstream.
