RichardKennaway comments on Rationality Quotes May 2012 - Less Wrong

6 Post author: OpenThreadGuy 01 May 2012 11:37PM




Comment author: RichardKennaway 09 May 2013 07:44:03AM 0 points

No. Given how strangely and differently AIXI works, it can easily stimulate new ideas.

The spin-off argument. Here's a huge compendium of spinoffs of previous approaches to AGI. All very useful, but not AGI. I'm not expecting better from AIXI.

Comment author: gwern 09 May 2013 03:45:25PM 3 points

Hm, so let's see; you started off mocking the impossibility and infeasibility of AIXI and any computable version:

I am not persuaded that the harder Bayesians have any more concrete answer. Solomonoff induction is uncomputable and seems to unnaturally favour short hypotheses involving Busy-Beaver-sized numbers. And any computable approximation to it looks to me like brute-forcing an NP-hard problem.
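To make the "favours short hypotheses" point concrete: the Solomonoff prior weights each hypothesis (program) by 2 to the minus its length in bits, so each extra bit halves the prior mass. A minimal numerical sketch (the real induction procedure is uncomputable; this only illustrates the weighting):

```python
# Illustrative only: the Solomonoff prior assigns a program p the weight
# 2**(-len(p)), so a hypothesis one bit shorter gets twice the prior mass.
def prior_weight(program_length_bits: int) -> float:
    """Prior mass assigned to a single program of the given bit length."""
    return 2.0 ** (-program_length_bits)

# A 10-bit hypothesis outweighs a 20-bit one by 2**10 = 1024, which is
# the sense in which the prior strongly favours short hypotheses.
ratio = prior_weight(10) / prior_weight(20)
print(ratio)  # 1024.0
```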

Then you admitted that actually every working solution can be seen as a form of SI/AIXI:

There might well be a theorem formalising that statement. There might also be one formalising the statement that every remotely optimal form of induction or decision-making is uncomputable. If that's the way it is, well, that's the way it is... Since AIXI is, by construction, the best possible intelligent agent, all work on AGI can, in a rather useless sense, be described as an approximation to AIXI
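For readers following along, the construction being appealed to is Hutter's AIXI, which (roughly, in Hutter's notation, with $U$ a universal Turing machine, $\ell(q)$ the length of program $q$, and $o_i r_i$ the observation-reward pairs) selects each action to maximise expected total reward under the universal prior:

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

The inner sum over all consistent environment programs $q$ is what makes the agent optimal by construction, and also what makes it uncomputable.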

And now you're down to arguing that it'll be "very useful, but not AGI".

Well, I guess I can settle for that.

Comment author: RichardKennaway 10 May 2013 08:33:24AM -1 points

I stand by the first quote. Every working solution can in a useless sense be seen as a form of SI/AIXI. In the sense that a hot-air balloon can be seen as an approach to landing on the Moon.

And now you're down to arguing that it'll be "very useful, but not AGI".

At the very most. Whether AIXI-like algorithms get into the next edition of Russell and Norvig, having proved of practical value, well, history will decide that, and I'm not interested in predicting it. I will predict that it won't prove to be a viable approach to AGI.

Comment author: gwern 10 May 2013 04:30:54PM 0 points

In the sense that a hot-air balloon can be seen as an approach to landing on the Moon.

How can a hot-air balloon even in theory be seen as that? Hot air has a specific altitude limit, does it not: the height at which its density equals that of the surrounding air?