eli_sennesh comments on FAI Research Constraints and AGI Side Effects - Less Wrong

14 points | Post author: JustinShovelain | 03 June 2015 07:25PM

Comment author: [deleted] 10 June 2015 11:59:43PM 0 points [-]

> The planning model is just reflecting the fact that bounded agents don't always take the maximum expected utility action. The higher alpha is, the more bias there is towards good actions, but the more potentially expensive the computation is (e.g. if you use rejection sampling).

Ah, that makes sense! I think I see how "trading computational power for algorithmic information" makes sense in this framework.
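To make sure I have the trade-off right, here's a minimal sketch of that kind of bounded agent: sample actions with probability proportional to exp(alpha · utility) via rejection sampling. The action set, utility function, and alpha value are all made up for illustration; the point is just that raising alpha sharpens the bias towards good actions while making more proposals get rejected.

```python
import math
import random

def noisy_argmax(actions, utility, alpha, rng, max_tries=10000):
    """Sample an action with probability proportional to exp(alpha * utility(a))
    via rejection sampling. Higher alpha concentrates mass on better actions,
    but also raises the expected number of rejected proposals."""
    u_max = max(utility(a) for a in actions)
    for _ in range(max_tries):
        a = rng.choice(actions)
        # Accept with probability exp(alpha * (utility(a) - u_max)) <= 1.
        if rng.random() < math.exp(alpha * (utility(a) - u_max)):
            return a
    return max(actions, key=utility)  # give up and act greedily

rng = random.Random(0)
actions = list(range(4))
utility = lambda a: a / 3.0
picks = [noisy_argmax(actions, utility, alpha=5.0, rng=rng) for _ in range(1000)]
# With alpha = 5 the best action dominates; with alpha = 0 this is uniform.
```

At alpha → 0 this is a uniform random agent, at alpha → ∞ it's a pure expected-utility maximizer, and the expected number of rejections interpolates between the two.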

And before I could scribble a damned thing, Calude went and solved it six months ago. The Halting Problem, I mean.

I wonder how he feels about that, because my current feeling about this is HOLY FUCKING SHIT. By GOD, my AIT textbook cannot get here fast enough.

Comment author: jessicat 13 June 2015 08:05:59AM 0 points [-]

> And before I could scribble a damned thing, Calude went and solved it six months ago. The Halting Problem, I mean.

Cool. If I understand the result correctly, it's that if you run a random program for some number of steps and it doesn't halt, then (depending on the exact numbers) it will be unlikely to halt when run on a supercomputer either, because halting times have low density. So almost all programs either halt quickly or run for a really, really long time. Is this correct? This doesn't quite let you approximate Chaitin's omega, but it's interesting that you can approximate a bounded variant of it (like what percentage of Turing machines halt within 10^50 steps). I can see how this would let you solve the halting problem well enough when you live in a bounded universe.

Comment author: [deleted] 13 June 2015 07:05:15PM *  0 points [-]

> Cool. If I understand the result correctly, it's that if you run a random program for some number of steps and it doesn't halt, then (depending on the exact numbers) it will be unlikely to halt when run on a supercomputer either, because halting times have low density. So almost all programs either halt quickly or run for a really, really long time. Is this correct?

Yep. Or, to put it a little tiny bit more accurately, you get a halting probability for your particular Turing machine, conditioned on the number of steps for which you've run it.
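You can see the shape of that conditional probability empirically, even on a toy. Below I use a seeded random walk (halting = first return to the origin) as a cheap stand-in for "run a random program" — so the numbers illustrate the heavy-tailed picture, not Calude's actual theorem. The conditional probability of halting soon, given survival so far, falls as you wait longer:

```python
import random

def halting_time(program_seed, max_steps):
    """Toy stand-in for running a random program: a seeded +/-1 walk that
    'halts' when it first returns to the origin. Returns the halting step,
    or None if it is still running after max_steps."""
    rng = random.Random(program_seed)
    pos = 0
    for step in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return step
    return None

def halts_soon_given_alive(times, t, delta):
    """Empirical P(halts by t + delta | still running at t)."""
    alive = [h for h in times if h is None or h > t]
    soon = [h for h in alive if h is not None and h <= t + delta]
    return len(soon) / len(alive)

times = [halting_time(seed, max_steps=5000) for seed in range(5000)]
early = halts_soon_given_alive(times, t=10, delta=100)
late = halts_soon_given_alive(times, t=500, delta=100)
# The longer a "program" has already run, the less likely it halts soon.
```

First-return times of a random walk have a power-law tail, which is what makes the conditional probability decay; a universal machine's halting-time distribution is of course a different object, but the low-density intuition is the same.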

> This doesn't quite let you approximate Chaitin's omega

Well technically, you can approximate Chaitin's omega from below just as Chaitin himself describes in his book. You'll just only be able to calculate finitely many digits.
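For what it's worth, the from-below approximation is easy to demonstrate on a toy machine (my own toy, not Chaitin's construction): a self-delimiting machine that reads bits and halts on the first 1, so its halting programs {1, 01, 001, ...} form a prefix-free set. Dovetailing over program lengths and step bounds gives a monotonically increasing lower bound on its Omega:

```python
from fractions import Fraction

def toy_halts(program_bits, steps):
    """Toy self-delimiting machine: reads one bit per step and halts on the
    first 1. Its halting programs {1, 01, 001, ...} are prefix-free, as
    Chaitin's construction requires."""
    for bit in program_bits[:steps]:
        if bit == 1:
            return True
    return False

def omega_lower_bound(max_len, steps):
    """Approximate this machine's Omega from below: sum 2^(-|p|) over every
    program of length <= max_len found to halt within `steps` steps. The
    bound only ever increases as max_len and steps grow."""
    total = Fraction(0)
    for n in range(1, max_len + 1):
        program = [0] * (n - 1) + [1]  # the one halting program of length n
        if toy_halts(program, steps):
            total += Fraction(1, 2 ** n)
    return total

# For this machine Omega = 1, and the stage-(max_len=10) bound is 1 - 2^-10.
```

For a real universal machine the same schema works — enumerate programs, run each for a growing number of steps, add 2^(-|p|) when one halts — you just can't ever know how many digits of the bound are final.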

> I can see how this would let you solve the halting problem well enough when you live in a bounded universe.

Which we do ;-). You could run until you get past a desired threshold of probability (hypothesis testing), or you could use a bounded-rationality approach to vary your surety of nonhalting with your available processing power.
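Here's roughly what the hypothesis-testing version could look like, on fabricated halting-time data (the 50/50 halting split and the Pareto tail are assumptions for illustration, not anything from Calude's paper): keep doubling your step budget until "still running at t" empirically implies "halts with probability below epsilon".

```python
import random

def halted_fraction_of_survivors(times, t):
    """Among programs still running at step t, the empirical probability of
    eventually halting (None means 'never halted in our data')."""
    alive = [h for h in times if h is None or h > t]
    return sum(1 for h in alive if h is not None) / len(alive)

def budget_for(times, epsilon):
    """Double the step budget until 'still running after t steps' implies
    'halts with probability below epsilon' under the empirical distribution."""
    t = 1
    while halted_fraction_of_survivors(times, t) >= epsilon:
        t *= 2
    return t

# Toy data: half the programs never halt; the rest halt at a heavy-tailed
# (Pareto-like) time, so late halters exist but get rare.
rng = random.Random(1)
times = [None if rng.random() < 0.5 else int(rng.paretovariate(1.0)) + 1
         for _ in range(20000)]
budget = budget_for(times, epsilon=0.05)
```

The bounded-rationality variant is the same idea with epsilon set by how much computation you can afford rather than by a fixed significance level.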

But overall, it gives you a way to "reason around" the Halting Problem, which, when we apply it to the various paradoxes of self-reference... you can see where I'm going with this.

Comment author: Manfred 11 June 2015 01:56:13AM 0 points [-]

> my AIT textbook cannot get here fast enough.

You can always acquire it nautically to cut down on shipping time :P

Which one did you get? I went with Li and Vitanyi, and have enjoyed my first read-through.

Comment author: [deleted] 12 June 2015 04:15:55AM 0 points [-]

The first one I bought was Calude's, but it demands more background in pure maths than I have, so I went for Li and Vitanyi's.