
Manfred comments on Efficient Induction - Less Wrong Discussion

3 Post author: paulfchristiano 27 December 2010 10:40AM


Comment author: Manfred 27 December 2010 08:58:30PM 2 points

You know more about this than I do - is the three-body problem computable by the technical definition?

Comment author: JoshuaZ 28 December 2010 01:26:10AM 2 points

Ok. I really want to know why this question got voted down. It seems like a perfectly reasonable question for someone who doesn't know much about these subjects and has heard that the three-body problem is unsolvable.

Comment author: paulfchristiano 27 December 2010 09:32:28PM 1 point

You can numerically integrate the three-body problem, so there is a linear-time algorithm: approximately computing what will happen after time t takes O(t) steps. There just isn't a logarithmic-time algorithm (which is what we would normally mean by "closed form").

Comment author: Manfred 28 December 2010 02:23:49AM * 0 points

Ah, okay. So all that matters is that the right answer is computable given infinite resources. How do you reconcile this with the apparently finite computational resources of the universe?

Comment author: Sniffnoy 28 December 2010 02:41:35AM * 3 points

To be clear - the three-body problem deals with real numbers. Computers can't work directly with real numbers, since real numbers can't be finitely represented. Hence when we speak about the computability of a problem "compute f(x)" that calls for a real number answer, we really mean the computability of the problem "compute f(x) to within epsilon" (where now epsilon is another input to the problem).
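For instance, the task "output sqrt(2)" only becomes computable once the precision is an input: "output sqrt(2) to within eps, for any given eps". A toy sketch of that kind of procedure (the bisection method and exact rational arithmetic here are just one illustrative choice):

```python
from fractions import Fraction

def sqrt2_within(eps):
    """Approximate sqrt(2) to within eps by bisection on exact rationals."""
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo                           # guaranteed within eps of sqrt(2)

approx = sqrt2_within(Fraction(1, 10**6))
```

The program is finite and fixed; only the requested precision varies, which is exactly the sense in which an irrational quantity counts as computable.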

Comment author: paulfchristiano 28 December 2010 02:33:18AM 0 points

You can compute what will happen efficiently. If you want to know where the 3 bodies are after t seconds to within some constant error, it only takes you about O(t) steps.
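A minimal sketch of such a linear-in-t integration (the units, masses, initial conditions, and the explicit-Euler scheme are all illustrative assumptions, not anything specified in the thread):

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units (assumption for this sketch)

def accelerations(pos, mass):
    """Pairwise Newtonian gravitational accelerations on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def simulate(pos, vel, mass, dt, t_end):
    """Explicit Euler: O(t_end / dt) steps, i.e. linear in t for a fixed step size."""
    for _ in range(int(t_end / dt)):
        acc = accelerations(pos, mass)
        pos = pos + dt * vel
        vel = vel + dt * acc
    return pos, vel

# Three equal masses at the vertices of a triangle, initially at rest:
pos0 = np.array([[1.0, 0.0], [-0.5, 0.866], [-0.5, -0.866]])
pos_t, vel_t = simulate(pos0, np.zeros((3, 2)), np.array([1.0, 1.0, 1.0]),
                        dt=0.001, t_end=0.1)
```

For a fixed dt the work grows linearly with t_end; the catch, discussed below, is what happens to the error over that time.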

Comment author: rwallace 28 December 2010 04:26:29AM 1 point

But errors accumulate over time. That means if you want the error to be bounded by a constant at the end of the calculation, then the larger t is, the more you have to do along the way to avoid or correct error. If that just means you have to use higher-precision calculations, that would give O(t log t); but if it means you have to use finer time steps, that would make the computational effort O(t^2) or worse. (With a first-order method, for example, global error grows roughly like t·h, so holding it constant forces the step size h to shrink like 1/t, and the number of steps t/h to grow like t^2.)

Comment author: JoshuaZ 28 December 2010 04:54:31AM 0 points

This is primarily a function of how accurately you can do division and multiplication. Even if it isn't O(t), it is almost certainly O(t^(1+epsilon)) for any epsilon>0.

Comment author: rwallace 28 December 2010 11:08:55AM 5 points

Suppose for the sake of argument you had infinite precision division and multiplication. There would still be finite error due to the use of finite time step size. (Runge-Kutta methods etc. give smaller error for a given step size than the more obvious numerical integration methods, but still not zero.) If you want to reduce the error, you need to use a smaller step size. Generally speaking, the error is a polynomial function of the step size (unlike arithmetic error, which decreases exponentially with the number of digits of precision), so you would expect O(t^(1+epsilon)) to be unattainable. Unless there's some method of reducing error exponentially with step size for the three body problem that I'm missing?
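The polynomial (rather than exponential) dependence of error on step size is easy to check empirically. In this sketch (the test equation y' = y with exact answer e is my choice for illustration, not anything from the thread), doubling the number of steps roughly halves Euler's global error (order h) but divides classical Runge-Kutta's by about 16 (order h^4):

```python
import math

def euler(f, y0, t_end, n):
    """Explicit Euler with n steps: global error is O(h), h = t_end / n."""
    h = t_end / n
    y, t = y0, 0.0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t_end, n):
    """Classical fourth-order Runge-Kutta: global error is O(h^4)."""
    h = t_end / n
    y, t = y0, 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Integrate y' = y from y(0) = 1 to t = 1; the exact answer is e.
f = lambda t, y: y
err = lambda method, n: abs(method(f, 1.0, 1.0, n) - math.e)
```

Either way the error shrinks only polynomially in h, so halving the error costs a fixed multiplicative factor in steps, never just one more "digit" of work.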

Comment author: paulfchristiano 28 December 2010 11:17:58PM 3 points

Runge-Kutta takes O(delta^(-1/4)) time to get an approximation quality of delta, I think. I don't know if we can yet, but I suspect it is possible to get an approximation quality of delta in time O(delta^(-epsilon)) for any epsilon>0 (in the same sense that I suspect it will eventually be possible to multiply two nxn matrices in time O(n^(2+epsilon)) for any epsilon>0, even though it's not practical at all). This would probably imply JoshuaZ's stated time bound. It doesn't require exponentially fast error reduction, just arbitrarily good polynomial error reduction.

Anyway, the model I described in the post doesn't actually have this problem. More precision just comes from using a finer discrete system to approximate the universe (if in fact it is continuous, which I would put a probability of less than 50% on) and still using a linear size circuit to do the simulation. You only pay logarithmically for using a finer grid, in any of the schemes I proposed.

Comment author: rwallace 29 December 2010 01:22:20AM 1 point

An infinite sequence of algorithms converging on arbitrarily good polynomial error reduction? Fair enough, I certainly can't rule that out at this stage.

But I don't understand your last point: how can you pay only logarithmically for using a finer grid?

Comment author: paulfchristiano 29 December 2010 01:41:54AM 2 points

The post had a concrete complexity measure, which pays logarithmically for a finer grid (that is, doubling the size of the universe is the same as adding one more bit of complexity). The point is, you can only afford to pay logarithmically in the size of the universe (if you want known physical theories to have good complexity as compared to stupid explanations for our observations). Making the grid twice as fine is just making the universe twice as large, so you only pay 1 more bit: the extra bit needed to describe the larger size of the universe. If you disagree with this then you probably disagree fundamentally with my approach. That is obviously valid; I don't really like my approach that much. But alternative approaches, like the speed prior, seem much worse to me.
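A toy rendering of the "one more bit" accounting (an illustration of the bookkeeping only, not the full complexity measure from the post): if the description of the universe includes the grid size written in binary, that part of the description costs log2(N) bits, so doubling the fineness adds exactly one bit.

```python
import math

def size_description_bits(n_cells):
    # Cost of writing the grid size itself into the description: log2(n_cells) bits.
    return math.log2(n_cells)

# Making the grid twice as fine adds exactly one bit to the description:
extra = size_description_bits(2 ** 31) - size_description_bits(2 ** 30)
```

So the penalty for a finer grid grows only logarithmically in the grid size, even though the simulating circuit grows linearly.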

Comment author: JoshuaZ 28 December 2010 01:27:37PM 3 points

Hmm, yes you are correct. I was being stupid.

Comment author: Manfred 28 December 2010 03:08:28AM 0 points

Well, it's simple to find a chaotic problem that's not efficiently computable. I was just trying to understand what "the universe is computable" really means, since the universe isn't exactly computable.

Comment author: saturn 28 December 2010 06:06:43AM 1 point

It seems like you and some others in this thread are assuming that real numbers describe some actual behavior of the universe, but that's begging the question. If the universe is computable, it implies that all quantities are discrete.

Comment author: rwallace 28 December 2010 11:55:47AM 1 point

Well, if it turns out the universe is continuous, then when we conjecture it to be computable, we typically mean the same thing we mean when we say pi is computable: there exists a fixed length program that could compute it to any desired degree of precision (assuming initial conditions specified to sufficient precision).

Comment author: Manfred 30 December 2010 01:38:09AM 0 points

Continuous quantities are the simplest explanation for the evidence we have - there are some hints that it could be otherwise, but they're only hints.