
SilasBarta comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future

Post author: inklesspen 01 March 2010 02:32AM 11 points


Comment author: SilasBarta 03 March 2010 09:20:25PM 2 points

Did you do the math on this one? Even with only 10% of programs caught in a loop, it would take almost 400 years to get through all programs up to 24 bits long.

We need something faster.

(Do you see now why Hutter hasn't simply run AIXI with your shortcut?)
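
For concreteness, here is one way the figure above can be reconstructed (a back-of-envelope sketch; the one-hour timeout per non-halting program is my assumption, since the comment doesn't spell out its inputs, and time spent on programs that do halt is ignored):

    # Back-of-envelope check of the "almost 400 years" claim (assumptions mine).
    programs = 2 ** 25 - 2             # all bitstrings of length 1..24: sum of 2**n
    loopers = 0.10 * programs          # suppose 10% never halt
    timeout_s = 3600.0                 # assumed one-hour cutoff per non-halting program
    seconds_per_year = 365.25 * 24 * 3600
    years = loopers * timeout_s / seconds_per_year
    print(f"{programs:,} programs -> {years:.0f} years")  # 33,554,430 programs -> 383 years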

Comment author: wnoise 03 March 2010 09:55:55PM 0 points

Of course, in practice many loops can be caught, but the combinatorial explosion really does blow any such technique out of the water.

Comment author: timtyler 03 March 2010 09:32:25PM 0 points

Uh, I was giving a computable algorithm, not a rapid one.

The objection that compression is an uncomputable strategy is a useless one - you just use a computable approximation instead, with no great loss.
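
As an illustration of the kind of computable approximation presumably meant here (the choice of zlib is mine, purely for the sketch): the compressed length of a string is a computable upper bound on its uncomputable Kolmogorov complexity, up to the constant size of the decompressor.

    import zlib

    def complexity_upper_bound(data: bytes) -> int:
        """Computable stand-in for Kolmogorov complexity K(data):
        the compressed length upper-bounds K(data) up to an additive constant."""
        return len(zlib.compress(data, level=9))

    print(complexity_upper_bound(b"ab" * 500))  # highly regular input: far below 1000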

Comment author: SilasBarta 03 March 2010 09:54:01PM *  2 points

Uh, I was giving a computable algorithm, not a rapid one.

But you were implying that the uncomputability is somehow "not a problem" because of a quick fix you gave, when the quick fix actually means waiting at least 400 years -- under unrealistically optimistic assumptions.

The objection that compression is an uncomputable strategy is a useless one - you just use a computable approximation instead, with no great loss.

Yes, I do use a computable approximation, and my computable approximation has already done the work of identifying the important part of the search space (and the structure thereof).

And that's the point -- compression algorithms haven't done so, except to the extent that a programmer has fed them the "insights" (known regularities of the search space) in advance. That doesn't tell you the algorithmic way to find those regularities in the first place.

Comment author: timtyler 03 March 2010 10:10:03PM *  -1 points

Re: "But you were implying that the uncomputability is somehow "not a problem""

That's right - uncomputability is not a problem - you just use a computable compression algorithm instead.

Re: "And that's the point -- compression algorithms haven't done so, except to the extent that a programmer has fed them the "insights" (known regularities of the search space) in advance."

The universe itself exhibits regularities. In particular, sequences generated by small automata are found relatively frequently. This principle is known as Occam's razor. That fact is exploited by general-purpose compressors to compress a wide range of different data types - including many never seen before by the programmers.
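
A toy demonstration of that claim (my own example, not from the thread): a general-purpose compressor shrinks the output of a tiny generating rule far more than random bytes of the same length.

    import os
    import zlib

    regular = bytes(i % 3 for i in range(10_000))  # output of a tiny generating rule
    noise = os.urandom(10_000)                     # incompressible by design

    print(len(zlib.compress(regular)))  # a few dozen bytes: the regularity is found
    print(len(zlib.compress(noise)))    # ~10,000 bytes: nothing to exploit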

Comment author: SilasBarta 03 March 2010 10:16:08PM 0 points

"But you were implying that the uncomputability is somehow "not a problem""

That's right - uncomputability is not a problem - you just use a computable compression algorithm.

You said that it was not a problem with respect to creating superintelligent beings, and I showed that it is.

The universe itself exhibits regularities. ...

Yes, it does. But, again, scientists don't find them by iterating through the set of computable generating functions, starting with the smallest. As I've repeatedly emphasized, that takes too long, which is why you're wrong to generalize compression into a practical, all-encompassing answer to the problem of intelligence.