timtyler comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

11 Post author: inklesspen 01 March 2010 02:32AM


Comment author: SilasBarta 03 March 2010 06:04:31PM *  3 points [-]

Flight has some abstract principles that don't depend on all the messy biological details of cells, bones and feathers. It will - pretty obviously IMO - be much the same for machine intelligence.

I disagree that it is so obvious. Much of what we call "intelligence" in humans and other animals is actually tacit knowledge about a specific environment. That knowledge accumulated gradually over billions of years, and it works by way of non-modular systems that improved stepwise and had to retain the relevant functionality at each step.

This is why you barely think about bipedal walking, and discovered it on your own, yet even now very few people can explain how it works. It's also why learning, for humans, largely consists of reducing a problem to something for which we have native hardware.

So intelligence, if it means successful, purposeful manipulation of the environment, does rely heavily on the particulars of our bodies, in a way that powered flight does not.

If we had good stream compressors we would be able to predict the future consequences of actions - a key ability in shaping the future. You don't need to scan a brain to build a compressor. That is a silly approach to the problem that pushes the solution many decades into the future. Compression is "just" another computer science problem - much like searching or sorting.

Yes, it's another CS problem, but not like searching or sorting. Those are computable, while (general) compression isn't: finding the shortest program that generates given data amounts to computing Kolmogorov complexity, which is uncomputable. Not surprisingly, the optimal intelligence Hutter presents (AIXI) is uncomputable, as is every other method presented in every research paper that purports to be a general intelligence.

Now, you can make approximations to the ideal, perfect compressor, but that inevitably requires making decisions about what parts of the search space can be ignored at low enough cost -- which itself requires insight into the structure of the search space, the very thing you were supposed to be automating!

Attempts to reduce intelligence to compression run up against the same limits that compression does: you can be good at compressing some kinds of data only by sacrificing the ability to compress other kinds.

With that said, if you can make a computable, general compressor that identifies regularities in the environment many orders of magnitude faster than evolution, then you will have made some progress.
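That tradeoff is visible even in an off-the-shelf compressor. A minimal sketch using Python's zlib, whose built-in model of repetition (chosen by its programmers in advance) helps on structured input but not on random input:

```python
import os
import zlib

structured = b"abc" * 10000      # highly regular: matches zlib's model
random_ish = os.urandom(30000)   # no exploitable regularity

c_structured = zlib.compress(structured)
c_random = zlib.compress(random_ish)

# The regular input shrinks to a tiny fraction of its size,
# while the random input typically grows slightly (format overhead):
print(len(structured), "->", len(c_structured))
print(len(random_ish), "->", len(c_random))
```

The regularities zlib exploits (byte-level repeats) were fed to it by its designers; it does nothing for data whose structure lies elsewhere, which is the point above.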

Comment author: timtyler 03 March 2010 08:44:07PM *  0 points [-]

What's with complaining that compressors are uncomputable?!? Just let your search through the space of possible programs skip to the next candidate whenever one spends more than an hour executing. Then you have a computable compressor. That ignores a few especially tedious and boring areas of the search space - but so what?!? Those areas can be binned with no great loss.
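The skip-on-timeout idea can be sketched as follows; this is a toy illustration, with a step budget standing in for the one-hour wall clock and hypothetical generator "programs" standing in for a real program enumeration:

```python
def run_with_budget(program, data, budget):
    """Run a step-counting generator program; return its final output,
    or None if it exhausts the budget (treated as non-halting)."""
    gen = program(data)
    out = None
    for _ in range(budget):
        try:
            out = next(gen)
        except StopIteration:
            return out       # halted within budget
    return None              # budget exceeded: bin this program

# Two toy candidates: one halts quickly, one loops forever.
def halts(data):
    yield sum(data)

def loops(data):
    while True:
        yield 0

def bounded_search(programs, data, budget=1000):
    """Return the first program (and its output) that halts in budget."""
    for p in programs:
        result = run_with_budget(p, data, budget)
        if result is not None:
            return p.__name__, result
    return None

print(bounded_search([loops, halts], [1, 2, 3]))  # skips `loops`
```

This makes the search computable, as claimed, at the cost of discarding any program that genuinely needs more than the budget - but it says nothing about speed.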

Comment author: SilasBarta 03 March 2010 09:20:25PM 2 points [-]

Did you do the math on this one? Even with only 10% of programs caught in a loop, it would take almost 400 years to get through all programs up to 24 bits long.
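The arithmetic behind that figure, taking the comment's assumptions (a one-hour timeout per stuck program, 10% of programs stuck):

```python
# Bit strings of length 1..24: 2^1 + 2^2 + ... + 2^24 = 2^25 - 2 programs.
total = 2 ** 25 - 2            # 33,554,430 candidates
stuck = total * 0.10           # 10% hit the one-hour timeout
hours_wasted = stuck * 1.0     # one hour lost per stuck program
years = hours_wasted / (24 * 365)
print(round(years))            # roughly 383 years - "almost 400"
```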

We need something faster.

(Do you see now why Hutter hasn't simply run AIXI with your shortcut?)

Comment author: wnoise 03 March 2010 09:55:55PM 0 points [-]

Of course, in practice many loops can be caught, but the combinatorial explosion really does blow any such technique out of the water.

Comment author: timtyler 03 March 2010 09:32:25PM 0 points [-]

Uh, I was giving a computable algorithm, not a rapid one.

The objection that compression is an uncomputable strategy is a useless one - you just use a computable approximation instead - with no great loss.

Comment author: SilasBarta 03 March 2010 09:54:01PM *  2 points [-]

Uh, I was giving a computable algorithm, not a rapid one.

But you were implying that the uncomputability is somehow "not a problem" because of a quick fix you gave, when the quick fix actually means waiting at least 400 years -- under unrealistically optimistic assumptions.

The objection that compression is an uncomputable strategy is a useless one - you just use a computable approximation instead - with no great loss.

Yes, I do use a computable approximation, and my computable approximation has already done the work of identifying the important part of the search space (and the structure thereof).

And that's the point -- compression algorithms haven't done so, except to the extent that a programmer has fed them the "insights" (known regularities of the search space) in advance. That doesn't give you an algorithmic way to find those regularities in the first place.

Comment author: timtyler 03 March 2010 10:10:03PM *  -1 points [-]

Re: "But you were implying that the uncomputability is somehow "not a problem""

That's right - uncomputability is not a problem - you just use a computable compression algorithm instead.

Re: "And that's the point -- compression algorithms haven't done so, except to the extent that a programmer has fed them the "insights" (known regularities of the search space) in advance."

The universe itself exhibits regularities. In particular, sequences generated by small automata turn up relatively frequently; this principle is known as Occam's razor. General-purpose compressors exploit that fact to compress a wide range of different data types - including many never seen before by their programmers.

Comment author: SilasBarta 03 March 2010 10:16:08PM 0 points [-]

"But you were implying that the uncomputability is somehow "not a problem""

That's right - uncomputability is not a problem - you just use a computable compression algorithm.

You said that it was not a problem with respect to creating superintelligent beings, and I showed that it is.

The universe itself exhibits regularities. ...

Yes, it does. But, again, scientists don't find those regularities by iterating through the set of computable generating functions, starting with the smallest. As I've repeatedly emphasized, that takes far too long, which is why you're wrong to generalize compression into a practical, all-encompassing answer to the problem of intelligence.