timtyler comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

11 Post author: inklesspen 01 March 2010 02:32AM




Comment author: SilasBarta 03 March 2010 06:04:31PM 3 points

Flight has some abstract principles that don't depend on all the messy biological details of cells, bones and feathers. It will - pretty obviously IMO - be much the same for machine intelligence.

I disagree that it is so obvious. Much of what we call "intelligence" in humans and other animals is actually tacit knowledge about a specific environment. This knowledge gradually accumulated over billions of years, and it works due to immodular systems that improved stepwise and had to retain relevant functionality at each step.

This is why you barely have to think about bipedal walking, and discovered it on your own, yet even now very few people can explain how it works. It's also why learning, for humans, largely consists of reducing a problem to something for which we have native hardware.

So intelligence, if it means successful, purposeful manipulation of the environment, does rely heavily on the particulars of our bodies, in a way that powered flight does not.

If we had good stream compressors we would be able to predict the future consequences of actions - a key ability in shaping the future. You don't need to scan a brain to build a compressor. That is a silly approach to the problem that pushes the solution many decades into the future. Compression is "just" another computer science problem - much like searching or sorting.

Yes, it's another CS problem, but not like searching or sorting. Those are computable, while optimal (general) compression isn't: finding the shortest program that outputs a given string is uncomputable. Not surprisingly, the optimal intelligence Hutter presents (AIXI) is uncomputable, as is every other method presented in every research paper that purports to be a general intelligence.

Now, you can make approximations to the ideal, perfect compressor, but that inevitably requires making decisions about what parts of the search space can be ignored at low enough cost -- which itself requires insight into the structure of the search space, the very thing you were supposed to be automating!

Attempts to reduce intelligence to compression butt up against the same limits that compression does: you can be good at compressing some kinds of data only if you sacrifice the ability to compress other kinds of data.
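(To see this tradeoff concretely, here is a quick sketch using Python's standard-library zlib, with fixed-seed pseudorandom bytes standing in for "other kinds of data" — an illustration, not a claim about any particular compressor design.)

```python
import random
import zlib

# Structured data: the kind of regularity a general-purpose compressor targets.
structured = b"10" * 5000          # 10000 bytes, period-2 pattern

# "Random-looking" data: fixed-seed pseudorandom bytes, so the demo is deterministic.
rng = random.Random(0)
noise = bytes(rng.randrange(256) for _ in range(10000))

# The compressor shrinks the structured input dramatically, but the
# pseudorandom input comes out no smaller (in fact slightly larger,
# since incompressible data is stored with a few bytes of overhead).
print(len(zlib.compress(structured)))  # far below 10000
print(len(zlib.compress(noise)))       # at or slightly above 10000
```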

With that said, if you can make a computable, general compressor that identifies regularities in the environment many orders of magnitude faster than evolution, then you will have made some progress.

Comment author: timtyler 03 March 2010 08:46:14PM 0 points

Re: "Attempts to reduce intelligence to compression butt up against the same limits that compression does: you can be good at compressing some kinds of data only if you sacrifice the ability to compress other kinds of data."

That is not a meaningful limitation. There are general purpose universal compressors. It is part of the structure of reality that sequences generated by short programs are more commonly observed. That's part of the point of using a compressor - it is an automated way of applying Occam's razor.
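(As a toy illustration of "compressor as automated Occam's razor": the sketch below is my own construction, not anyone's actual proposal. It predicts the next bit of a binary string by weighting every repeating-pattern hypothesis by its length — a crude, computable stand-in for weighting programs by length, with the hypothesis class and the 2**-(2*p) prior being arbitrary choices for the demo.)

```python
from fractions import Fraction

def occam_predict(observed: str, max_period: int = 8) -> str:
    """Predict the next bit of a binary string by weighting every
    repeating-pattern hypothesis of period p with prior 2**-(2*p),
    so shorter (simpler) patterns dominate the vote."""
    votes = {"0": Fraction(0), "1": Fraction(0)}
    for p in range(1, max_period + 1):
        for n in range(2**p):
            pattern = format(n, f"0{p}b")
            # A hypothesis is consistent if repeating it reproduces the data.
            if all(observed[i] == pattern[i % p] for i in range(len(observed))):
                votes[pattern[len(observed) % p]] += Fraction(1, 4**p)
    return max(votes, key=votes.get)

# The short hypothesis "10" dominates longer consistent ones like "1010".
print(occam_predict("101010"))
```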

Comment author: SilasBarta 03 March 2010 09:29:00PM 2 points

That is not a meaningful limitation. There are general purpose universal compressors.

There are frequently useful general purpose compressors that work by anticipating the most common regularities in the set of files typically generated by humans. But they do not, and cannot, iterate through all the short programs that could have generated the data -- it's too time-consuming.
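(Back-of-the-envelope arithmetic makes the "too time-consuming" point vivid. The one-candidate-per-nanosecond rate below is a hypothetical figure chosen for illustration.)

```python
# Rough cost of brute-force program enumeration: there are 2**n
# candidate programs of n bits, before even running any of them.
RATE = 10**9                      # hypothetical: one candidate per nanosecond
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_enumerate(n_bits: int) -> float:
    """Years needed merely to enumerate all n-bit programs at RATE/sec."""
    return 2**n_bits / RATE / SECONDS_PER_YEAR

# 40 bits is quick; 80 bits already takes tens of millions of years.
for n in (40, 60, 80):
    print(n, years_to_enumerate(n))
```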

Comment author: timtyler 03 March 2010 09:40:57PM 0 points

The point was that general purpose compression is possible. Yes, you sacrifice the ability to compress other kinds of data - but those other kinds of data are highly incompressible and close to random - not the kind of data in which most intelligent agents are interested in finding patterns in the first place.

Comment author: SilasBarta 03 March 2010 10:00:53PM 0 points

Yes, you sacrifice the ability to compress other kinds of data - but those other kinds of data are highly incompressible and close to random.

No, they look random and incompressible because effective compression algorithms optimized for this universe can't compress them. But algorithms optimized for other computable universes may regard them as normal and have a good way to compress them.

Which kinds of data (from computable processes) are likely to be observed in this universe? Ay, there's the rub.

Comment author: timtyler 03 March 2010 10:16:18PM 0 points

Re: "they look random and incompressible because effective compression algorithms optimized for this universe can't compress them"

Compressing sequences from this universe is good enough for me.

Re: "Which kinds of data (from computable processes) are likely to be observed in this universe? Ay, there's the rub."

Not really - there are well-known results about that - see:

http://en.wikipedia.org/wiki/Occam's_razor

http://www.wisegeek.com/what-is-solomonoff-induction.htm

Comment author: SilasBarta 03 March 2010 10:20:31PM 0 points

Compressing sequences from this universe is good enough for me.

Except that the problem you were attacking at the beginning of this thread was general intelligence, which you claimed is solvable just by good enough compression. But that requires knowing which parts of the search space in this universe are unlikely, and you haven't shown how to turn that knowledge into an algorithm.

"Which kinds of data (from computable processes) are likely to be observed in this universe? Ay, there's the rub."

Not really - there are well-known results about that - see: ...

Yes, but as I keep trying to say, those results are far from enough to get something workable, and they are not the methodology behind general-purpose compression programs.

Comment author: timtyler 03 March 2010 10:39:22PM 0 points

Arithmetic coding, Huffman coding, Lempel-Ziv compression, etc. are all excellent at compressing sequences produced by small programs. Things like:

1010101010101010
110110110110110110
1011011101111011111

...etc.

Those compressors (crudely) implement a computable approximation of Solomonoff induction without iterating through programs that generate the output. How they work is not very relevant here - the point is that they act as general-purpose compressors - and compress a great range of real world data types.
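(One can check this claim directly with zlib, standing in here for LZ-style compressors generally. The sequences below mirror the periodic examples above, extended so the regularity dominates the compressor's fixed overhead.)

```python
import zlib

sequences = [
    b"10" * 2000,                                   # 1010... pattern
    b"110" * 2000,                                  # 110110... pattern
    b"0".join(b"1" * k for k in range(1, 200)),     # 1 0 11 0 111 0 1111 ... pattern
]

# Each compresses to a small fraction of its original size, even though
# zlib never enumerates the tiny generating program behind the data.
for s in sequences:
    c = zlib.compress(s, 9)
    print(len(s), len(c))
```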

The complaint that we don't know what types of data are in the universe is just not applicable - we do, in fact, know a considerable amount about that - and that is why we can build general purpose compressors.