
SilasBarta comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future

Post author: inklesspen, 01 March 2010 02:32AM


Comment author: SilasBarta 03 March 2010 09:29:00PM 2 points

"That is not a meaningful limitation. There are general purpose universal compressors."

There are frequently useful general purpose compressors that work by anticipating the most common regularities in the set of files typically generated by humans. But they do not, and cannot, iterate through all the short programs that could have generated the data -- it's too time-consuming.
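
To see the contrast concretely, here is a minimal Python sketch (an illustration, not anything from the thread): a stock general-purpose compressor such as zlib exploits the regularities directly, while merely counting the candidate short programs shows why enumerating them is hopeless.

    import zlib

    # Human-generated data is full of the regularities a general-purpose
    # compressor is tuned for.
    data = b"the quick brown fox jumps over the lazy dog " * 20
    print(len(data), "->", len(zlib.compress(data)))  # 880 -> well under 100 bytes

    # The Solomonoff-style ideal: test every program shorter than the data
    # and keep the shortest one that reproduces it. Counting the candidates
    # shows why nobody does this:
    print(f"programs up to {8 * len(data)} bits long: about 2**{8 * len(data)} candidates")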

Comment author: timtyler 03 March 2010 09:40:57PM 0 points

The point was that general purpose compression is possible. Yes, you sacrifice the ability to compress other kinds of data - but those other kinds of data are highly incompressible and close to random - not the kind of data in which most intelligent agents are interested in finding patterns in the first place.
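
That trade-off is easy to demonstrate (a quick illustrative check, with zlib as the general-purpose compressor):

    import os, zlib

    patterned = b"110" * 3000   # the output of a tiny program
    noise = os.urandom(9000)    # high-entropy bytes

    print(len(zlib.compress(patterned)))  # a small fraction of 9000
    print(len(zlib.compress(noise)))      # slightly MORE than 9000

By a pigeonhole argument, no lossless compressor can shrink most possible inputs; a general-purpose compressor gives up exactly the inputs that look like noise.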

Comment author: SilasBarta 03 March 2010 10:00:53PM 0 points

"Yes, you sacrifice the ability to compress other kinds of data - but those other kinds of data are highly incompressible and close to random."

No, they look random and incompressible because effective compression algorithms optimized for this universe can't compress them. But algorithms optimized for other computable universes may regard them as normal and have a good way to compress them.

Which kinds of data (from computable processes) are likely to be observed in this universe? Ay, there's the rub.
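
One way to make that concrete (an invented example, with a pseudorandom generator standing in for the "other universe's" process): a PRNG stream defeats an Earth-tuned compressor, yet an agent that models the generating process can encode it in a few bytes.

    import random, zlib

    rng = random.Random(42)   # stand-in for some other computable process
    data = bytes(rng.randrange(256) for _ in range(10_000))

    print(len(zlib.compress(data)))  # ~10,000: zlib sees only noise

    # Yet the whole stream is determined by (generator, seed=42, length=10000),
    # so a compressor that knows the process needs only a few bytes for it.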

Comment author: timtyler 03 March 2010 10:16:18PM 0 points

Re: "they look random and incompressible because effective compression algorithms optimized for this universe can't compress them"

Compressing sequences from this universe is good enough for me.

Re: "Which kinds of data (from computable processes) are likely to be observed in this universe? Ay, there's the rub."

Not really - there are well-known results about that - see:

http://en.wikipedia.org/wiki/Occam's_razor

http://www.wisegeek.com/what-is-solomonoff-induction.htm
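
Concretely, the well-known result is Solomonoff's universal prior, which makes Occam's razor quantitative. In one standard formulation,

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where U is a universal prefix machine, the sum ranges over programs p whose output begins with x, and |p| is the length of p in bits. Shorter programs get exponentially more weight - the formal version of "simple explanations are likely".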

Comment author: SilasBarta 03 March 2010 10:20:31PM 0 points

"Compressing sequences from this universe is good enough for me."

Except that the problem you were attacking at the beginning of this thread was general intelligence, which you claimed could be solved just by good-enough compression. But that requires knowing which parts of the search space in this universe are unlikely, and you haven't shown how to capture that knowledge algorithmically.

"Which kinds of data (from computable processes) are likely to be observed in this universe? Ay, there's the rub."

"Not really - there are well-known results about that - see: ..."

Yes, but as I keep trying to say, those results are far from enough to get something workable, and they are not the methodology behind general-purpose compression programs.

Comment author: timtyler 03 March 2010 10:39:22PM 0 points

Arithmetic coding, Huffman coding, Lempel-Ziv compression, etc. are all excellent at compressing sequences produced by small programs. Things like:

1010101010101010 110110110110110110 1011011101111011111

...etc.

Those compressors (crudely) implement a computable approximation of Solomonoff induction without iterating through programs that generate the output. How they work is not very relevant here - the point is that they act as general-purpose compressors, and compress a great range of real-world data types.
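
As a quick check of that claim on the kinds of sequences above, with zlib standing in for the Lempel-Ziv family (illustrative, not a rigorous benchmark):

    import zlib

    examples = [
        b"10" * 500,   # 101010...
        b"110" * 500,  # 110110110...
        "".join("1" * k + "0" for k in range(1, 100)).encode(),  # 10 110 1110 ...
    ]
    for s in examples:
        print(len(s), "->", len(zlib.compress(s)))
    # Each sequence shrinks to a small fraction of its length: the compressor
    # has, in effect, found the short pattern that generates it.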

The complaint that we don't know what types of data are found in this universe is just not applicable - we do, in fact, know a considerable amount about that, and that is why we can build general-purpose compressors.