Eugine_Nier comments on Rationality Quotes August 2012 - Less Wrong

Post author: Alejandro1 03 August 2012 03:33PM




Comment author: Eugine_Nier 02 August 2012 09:04:20PM 8 points [-]

[M]uch mistaken thinking about society could be eliminated by the most straightforward application of the pigeonhole principle: you can't fit more pigeons into your pigeon coop than you have holes to put them in. Even if you were telepathic, you could not learn all of what is going on in everybody's head because there is no room to fit all that information in yours. If I could completely scan 1,000 brains and had some machine to copy the contents of those into mine, I could only learn at most about a thousandth of the information stored in those brains, and then only at the cost of forgetting all else I had known. That's a theoretical optimum; any such real-world transfer process, such as reading and writing an e-mail or a book, or tutoring, or using or influencing a market price, will pick up only a small fraction of even the theoretically acquirable knowledge or preferences in the mind(s) at the other end of said process, or if you prefer of the information stored by those brain(s). Of course, one can argue that some kinds of knowledge -- like the kinds you and I know? -- are vastly more important than others, but such a claim is usually more snobbery than fact. Furthermore, a society with more such computational and mental diversity is more productive, because specialized algorithms, mental processes, and skills are generally far more productive than generalized ones. As Friedrich Hayek pointed out, our mutual inability to understand a very high fraction of what others know has profound implications for our economic and political institutions.

-- Nick Szabo
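A minimal sketch of the pigeonhole argument the quote leans on: any map from a larger set of distinct contents into a smaller store must merge some of them, so distinctions are necessarily lost. (The numbers here are arbitrary stand-ins, not estimates of brain capacity.)

```python
# Pigeonhole: mapping 1000 distinct "brain contents" into a store with
# room for only 10 distinct states must collapse some of them together.
brains = range(1000)                      # 1000 distinct contents
capacity = 10                             # distinct states the receiver can hold
stored = {b % capacity for b in brains}   # one arbitrary storage scheme
assert len(stored) <= capacity            # at most 10 distinct states survive...
assert len(stored) < len(brains)          # ...so distinctions were lost
```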

Comment author: bramflakes 02 August 2012 10:45:16PM 6 points [-]

What about compression?

Comment author: Eugine_Nier 02 August 2012 11:42:23PM 5 points [-]

Do you mean lossy or lossless compression? If you mean lossy compression then that is precisely Szabo's point.

On the other hand, if you mean lossless: even if you had some way to losslessly compress a brain, it would only give you an advantage if you were the only one with the scheme, since otherwise other people would apply it to their own brains and use the freed capacity to store more information.

Comment author: VKS 02 August 2012 11:51:41PM 8 points [-]

You'll probably have more success losslessly compressing two brains than losslessly compressing one.
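This can be illustrated with an ordinary compressor: two inputs that share most of their content compress better together than separately, because the shared part only has to be stored once. (zlib over random padding is just a stand-in for "two mostly-similar brains"; the sizes are arbitrary.)

```python
import os
import zlib

# Two "brains" that share most of their contents, plus a little
# individual variation each.
shared = os.urandom(20_000)
brain_a = shared + os.urandom(500)
brain_b = shared + os.urandom(500)

separate = len(zlib.compress(brain_a)) + len(zlib.compress(brain_b))
together = len(zlib.compress(brain_a + brain_b))

# The joint encoding stores the shared 20 KB only once.
assert together < separate
```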

Comment author: [deleted] 03 August 2012 07:29:00AM 1 point [-]

Still, I don't think you could compress the content of 1000 brains into one. (And I'm not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)

Comment author: VKS 03 August 2012 09:46:47AM 0 points [-]

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

Getting 1000 brains in here would take some creativity, but I'm sure I can figure something out...

But this is all rather facetious. Breaking the quote's point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed as them.

Which I can't.

For now.

Comment author: maia 03 August 2012 07:41:58PM 4 points [-]

a lossless copy of itself and itself two words ago

But our memories discard huge amounts of information all the time. Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Comment author: VKS 03 August 2012 10:15:17PM *  0 points [-]

Certainly. I am suggesting that over sufficiently short timescales, though, you can deduce the previous structure from the current one. Maybe I should have said "epsilon" instead of "two words".

Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

So, depending on your choice of two words, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer.

Actually, so long as the brain can be treated as operating independently of the outside world (which, given an appropriately chosen small interval of time, makes some amount of sense), a complete description at time t will imply a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too.

So I've made another error: I should have said that my brain contains a lossless copy of itself and itself two words later. (where "two words" = "epsilon")
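The determinism argument, in miniature: for a closed deterministic system, the state at time t together with the update rule fixes the state at t + δ, so a lossless description of the earlier state is implicitly a lossless description of the later one. (The update rule below is a hypothetical toy, not a model of a brain.)

```python
def step(state: int) -> int:
    """Toy deterministic update rule (a linear congruential step)."""
    return (1103515245 * state + 12345) % 2**31

s_t = 42            # complete description at time t
s_next = step(s_t)  # state at time t + delta

# Anyone holding s_t and the rule can recover s_next exactly,
# so describing s_t already describes s_next.
assert step(s_t) == s_next
```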

Comment author: Eugine_Nier 04 August 2012 08:17:57PM 0 points [-]

It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

See the pigeon-hole argument in the original quote.

Comment author: RichardKennaway 03 August 2012 12:26:11PM 4 points [-]

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

I'd argue that your brain doesn't even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo's point about the limits of being able to model other people applies just as strongly to modelling oneself. I don't, and cannot, know all about myself -- past, current, or future, and that must have substantial implications about something or other that this lunch hour is too small to contain.

How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done -- for example, it is possible to write a program that prints out an exact copy of itself (without having access to the file that contains it), the proof of Gödel's theorem involves constructing a proposition that talks about itself, and TDT depends on agents being able to reason about their own and other agents' source codes. Are there mathematical limits to this?
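The self-printing program mentioned above does exist; one standard construction (a quine) stores a template of its own text and prints the template applied to itself:

```python
# A quine: this two-line program's output is exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Gödel's self-referential proposition and TDT-style reasoning about source code rely on the same diagonal trick: a description containing a placeholder that gets filled with a description of itself.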

Comment author: VKS 03 August 2012 10:27:05PM 0 points [-]

I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.

Comment author: mfb 04 August 2012 10:10:36PM *  0 points [-]

If you can scan it, maybe you can simulate it? And if you can simulate one, wait some years and you can simulate 1000, probably connected in some way to form a single "thinking system".

Comment author: Eugine_Nier 05 August 2012 06:07:26PM 2 points [-]

But not on your own brain.