bramflakes comments on Rationality Quotes August 2012 - Less Wrong

Post author: Alejandro1 03 August 2012 03:33PM




Comment author: bramflakes 02 August 2012 10:45:16PM 6 points

What about compression?

Comment author: Eugine_Nier 02 August 2012 11:42:23PM 5 points

Do you mean lossy or lossless compression? If you mean lossy compression, then that is precisely Szabo's point.

If, on the other hand, you mean lossless compression: a scheme for losslessly compressing a brain would only gain you anything if you were the only one who had it, since otherwise other people would apply it to their own brains and use the freed space to store more information.

Comment author: VKS 02 August 2012 11:51:41PM 8 points

You'll probably have more success losslessly compressing two brains than losslessly compressing one.
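The intuition here, that two similar brains share structure a lossless compressor only needs to store once, can be sketched with zlib. The byte strings below are, of course, hypothetical stand-ins for brains, chosen only to share most of their content:

```python
import zlib

# Hypothetical stand-ins for two similar "brains": long byte strings
# that share almost all of their structure.
brain_a = b"memories: childhood, language, arithmetic, faces, songs " * 50
brain_b = brain_a.replace(b"songs", b"poems")

# Compressed separately, the shared structure is paid for twice;
# compressed together, it is stored roughly once.
separate = len(zlib.compress(brain_a)) + len(zlib.compress(brain_b))
together = len(zlib.compress(brain_a + brain_b))
assert together < separate
```

The combined stream lets the compressor encode the second brain largely as back-references into the first, which is why two brains compress better jointly than one compresses alone.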

Comment author: [deleted] 03 August 2012 07:29:00AM 1 point

Still, I don't think you could compress the content of 1000 brains into one. (And I'm not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)

Comment author: VKS 03 August 2012 09:46:47AM 0 points

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

Getting 1000 brains in here would take some creativity, but I'm sure I can figure something out...

But this is all rather facetious. Breaking the quote's point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed they do.

Which I can't.

For now.

Comment author: maia 03 August 2012 07:41:58PM 4 points

a lossless copy of itself and itself two words ago

But our memories discard huge amounts of information all the time. Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Comment author: VKS 03 August 2012 10:15:17PM 0 points

Certainly. I am suggesting, though, that over sufficiently short timescales you can deduce the previous structure from the current one. Maybe I should have said "epsilon" instead of "two words".

Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

So, depending on your choice of two words, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer.

Actually, so long as the brain can be treated as operating independently of the outside world (which, for an appropriately chosen small interval of time, makes some amount of sense), a complete description at time t will imply a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too.

So I've made another error: I should have said that my brain contains a lossless copy of itself and itself two words later. (where "two words" = "epsilon")
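The "a description at time t implies a description at t + δ" step holds for any closed deterministic system. A toy illustration (an elementary cellular automaton standing in for a brain, not a claim about actual neurons):

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton
    on a ring (wrap-around boundary): each cell's next value is the
    bit of `rule` indexed by its 3-cell neighborhood."""
    n = len(cells)
    return tuple(
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

state_t = (0, 0, 0, 1, 0, 1, 1, 0)  # toy "brain" state at time t
state_t_plus = step(state_t)        # state at t + delta: fully determined

# Storing state_t plus the update rule is already a lossless
# description of state_t_plus; no extra bits are needed.
assert step(state_t) == state_t_plus
```

Since the later state is a pure function of the earlier one, a description of the system at t is simultaneously a description of it at t + δ, which is the sense in which one brain-state can "contain" another.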

Comment author: Eugine_Nier 04 August 2012 08:17:57PM 0 points

It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

See the pigeon-hole argument in the original quote.
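The pigeonhole argument referred to here can be checked by counting: there are more n-bit strings than there are strictly shorter bit strings, so no lossless (injective) compressor can shrink every input. A minimal sketch:

```python
# There are 2**n bit strings of length n, but only 2**n - 1 bit strings
# of length strictly less than n (lengths 0 through n-1 combined).
# A lossless compressor is injective, so it cannot map every n-bit
# input to a shorter output: at least one input must stay the same
# length or grow.
n = 8
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))
print(inputs, shorter_outputs)  # 256 255: one input has nowhere to go
assert shorter_outputs == inputs - 1
```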

Comment author: RichardKennaway 03 August 2012 12:26:11PM 4 points

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

I'd argue that your brain doesn't even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo's point about the limits of being able to model other people applies just as strongly to modelling oneself. I don't, and cannot, know all about myself -- past, current, or future, and that must have substantial implications about something or other that this lunch hour is too small to contain.

How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done -- for example, it is possible to write a program that prints out an exact copy of itself (without having access to the file that contains it), the proof of Gödel's theorem involves constructing a proposition that talks about itself, and TDT depends on agents being able to reason about their own and other agents' source codes. Are there mathematical limits to this?
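The self-printing program mentioned here is a quine, and a standard two-line Python example follows. Note that it contains no comments: a file reproduces itself exactly only if it consists of precisely these two lines.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Saved to a file containing exactly those two lines and run, the program prints that file's contents verbatim: `%r` re-inserts the string's own repr, and `%%` becomes the `%` of the second line. The general guarantee that such self-referential programs exist is Kleene's recursion theorem.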

Comment author: VKS 03 August 2012 10:27:05PM 0 points

I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.