Emile comments on The Importance of Self-Doubt - Less Wrong

23 Post author: multifoliaterose 19 August 2010 10:47PM




Comment author: JamesAndrix 23 August 2010 05:38:43AM 9 points [-]

How would you address this?

http://scienceblogs.com/pharyngula/2010/08/kurzweil_still_doesnt_understa.php

It seems to me like PZ Myers really doesn't understand information theory. He's attacking Kurzweil and calling him a kook, initially over a relatively straightforward complexity estimate.

And I'm pretty confident that Myers is wrong on this, unless there is another information-rich source of inheritance besides DNA, which Myers knows about but Kurzweil and I do not.

This looks to me like a popular science blogger doing huge PR damage to everything singularity-related, and being wrong about it, even if he is later convinced of this point.

I don't see how to avoid this short of just holding back all claims which seem exceptional and that some 'reasonable' person might fail to understand and see as a sign of cultishness. If we can't make claims as basic as the design of the brain being in the genome, then we may as well just remain silent.

But then we wouldn't find out if we're wrong, and we're rationalists.

Comment author: Emile 23 August 2010 08:31:54AM *  4 points [-]

It seems to me like PZ Myers really doesn't understand information theory. He's attacking Kurzweil and calling him a kook, initially over a relatively straightforward complexity estimate.

I see it that way too. The DNA can give us an upper bound on the information needed to create a human brain, but PZ Myers reads that as "Kurzweil is saying we will be able to take a strand of DNA and build a brain from that in the next 10 years!", and then proceeds to attack that straw man.

This, however:

His timeline is absurd. I'm a developmental neuroscientist; I have a very good idea of the immensity of what we don't understand about how the brain works. No one with any knowledge of the field is claiming that we'll understand how the brain works within 10 years. And if we don't understand all but a fraction of the functionality of the brain, that makes reverse engineering extremely difficult.

... I am quite inclined to trust. I would trust it more if it weren't followed by statements about information theory that seem wrong to me.

Looking at the comments is depressing. I wish there were some "sane" way for the two communities (readers of PZ Myers and "singularitarians") to engage without it degenerating into name-calling.

Brian: "We should unite against our common enemy!"

Others: "The Judean People's Front?"

Brian: "No! The Romans!"

Though there are software solutions for that (takeonit and other things that have been discussed here), it wouldn't hurt either if the "leaders" (PZ Myers, Kurzweil, etc.) were a bit more responsible and made a genuine effort to acknowledge the other's points when they are strong, so they could converge, or at least agree to disagree on something narrow.

But nooo, it's much more fun to get angry, and it gets you more traffic too!

Comment author: RobinZ 23 August 2010 01:09:16PM 0 points [-]

The DNA can give us an upper bound on the information needed to create a human brain [...]

Why do you say this? If humans were designed by human engineers, the 'blueprints' would actually be complete blueprints, sufficient unto the task of determining the final organism ... but they weren't. There's no particular reason to doubt that a significant amount of the final data is encoded in the gestational environment.

Comment author: Emile 23 August 2010 02:20:28PM 4 points [-]

I'm not sure what you mean by "complete blueprints" - I agree that the DNA isn't a complete blueprint, and that an alien civilization with a different chemistry would (probably) find it impossible to rebuild a human if it were just given its DNA. The gestational environment is essential; I just don't think it encodes much data on the actual working of the brain.

It seems to me that the interaction between the baby and the gestational environment is relatively simple, at least compared to organ development and differentiation. A lot of things - hormones, nutrients - are essential for it to go right, but 1) I don't see a lot of information transfer in there ("making the brain work a certain way" as opposed to "making the brain work, period"), and 2) a lot of the information on how that works is probably encoded in the DNA too.

I would say that the important bits that may not be in the DNA (or in mitochondrial DNA) are the DNA interpretation system (transcription, translation).

Comment author: RobinZ 23 August 2010 03:23:05PM 0 points [-]

That's a strong point, but I think it's still worth bearing in mind that this subject is P. Z. Myers' actual research focus: developmental biology. It appears to me that Kurzweil should be getting Myers' help revising his 50 MB estimate*, not dismissing Myers' arguments as misinformed.

Yes, Myers made a mistake in responding to a summary secondhand account rather than Kurzweil's actual position, but Kurzweil is making a mistake if he's ignoring expert opinion on a subject directly relating to his thesis.

* By the way: 50 MB? That's smaller than the latest version of gcc! If that's your complexity estimate, the complexity of the brain could be dominated by the complexity of the gestational environment!

Comment author: Emile 23 August 2010 04:02:08PM 1 point [-]

I agree that Kurzweil could have acknowledged P. Z. Myers' expertise a bit more, especially the "nobody in my field expects a brain simulation in the next ten years" bit.

50 MB - that's still a hefty amount of code, especially if it's 50 MB of compiled code and not 50 MB of source code (comparing the size of the source code to the size of the compressed DNA looks fishy to me, but I'm not sure Kurzweil has actually been doing that - he's just been saying "it doesn't require trillions of lines of code").

Is the size of gcc the source code or the compiled version? I didn't see that info on Wikipedia, and don't have gcc on this machine.

Comment author: timtyler 23 August 2010 05:38:28PM 2 points [-]

As I see it, Myers delivered a totally misguided rant. When his mistakes were exposed he failed to apologise. Obviously, there is no such thing as bad publicity.

Comment author: RobinZ 23 August 2010 04:09:34PM 1 point [-]

I'm looking at gcc-4.5.0.tar.gz.

Comment author: Emile 23 August 2010 04:32:27PM 2 points [-]

That includes the source code, the binaries, the documentation, the unit tests, changelogs ... I'm not surprised it's pretty big!

I consider it pretty likely that it's possible to program a human-like intelligence with a compressed source code of less than 50 MB.

However, I'm much less confident that the source code of the first actual human-like intelligence coded by humans (if there is one) will be that size.

Comment author: Perplexed 23 August 2010 01:45:31PM 6 points [-]

There's no particular reason to doubt that a significant amount of the final data is encoded in the gestational environment.

To the contrary, there is every reason to doubt that. We already know that important pieces of the gestational environment (the genetic code itself, core metabolism, etc.) are encoded in the genome. By contrast, the amount of epigenetic information that we know of is minuscule. It is, of course, likely that we will discover more, but it is very unlikely that we will discover much more. The reason for this skepticism is that we don't know of any reliable epigenetic means of transmitting generic information from generation to generation. And the epigenetic inheritance mechanisms that we do understand all require hundreds of times as much genetic information to specify the machinery as the amount of epigenetic information that the machinery can transmit.

To my mind, it is very clear that (on this narrow point) Kurzweil was right and PZ wrong: The Shannon information content of the genome places a tight upper bound on the algorithmic (i.e. Kolmogorov) information content of the embryonic brain. Admittedly, when we do finally construct an AI, it may take it 25 years to get through graduate school, and it may have to read thru several hundred Wikipedia equivalents to get there, but I am very confident that specifying the process for generating the structure and interconnect of the embryonic AI brain will take well under 7 billion bits.
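The compression argument here can be made concrete: the output length of any real compressor is, by definition, an upper bound on Kolmogorov complexity (the compressor plus its output reconstruct the original). A minimal Python sketch using zlib on a synthetic uniform-random base sequence; note the sequence, its length, and the use of zlib are illustrative assumptions, and real genomes are highly repetitive, so they compress much further than this worst case:

```python
import random
import zlib

random.seed(0)
# Synthetic "genome": one million bases drawn uniformly from {A, C, G, T},
# so the Shannon content is 2 bits per base.
genome = "".join(random.choice("ACGT") for _ in range(1_000_000))

raw_bits = 2 * len(genome)                      # information-theoretic content
compressed = zlib.compress(genome.encode(), 9)  # any concrete compressor works here
compressed_bits = 8 * len(compressed)           # upper-bounds K(genome)
```

Even on incompressible random data, zlib lands near the 2-bits-per-base entropy floor, well under the naive 8 bits per character, which is the sense in which a compressed genome size caps the algorithmic information of anything the genome alone specifies.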

Comment author: timtyler 23 August 2010 05:08:44PM *  1 point [-]

To my mind, it is very clear that (on this narrow point) Kurzweil was right and PZ wrong: The Shannon information content of the genome places a tight upper bound on the algorithmic (i.e. Kolmogorov) information content of the embryonic brain.

I think you may have missed my devastating analysis of this issue a couple of years back:

"So, who is right? Does the brain's design fit into the genome? - or not?

The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.

We can safely ignore the contribution of cytoplasmic inheritance - however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However, there is a well-established mechanism by which physical law may do just that - an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves encoding the design of the brains of intelligent agents into the laws of physics, then so be it. There are plenty of apparently arbitrary constants in physics where such information could conceivably be encoded: the fine structure constant, the cosmological constant, Planck's constant - and so on.

At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us."

Comment author: Perplexed 23 August 2010 06:24:06PM 3 points [-]

You suggest that the human brain might have a high Kolmogorov complexity, the information for which is encoded, not in the human genome (which contains a mere 7 gigabits of information), but rather in the laws of physics, which contain arbitrarily large amounts of information, encoded in the exact values of physical constants. For example, the first 30 billion decimal digits of the fine structure constant contain 100 gigabits of information, putting the genome to shame.

Do I have that right?

Well, I will give you points for cleverness, but I'm not buying it. I doubt that it much matters what the constants are, out past the first hundred digits or so. Yes, I realize that the details of how the universe proceeds may be chaotic; it may involve sensitive dependence both on initial conditions and on physical constants. But I don't think that really matters. Physical constants haven't changed since the Cambrian, but genomes have. And I think that it is the change in genomes which led to the human brain, the dolphin brain, the parrot brain, and the octopus brain. Alter the fine structure constant in the 2 billionth decimal place, and those brain architectures would still work, and those genomes would still specify development pathways leading to them. Or so I believe.
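The two figures being compared above check out as back-of-envelope arithmetic (a decimal digit carries log2(10) ≈ 3.32 bits; the ~3.2 billion base pair count is the usual rough figure for the human genome):

```python
import math

genome_bits = 3.2e9 * 2            # ~3.2 billion base pairs at 2 bits each: ~6.4 gigabits
digit_bits = 30e9 * math.log2(10)  # 30 billion decimal digits: ~100 gigabits

# The genome comes in under the "7 gigabits" quoted above;
# the hypothetical digit string dwarfs it by more than an order of magnitude.
```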

Comment author: timtyler 23 August 2010 06:44:17PM *  0 points [-]

I doubt that it much matters what the constants are, out past the first hundred digits or so

What makes you think that?

I realize that the details of how the universe proceeds may be chaotic; it may involve sensitive dependence both on initial conditions and on physical constants. But I don't think that really matters.

...and why not?

Physical constants haven't changed since the Cambrian, but genomes have. And I think that it is the change in genomes which led to the human brain, the dolphin brain, the parrot brain, and the octopus brain.

Under the hypothesis that physics encodes relevant information, a lot of the required information was there from the beginning. The fact that brains only became manifest after the Cambrian doesn't mean the propensity for making brains was not there from the beginning. So: that observation doesn't tell you very much.

Alter the fine structure constant in the 2 billionth decimal place, and those brain architectures would still work, and those genomes would still specify development pathways leading to them. Or so I believe.

Right - but what evidence do you have of that? You are aware of chaos theory, no? Small changes can lead to dramatic changes surprisingly quickly.

Organisms inherit the laws of physics (and indeed the initial conditions of the universe they are in) - as well as their genomes. Information passes down the generations both ways. If you want to claim the design information is in one inheritance channel more than the other one, it seems to me that you need some evidence relating to that issue. The evidence you have presented so far seems pretty worthless - the delayed emergence of brains seems equally compatible with both of the hypotheses under consideration.

So: do you have any other relevant evidence?

Comment author: WrongBot 23 August 2010 06:59:07PM *  0 points [-]

No other rational [ETA: I meant physical and I am dumb] process is known to rely on physical constants to the degree you propose. What you propose is not impossible, but it is highly improbable.

Comment author: timtyler 23 August 2010 07:08:00PM *  1 point [-]

What?!? What makes you think that?

Sensitive dependence on initial conditions is an extremely well-known phenomenon. If you change the laws of physics a little bit, the result of a typical game of billiards will be different. This kind of phenomenon is ubiquitous in nature, from the orbit of planets, to the paths rivers take.

If a butterfly's wing flap can cause a tornado, I figure a small physical constant jog could easily make the difference between intelligent life emerging, and it not doing so billions of years later.

Sensitive dependence on initial conditions is literally everywhere. Check it out:

http://en.wikipedia.org/wiki/Chaos_theory
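The amplification being described is easy to demonstrate in a toy system (a toy illustration of sensitive dependence only, not evidence about actual physical constants). The logistic map is fully chaotic at r = 4, and a perturbation of the "constant" r in the 12th decimal place is amplified by roughly a factor of two per step:

```python
# Logistic map: x_{n+1} = r * x * (1 - x). At r = 4 the map is chaotic,
# so nearby parameters produce trajectories that diverge exponentially.
def trajectory(r, x0=0.3, steps=100):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(4.0)
b = trajectory(4.0 - 1e-12)  # perturb the "physical constant" in the 12th decimal
gap = [abs(x - y) for x, y in zip(a, b)]
# The gap starts at ~1e-13 and reaches order 1 within a few dozen steps.
```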

Comment author: Kingreaper 23 August 2010 07:11:11PM 1 point [-]

Did you miss this bit:

to the degree you propose

Sensitivity to initial conditions is one thing. Sensitivity to the billionth significant figure, within a couple of decades?

Comment author: JamesAndrix 24 August 2010 07:02:53AM 0 points [-]

I figure a small physical constant jog could easily make the difference between intelligent life emerging, and it not doing so billions of years later.

First, that is VERY different from the design information being in the constant but not in the genome. (You could more validly say that the genome is what it is because the constant is precisely what it is.)

Second, the billiard ball example is invalid. It doesn't matter exactly where the billiard balls are if you're getting hustled. Neurons are not typically sensitive to the precise positions of their atoms. Information processing relies on the ability to largely overlook noise.

Comment author: WrongBot 23 August 2010 08:43:26PM 0 points [-]

What physical process would cease to function if you increased c by a billionth of a percent? Or one of the other Planck units? Processes involved in the functioning of both neurons and transistors don't count, because then there's no difference to account for.

Comment author: JamesAndrix 23 August 2010 05:13:41PM 1 point [-]

Artificial wombs

Comment author: RobinZ 23 August 2010 05:17:38PM *  0 points [-]

Don't currently exist. I'm not sure that's a strong argument.