I thought this may be of interest to the LW community. Jacob Barnett is a 12-year-old boy who taught himself all of high school math (algebra through calculus), has a measured math IQ of 170 (for what that's worth), and is on track to become an astrophysics researcher. His newsworthy claim to fame (aside from being really young): the Big Bang theory is incorrect (I believe the article says something about a lack of carbon in the model), and he plans to develop a new theory.

I haven't studied physics seriously, so I have nothing to say about his claim. I realize the cited news article puts his claim fairly generally, so I'll ask this: Can someone explain how elements are generally modeled to have formed from the big bang? And is there anything that Jacob may be missing in the current literature?


Poor kid. He's a smart 12-year-old who has some silly ideas, as smart 12-year-olds often do, and now he'll never be able to live them down because some reporter wrote a fluff piece about him. Hopefully he'll grow up to be embarrassed by this, instead of turning into a crank.

His theories as quoted in the article don't seem to be very coherent -- I can't even tell if he's using the term "big bang" to mean the origin of the universe or a nova -- so I don't think there's much of a claim to be evaluated here.

Of course, it's very possible that the reporter butchered the quote. It's a human interest article and it's painfully obvious that the reporter parsed every word out of the kid's mouth as science-as-attire, with no attempt to understand the content.

[-][anonymous]13y40

Hopefully he'll grow up to be embarrassed by this, instead of turning into a crank.

I agree with this, but I'd bet this kid would be willing to drop his pet theory if he found it was wrong (if grudgingly). I really don't think this one article, or just being in the news mostly for his youth/intelligence combo, will ruin him.

It's terribly common for highly intelligent boys to become slackers as adults (more precisely, to strive to be "ordinary" and not to overachieve). This book is a classic longitudinal study on the topic. I don't know how well this applies way out on the tail end of the bell curve where Jacob resides, as opposed to kids who are "just" in the top couple percent.

[-][anonymous]13y10

He's a smart 12-year-old who has some silly ideas, as smart 12-year-olds often do, and now he'll never be able to live them down because some reporter wrote a fluff piece about him.

Reminds me of this old article (04.19.01) about Yudkowsky:

Since then, Yudkowsky has become not just someone who predicts the Singularity, but a committed activist trying to speed its arrival. "My first allegiance is to the Singularity, not humanity," he writes in one essay. "I don't know what the Singularity will do with us. I don't know whether Singularities upgrade mortal races, or disassemble us for spare atoms.... If it comes down to Us or Them, I'm with Them."

[...]

Yudkowsky takes it a step further, writing that he believes AI "will be developed on symmetric-multiprocessing hardware, at least initially." He said he expects Singularity could happen in the very near future: "I wouldn't be surprised if tomorrow was the Final Dawn, the last sunrise before the Earth and Sun are reshaped into computing elements."

When one researcher booted up a program he hoped would be AI-like, Yudkowsky said he believed there was a 5 percent chance the Singularity was about to happen and human existence would be forever changed.

[...]

In an autobiographical essay, he writes: "I think my efforts could spell the difference between life and death for most of humanity, or even the difference between a Singularity and a lifeless, sterilized planet... I think that I can save the world, not just because I'm the one who happens to be making the effort, but because I'm the only one who can make the effort."

Yudkowsky said he believed there was a 5 percent chance the Singularity was about to happen and human existence would be forever changed.

Note: This is a LIE.

The correct quote was that I said on SL4 that when Douglas Lenat switched on Eurisko, essentially the first time anyone had ever built a Turing-complete freeform genuinely recursive self-modifier with heuristics modifying heuristics, he ought to have evaluated a 5% chance of it going FOOM.

I was 4 years old when Eurisko was switched on, and could not possibly have said anything at the time.

Declan McCullagh. Write it down. Never trust him.

No matter how many terrible things you've heard about the mainstream press, you truly cannot appreciate how bad it really, really is until you have been reported on yourself. It is at least two orders of magnitude worse than you think it is from reading Reddit.

The Wired article has a comments section, with 0 comments. You should probably put a response there.

No matter how many terrible things you've heard about the mainstream press, you truly cannot appreciate how bad it really, really is until you have been reported on yourself. It is at least two orders of magnitude worse than you think it is from reading Reddit.

Very true.

As someone who has worked in the industry, I can tell you that the process of creating news stories is remarkably similar to that of producing chicken nuggets -- although probably not as sanitary.

In particular, the tech press should be greeted with gunfire.

(There are exceptions - there are even two people at the Register I'd speak to under any circumstances - but even if you know and trust the journalist in question personally, be prepared for their editor to screw you both over.)

The mainstream press can't work technology more complicated than scissors, but they have occasionally heard the word "journalism."

Really - unless you're actually selling computer technology, there is no reason to deal with the tech press under any circumstances. It's the canonical example of "taking people seriously just because they pay you attention is often not a good idea." If only Wikipedia had worked that one out early on ...

(I wouldn't count Wired as "mainstream press", but the scary thing about your tale is that Declan McCullagh has a generally good reputation for a tech journalist.)

[-]Rain13y120

It truly is astonishing, the number of quotes that XiXiDu has about Eliezer. It's like he has a thick dossier, slowly accumulating negative content...

It would be interesting to see a list of all the material that has been deleted in cover-up operations over the years. We really need a SIAIWatch organisation.

[Added] some deletions that spring to mind:

Physics Workarounds (archived here)

Coding a Transhuman AI (archived here)

Eliezer, the person (archived here)

The deleted posts from around the time of Roko's departure.

Algernon's Law (archived here)

Love and Life Just Before the Singularity

Flare - though remnants survive.

SysopMind (archived here)

Gaussian Humans (archived here)

The Seed AI page.

Becoming a Seed AI Programmer (archived here)

The “Commitments” vanished from: http://singinst.org/aboutus/ourmission

They used to look like this:

Commitments

  • SIAI will not enter any partnership that compromises our values.
  • Technology developed by SIAI will not be used to harm human life.
  • The challenge, opportunity and risk of artificial intelligence is the common concern of all humanity. SIAI will not show ethnic, national, political, or religious favoritism in the discharge of our mission.

So, did anyone actually save Roko's comments before the mass deletion?

They did. There's a very brief synopsis here.

[-][anonymous]12y00

That was a surprisingly good summary of Roko's basilisk. Thanks for the link.

In case anyone's wondering, here's the standard answer I give to people who are unsure whether to worry about the basilisk: the AI won't adopt the awful strategy if adopting it hurts the AI overall instead of helping, which is something you can affect by (conditionally) refusing to donate. Of course this answer doesn't come with a guarantee of correctness, but feel free to accept it if it works for you.
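To make the structure of that argument concrete, here's a toy sketch (all numbers -- the donor count, the 10% "blackmail bonus", and the refusal fractions -- are made up purely for illustration, not anything claimed in the thread):

```python
# Toy model of the "conditional refusal" argument (all numbers hypothetical).
# Donors precommit: if the AI adopts the blackmail strategy, they refuse to donate.

def ai_expected_payoff(adopt_blackmail, donors, donation_value, refusal_fraction):
    """Expected donation payoff to the AI under each strategy."""
    baseline = donors * donation_value
    if not adopt_blackmail:
        return baseline
    blackmail_bonus = 0.1 * baseline    # extra donations scared up by the threat
    lost = refusal_fraction * baseline  # conditional refusers walk away
    return baseline + blackmail_bonus - lost

for frac in (0.0, 0.05, 0.5):
    honest = ai_expected_payoff(False, 100, 1.0, frac)
    blackmail = ai_expected_payoff(True, 100, 1.0, frac)
    print(f"refusal={frac:.0%}: honest={honest:.0f}, blackmail={blackmail:.0f}, "
          f"blackmail pays: {blackmail > honest}")
```

The point is only the sign flip: once enough donors conditionally precommit to refuse, the strategy costs the AI more than it gains, so a payoff-maximizer never adopts it in the first place.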

[This comment is no longer endorsed by its author]

So, did anyone actually save Roko's comments before the mass deletion?

Google Reader fetches every post and comment made on LessWrong. Editing or deleting won't remove it: all comments and posts that have ever been made are still there, saved by Google. You just have to add the right RSS feeds to Google Reader.

Ok now what.

See, while I'm not sure about the Commitments and obviously I'm reasoning from partial data across the board, most of these look like aspects of Eliezer's pre-2003 mistake(s). I thought of that but decided calling it a cover-up didn't make much sense; he spent a lot of time explaining his mistake and how his later views on it motivated his posts on LW.

[edited slightly for clarity]

Deleting content that is no longer relevant is not the same thing as a cover up. It might be best to keep copies of such content around, but there's nothing inherently sinister about not doing so.

Citations needed.

I added some.

Eliezer, the person (archived here)

Updated link.

Reading it for the first time today, I'm amused by how much section 1.8 resembles my own Singularitarian conversion moment.

And boy, is this quote ever true of me: "I do my best thinking into a keyboard."

I'm amused by how much section 1.8 resembles my own Singularitarian conversion moment.

It is quite funny how my story differs. See the banner on my homepage in 2005?

"Towards the Singularity and a Posthuman Future"

I was a believer. It seemed completely obvious that we'll soon see superhuman AI. When reading 'Permutation City' and 'Diaspora' I was bothered by how there was no AI, just emulations. That didn't seem right.

I changed my mind. I now think that a lot of what I believed I knew was based on extrapolations of current trends mixed with pure speculation. That incredible amount of technological optimism just seems naive now. It all sounds really cool and convincing when formulated in English, but that's not enough.

I was a believer. It seemed completely obvious that we'll soon see superhuman AI. When reading 'Permutation City' and 'Diaspora' I was bothered by how there was no AI, just emulations. That didn't seem right.

A common plot device - humans need human-like protagonists to identify with - or the story doesn't sell.

Such scenarios then get "reified" in people's minds, and a whole lot of nonsense results.

I only know of one cover-up operation, and that didn't include material directly about Eliezer or the SIAI.

Hey, maybe this was the point of that exercise -- a deliberately flawed cover-up, to make me underestimate how easily the SI can hide any facts they really want to keep secret!

I wouldn't exactly call it a cover-up. It looks to me like the actual goal was to ensure that a particular subject wouldn't develop further, by derailing any discussions about it into meta-discussions about censorship. Lots of noise was made, but no one ever published a sufficiently detailed description of the spell, so this did in fact succeed in averting a minor disaster.

Was this the "PUA Controversy"?

[This comment is no longer endorsed by its author]

It truly is astonishing, the number of quotes that XiXiDu has about Eliezer. It's like he has a thick dossier, slowly accumulating negative content...

I don't save them anywhere, and I never actively searched for negative content. It was either given to me by other people or I came across it by chance. That particular link comes from a comment David Pearce made on Facebook, on a link posted there to the latest interview between Eliezer Yudkowsky and John Baez.

Do you think it is a bad idea to take a closer look at what is said about and by someone who is working on fooming AI?


I tried to refute the Big Bang theory when I was his age too. When you're young and don't know anyone significantly smarter than you, it's easy not to develop good "somebody would already have thought of this" sensors.

When you're young and don't know anyone significantly smarter than you, it's easy not to develop good "somebody would already have thought of this" sensors.

And it just keeps getting harder and harder to develop them as you age.

Apparently a lot of LessWrongers try something like this. The commenters in that article became rationalists; this kid may have a good chance.

I hope he is eventually encouraged to go into medical research, rather than physics. (Not because it's so terrible to be wrong in a theory about the Big Bang, but because who cares and we need the smartest people working on life extension, please.)

New discoveries in theoretical physics may someday have major utility payouts. My main reason for dropping physics is that there's not much marginal utility in another smart person going into physics.

[-]Gray13y20

That's an interesting principle, applied generally. Wouldn't it be a good thing for society if intelligent and knowledgeable people went into fields where there aren't many intelligent and knowledgeable people? The first field like this that comes to mind is nutrition; there just seems to be a lot of shallow thinking in that field.

Can someone explain how elements are generally modeled to have formed from the big bang?

Only hydrogen, helium, and traces of lithium were formed in the big bang. All other elements are formed in stars and distributed into space by novas, supernovas, and the like. Carbon forms fairly easily, in fairly ordinary stars, but most elements are only formed in supernovas.
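A back-of-the-envelope sketch of why it comes out that way (standard textbook numbers, not anything from the article): by the time nuclei could survive the ambient radiation, neutrons were outnumbered by protons roughly 1 to 7, and essentially every surviving neutron got locked into helium-4. That pins the primordial helium mass fraction:

```latex
% Primordial helium mass fraction, given n/p ~ 1/7 at the onset of nucleosynthesis
Y_p = \frac{2\,(n/p)}{1 + (n/p)}
    \approx \frac{2 \times \tfrac{1}{7}}{1 + \tfrac{1}{7}}
    = \frac{1}{4}
```

So roughly a quarter of the mass ends up as helium and nearly all the rest stays hydrogen, with lithium as a trace leftover; everything heavier had to wait for stars.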

IIRC, normal stars are able to fuse things together up to iron, at which point (for some reason I don't understand) fusion ceases to be able to power the star sufficiently.

http://en.wikipedia.org/wiki/Iron_peak has a pretty straightforward explanation of why the usual channels of element production stop at iron.
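The gist, as a minimal sketch (binding energies per nucleon are approximate values from standard mass tables):

```python
# Binding energy per nucleon (MeV), approximate values from standard mass tables.
# Fusion releases energy only while it moves nuclei *up* this curve; the curve
# peaks in the iron group (Fe-56 / Ni-62), so fusion stops paying there.
binding_per_nucleon = {
    "H-2": 1.11, "He-4": 7.07, "C-12": 7.68, "O-16": 7.98,
    "Si-28": 8.45, "Fe-56": 8.790, "Ni-62": 8.794, "U-238": 7.57,
}

peak = max(binding_per_nucleon, key=binding_per_nucleon.get)
print(f"Most tightly bound: {peak}")  # -> an iron-group nucleus
```

Past the iron group, fusing nuclei would have to run downhill on that curve, consuming energy instead of releasing it, which is why the star's power source gives out.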

Can someone explain how elements are generally modeled to have formed from the big bang? And is there anything that Jacob may be missing in the current literature?

Yep. Jacob's quite right about nucleosynthesis in the Big Bang, but that's not even close to the only nucleosynthesis pathway out there.

First-generation stars (called "Population III", confusingly) are thought to have contained almost no elements heavier than helium, which may have allowed them to stably reach much higher masses than the current generation can manage. Mid-weight elements up to the mass of iron are formed through one of several fantastically complicated fusion processes which occur as older stars deplete their fusible hydrogen and start accumulating helium in their cores; carbon in particular is generated mainly through the triple-alpha process. Elements heavier than iron don't release energy when fused, so stars can't produce them in quantity; they're instead produced mainly by rapid neutron capture (the r-process) during the early stages of a supernova.
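For a concrete sense of the triple-alpha step mentioned above, here's a minimal sketch computing its energy release from atomic masses (table values; this is just the mass defect converted to energy, not a model of the reaction rate):

```python
# Q-value of the triple-alpha process: 3 He-4 -> C-12 + gamma.
# Atomic masses in unified atomic mass units, from standard mass tables.
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

m_he4 = 4.002602     # helium-4
m_c12 = 12.000000    # carbon-12 (exactly 12 u by definition)

delta_m = 3 * m_he4 - m_c12    # mass lost in the fusion, in u
q_mev = delta_m * U_TO_MEV     # energy released per reaction
print(f"Q = {q_mev:.2f} MeV")  # ~7.27 MeV, matching the textbook value
```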