simpleton comments on 12-year old challenges the Big Bang - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (42)
Poor kid. He's a smart 12-year-old who has some silly ideas, as smart 12-year-olds often do, and now he'll never be able to live them down because some reporter wrote a fluff piece about him. Hopefully he'll grow up to be embarrassed by this, instead of turning into a crank.
His theories as quoted in the article don't seem to be very coherent -- I can't even tell if he's using the term "big bang" to mean the origin of the universe or a nova -- so I don't think there's much of a claim to be evaluated here.
Of course, it's very possible that the reporter butchered the quote. It's a human interest article and it's painfully obvious that the reporter parsed every word out of the kid's mouth as science-as-attire, with no attempt to understand the content.
I agree with this, but I'd bet this kid would be willing to drop his pet theory if he found it was wrong (if grudgingly). I really don't think this one article, or just being in the news mostly for his youth/intelligence combo will ruin him.
It's terribly common for highly intelligent boys to become slackers as adults. (More precisely, to strive to be "ordinary" and not overachieve). This book is a classic longitudinal study on this topic. I don't know how well this applies way out on the tail end of the bell curve where Jacob resides, as opposed to kids who are "just" in the top couple percent.
William James Sidis is a classic example.
Reminds me of this old article (04.19.01) about Yudkowsky:
Note: This is a LIE.
The correct quote was that I said on SL4 that when Douglas Lenat switched on Eurisko, essentially the first time anyone had ever built a Turing-complete freeform genuinely recursive self-modifier with heuristics modifying heuristics, he ought to have evaluated a 5% chance of it going FOOM.
I was 4 years old when Eurisko was switched on, and could not possibly have said anything at the time.
Declan McCullagh. Write it down. Never trust him.
No matter how many terrible things you've heard about the mainstream press, you truly cannot appreciate how bad it really, really is until you have been reported on yourself. It is at least two orders of magnitude worse than you think it is from reading Reddit.
The Wired article has a comments section, with 0 comments. You should probably put a response there.
Very true.
As someone who has worked in the industry, I can tell you that the process of creating news stories is remarkably similar to that of producing chicken nuggets -- although probably not as sanitary.
In particular, the tech press should be greeted with gunfire.
(There are exceptions - there are even two people at the Register I'd ever speak to under any circumstances - but even if you know and trust the journalist in question personally, be prepared for their editor to screw you both over.)
The mainstream press can't work technology more complicated than scissors, but they have occasionally heard the word "journalism."
Really - unless you're actually selling computer technology, there is no reason to deal with the tech press under any circumstances. It's the canonical example of "taking people seriously just because they pay you attention is often not a good idea." If only Wikipedia had worked that one out early on ...
(I wouldn't count Wired as "mainstream press", but the scary thing about your tale is that Declan McCullagh has a generally good reputation for a tech journalist.)
It truly is astonishing, the number of quotes that XiXiDu has about Eliezer. It's like he has a thick dossier, slowly accumulating negative content...
It would be interesting to see a list of all the material that has been deleted in cover-up operations over the years. We really need a SIAIWatch organisation.
[Added] some deletions that spring to mind:
Physics Workarounds (archived here)
Coding a Transhuman AI (archived here)
Eliezer, the person (archived here)
The deleted posts from around the time of Roko's departure.
Algernon's Law (archived here)
Love and Life Just Before the Singularity
Flare - though remnants survive.
SysopMind. (archived here)
Gaussian Humans (archived here)
The Seed AI page.
Becoming a Seed AI Programmer. (archived here)
The “Commitments” vanished from: http://singinst.org/aboutus/ourmission
They used to look like this:
So, did anyone actually save Roko's comments before the mass deletion?
Google Reader fetches every post and comment made on Less Wrong. Editing or deleting won't remove it: every comment and post that has ever been made is still there, saved by Google. You just have to add the right RSS feeds to Google Reader.
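The fetch-and-keep mechanism described above can be sketched in a few lines: a reader polls the site's comments RSS feed and stores each item, so later edits or deletions on the site don't touch the saved copies. This is a minimal illustration, not Google Reader's actual code; the inline sample feed and its URLs are made up for the example, and a real archiver would fetch the live feed with urllib instead.

```python
# Minimal sketch of RSS-based archiving: parse a feed and keep every item.
# The sample feed below is hypothetical; a real archiver would download the
# site's comments feed (e.g. with urllib.request) on a schedule and append
# new items to durable storage.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Less Wrong: Comments</title>
    <item><title>Comment by alice</title><link>http://example.com/c/1</link></item>
    <item><title>Comment by bob</title><link>http://example.com/c/2</link></item>
  </channel>
</rss>"""

def archive_items(feed_xml):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(archive_items(SAMPLE_FEED))
```

Once an item has been fetched and stored this way, deleting the original comment on the site has no effect on the archived copy, which is exactly why the deletions discussed here were recoverable.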
Ok now what.
They did. There's a very brief synopsis here.
See, while I'm not sure about the Commitments and obviously I'm reasoning from partial data across the board, most of these look like aspects of Eliezer's pre-2003 mistake(s). I thought of that but decided calling it a cover-up didn't make much sense; he spent a lot of time explaining his mistake and how his later views on it motivated his posts on LW.
[edited slightly for clarity]
Updated link.
Reading it for the first time today, I'm amused by how much section 1.8 resembles my own Singularitarian conversion moment.
And boy, is this quote ever true of me: "I do my best thinking into a keyboard."
It is quite funny how my story differs. See the banner on my homepage in 2005?
"Towards the Singularity and a Posthuman Future"
I was a believer. It seemed completely obvious that we'll soon see superhuman AI. When reading 'Permutation City' and 'Diaspora' I was bothered by how there was no AI, just emulations. That didn't seem right.
I changed my mind. I now think that a lot of what I believed I knew was based on extrapolations of current trends mixed with pure speculation. Those incredible amounts of technological optimism just seem naive now. It all sounds really cool and convincing when formulated in English, but that's not enough.
A common plot device - humans need human-like protagonists to identify with - or the story doesn't sell.
Such scenarios then get "reified" in people's minds, and a whole lot of nonsense results.
Deleting content that is no longer relevant is not the same thing as a cover up. It might be best to keep copies of such content around, but there's nothing inherently sinister about not doing so.
Citations needed.
I added some.
I only know of one cover-up operation, and that didn't include material directly about Eliezer or the SIAI.
Hey, maybe this was the point of that exercise -- a deliberately flawed cover-up, to make me underestimate how easily the SI can hide any facts they really want to keep secret!
I wouldn't exactly call it a cover-up. It looks to me like the actual goal was to ensure that a particular subject wouldn't develop further, by derailing any discussions about it into meta-discussions about censorship. Lots of noise was made, but no one ever published a sufficiently detailed description of the spell, so this did in fact succeed in averting a minor disaster.
Was this the "PUA Controversy"?
I don't save them anywhere and I never actively searched for negative content. It was either given to me by other people or I came across it by chance. That particular link is from a comment made by David Pearce on Facebook on a link posted there to the latest interview between Eliezer Yudkowsky and John Baez.
Do you think it is a bad idea to take a closer look at what is said about and by someone who is working on fooming AI?