
ChristianKl comments on Open thread, January 25- February 1 - Less Wrong Discussion

8 Post author: NancyLebovitz 25 January 2014 02:52PM




Comment author: lukeprog 28 January 2014 06:09:04PM 15 points

Robin Hanson on Facebook:

Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why "we" know a lot less about the future than we could.

Consider the case of Willard Wells and his Springer-published book Apocalypse When?: Calculating How Long the Human Race Will Survive (2009). From a UCSD news story about a talk Wells gave about the book:

Larry Carter, a UCSD emeritus professor of computer science, didn’t mince words. The first time he heard about Wells’s theories, he thought, “Oh my God, is this guy a crackpot?”

But persuaded by Well’s credentials, which include a PhD from Caltech in math and theoretical physics, a career that led him L-3 Photonics and the Caltech/Jet Propulsion Laboratory, and an invention under his belt, Carter gave the ideas a chance. And was intrigued.

For a taste of the book, here is Wells' description of one specific risk:

When advanced robots arrive... the serious threat [will be] human hackers. They may deliberately breed a hostile strain of androids, which then infects normal ones with its virus. To do this, the hackers must obtain a genetic algorithm and pervert it, probably early in the robotic age before safeguards become sophisticated... Excluding hackers, it seems unlikely that androids will turn against us as they do in some movies... computer code for hostility is too complex... In the very long term, androids will become conscious for the same reasons humans did, whatever those reasons may be... In summary, the androids have powerful instincts to nurture humans, but these instincts will be unencumbered by concerns for human rights. Androids will feel free to impose a harsh discipline that saves us from ourselves while violating many of our so-called human rights.

Now, despite Larry Carter's being "persuaded by Wells's credentials" — which might have been exaggerated or made up by the journalist, I don't know — I suspect very few people have taken Wells seriously, for good reason. He's clearly just making stuff up, with almost no study of the issue whatsoever. (On this topic, the only people he cites are Joy, Kurzweil, and Posner, despite the book being published in 2009.)

But reading that passage did drive home again what it must be like for most people to read FHI or MIRI on AI risk, or Robin Hanson on ems. They probably can't tell the difference between someone who is making stuff up and an argument that has gone through a gauntlet of 15 years of heated debate and both theoretical and empirical research.

Comment author: ChristianKl 28 January 2014 11:21:21PM 1 point

Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why "we" know a lot less about the future than we could.

I don't think that's the case. Most people who are listened to on the future don't tend to speak to an audience primarily consisting of futurists.

There are think tanks that employ people to think about the future, and those think tanks generally tend to be quite good at influencing the public debate.

I also don't think that academia has any special claim to being specialists about the future. When I think about specialists on futurism, names like Stewart Brand or Bruce Sterling come to mind.

Comment author: IlyaShpitser 29 January 2014 12:58:28PM 1 point

I don't think that's the case. Most people who are listened to on the future don't tend to speak to an audience primarily consisting of futurists.

This is a very important and general point. While it is important to communicate ideas to a general audience, excessive communication to general audiences at the expense of communication to peers should generally be "bad news" when it comes to evaluating experts. Folks like Witten mostly just get work done; they don't write popular science books.

Comment author: ChristianKl 29 January 2014 01:53:31PM 0 points

Witten doesn't ring a bell with me. Googling the name turns up Edward Witten and Tarynn Madysyn Witten. Do you mean either of them, or someone else?

Comment author: IlyaShpitser 29 January 2014 01:55:15PM 3 points

I mean Edward Witten, one of the most prominent physicists alive. The fact that his name does not ring a bell is precisely my point. The names that do ring a bell are the names of folks who are "good at the media," not necessarily folks who are the best in their field.

Comment author: ChristianKl 29 January 2014 02:09:00PM 0 points

Okay, given that the subject is theoretical physics and I'm not much into that field, I understand why I don't recognize the name.

Looking at his Wikipedia page, I see he made the Time 100, so the name still might be worth knowing.

Comment author: Ander 30 January 2014 11:14:37PM 1 point

Witten is one of the greatest physicists alive, if not the greatest. He is the one who unified the various string theories into M-theory. He is also the only physicist to receive a Fields Medal.