
Kawoomba comments on Open thread, January 25- February 1 - Less Wrong Discussion

Post author: NancyLebovitz 25 January 2014 02:52PM




Comment author: lukeprog 28 January 2014 06:09:04PM 15 points

Robin Hanson on Facebook:

Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why "we" know a lot less about the future than we could.

Consider the case of Willard Wells and his Springer-published book Apocalypse When?: Calculating How Long the Human Race Will Survive (2009). From a UCSD news story about a talk Wells gave about the book:

Larry Carter, a UCSD emeritus professor of computer science, didn’t mince words. The first time he heard about Wells’s theories, he thought, “Oh my God, is this guy a crackpot?”

But persuaded by Well’s credentials, which include a PhD from Caltech in math and theoretical physics, a career that led him L-3 Photonics and the Caltech/Jet Propulsion Laboratory, and an invention under his belt, Carter gave the ideas a chance. And was intrigued.

For a taste of the book, here is Wells' description of one specific risk:

When advanced robots arrive... the serious threat [will be] human hackers. They may deliberately breed a hostile strain of androids, which then infects normal ones with its virus. To do this, the hackers must obtain a genetic algorithm and pervert it, probably early in the robotic age before safeguards become sophisticated... Excluding hackers, it seems unlikely that androids will turn against us as they do in some movies... computer code for hostility is too complex... In the very long term, androids will become conscious for the same reasons humans did, whatever those reasons may be... In summary, the androids have powerful instincts to nurture humans, but these instincts will be unencumbered by concerns for human rights. Androids will feel free to impose a harsh discipline that saves us from ourselves while violating many of our so-called human rights.

Now, despite Larry Carter's being "persuaded by Wells's credentials" — which might have been exaggerated or made up by the journalist, I don't know — I suspect very few people have taken Wells seriously, for good reason. He's clearly just making stuff up, with almost no study of the issue whatsoever. (On this topic, the only people he cites are Joy, Kurzweil, and Posner, despite the book being published in 2009.)

But reading that passage did drive home again what it must be like for most people to read FHI or MIRI on AI risk, or Robin Hanson on ems. They probably can't tell the difference between someone who is making stuff up and an argument that has gone through a gauntlet of 15 years of heated debate and both theoretical and empirical research.

Comment author: Kawoomba 28 January 2014 09:50:00PM 1 point

It might be a worthwhile endeavor to modify our wiki so that it serves not only as a mostly local reference for current terms and jargon, but also as an independent guide to the arguments for and against various concepts, where applicable. Establishing a neutral reference guide / argument map / history of the iterations an idea has gone through, written in a neutral voice, could create a lot of credibility and exposure. Ideally, neutrality regarding point of view works in favor of whoever has the balance of arguments on their side.

This need not be entirely new material; a few mandatory or recommended headers in each wiki entry, covering history, counterarguments, etc., would suffice. It could be worth lifting the wiki out of relative obscurity, with a new landing page, potentially marketed as a reference guide for journalists researching current topics. Kruel's LW interview with Shane Legg got linked in a NYTimes blog; why not a suitable LW wiki article, too?