
Sean_o_h comments on GCRI: Updated Strategy and AMA on EA Forum next Tuesday - Less Wrong Discussion

7 Post author: RyanCarey 23 February 2015 12:35PM



Comment author: Sean_o_h 23 February 2015 09:59:50PM 2 points

Seth is a very smart, formidably well-informed, and careful thinker - I'd highly recommend jumping on the opportunity to ask him questions.

His latest piece in the Bulletin of the Atomic Scientists is worth a read too. It's on the "Stop Killer Robots" campaign. He shares the view of Stuart Russell (and others) that autonomous weapons are a bad road to go down, and he also presents the campaign as a test case for existential risk mitigation - a pre-emptive ban on a dangerous future technology:

"However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril."

http://thebulletin.org/stopping-killer-robots-and-other-future-threats8012