Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice"

15 Artaxerxes 28 March 2015 09:17AM

Link to Blog Post: "Extremism in Thought Experiments is No Vice"

_____

Phil Robertson is being criticized for a thought experiment in which an atheist’s family is raped and murdered. On a talk show, he accused atheists of believing that there was no such thing as objective right or wrong, then continued:

I’ll make a bet with you. Two guys break into an atheist’s home. He has a little atheist wife and two little atheist daughters. Two guys break into his home and tie him up in a chair and gag him.

Then they take his two daughters in front of him and rape both of them and then shoot them, and they take his wife and then decapitate her head off in front of him, and then they can look at him and say, ‘Isn’t it great that I don’t have to worry about being judged? Isn’t it great that there’s nothing wrong with this? There’s no right or wrong, now, is it dude?’

Then you take a sharp knife and take his manhood and hold it in front of him and say, ‘Wouldn’t it be something if [there] was something wrong with this? But you’re the one who says there is no God, there’s no right, there’s no wrong, so we’re just having fun. We’re sick in the head, have a nice day.’

If it happened to them, they probably would say, ‘Something about this just ain’t right’.

The media has completely proportionally described this as Robertson “fantasizing about” raping atheists, and there are the usual calls for him to apologize/get fired/be beheaded.

So let me use whatever credibility I have as a guy with a philosophy degree to confirm that Phil Robertson is doing moral philosophy exactly right.

_____

This is a LW discussion post for Yvain's blog posts at Slate Star Codex, as per tog's suggestion:

Like many Less Wrong readers, I greatly enjoy Slate Star Codex; there's a large overlap in readership. However, the comments there are far worse, not worth reading for me. I think this is in part due to the lack of LW-style up and downvotes. Have there ever been discussion threads about SSC posts here on LW? What do people think of the idea of occasionally having them? Does Scott himself have any views on this, and would he be OK with it?

Scott/Yvain granted permission (via Facebook) to repost on LW:

I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"


Musk on AGI Timeframes

19 Artaxerxes 17 November 2014 01:36AM

Elon Musk submitted a comment on this article at edge.org a day or so ago. It was later removed.

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...


Now Elon has been making noises about AI safety lately in general, including, for example, mentioning Bostrom's Superintelligence on Twitter. But this is the first time that I know of that he's come up with his own predictions of the timeframes involved, and I think his are quite soon compared to most.

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

We can compare this to MIRI's post from May this year, When Will AI Be Created?, which suggests both that it's reasonable to expect AI to be further away than that, and that there is a lot of uncertainty on the issue.

Of course, "something seriously dangerous" might not refer to full-blown superintelligent uFAI; there's plenty of room for disasters of intermediate magnitude, anywhere between the 2010 flash crash and Clippy turning the universe into paperclips.

In any case, it's true that Musk has more "direct exposure" to those on the frontier of AGI research than your average person, and it's also true that he has an audience, so I think there is some interest to be found in his comments here.