ArisKatsaris comments on Tactics against Pascal's Mugging - Less Wrong

16 Post author: ArisKatsaris 25 April 2013 12:07AM



Comment author: ArisKatsaris 28 April 2013 12:15:35PM, 3 points

But somehow the asteroid example does not feel like Pascal's mugging.

To state Dmytry's argument directly, without beating around the bush: an asteroid killed the dinosaurs and an AI did not, therefore discussion of asteroids is not Pascal's mugging and discussion of AI risks is; the former makes you merely scientifically aware, while the latter makes you a fraudster, a crackpot, or a sucker. If AI risks were real, then surely the dinosaurs would have been killed by an AI instead.

So, let's all give money to prevent an asteroid extinction event that we know only happens once in a few hundred million years, and let's not pay any attention to AI risks, because AI risks, after all, never happened to the dinosaurs, and must therefore be impossible, much like self-driving cars, heavier-than-air flight, or nuclear bombs.

Comment deleted 28 April 2013 03:09:23PM
Comment author: MugaSofer 29 April 2013 01:26:32PM, -2 points

... doesn't GiveWell recommend that for pretty much every charity, because you should be giving it to the Top Three Effective Charities?

Comment deleted 29 April 2013 07:23:58PM
Comment author: MugaSofer 29 April 2013 08:51:21PM, -1 points

Which mission? The FAI mission? The GiveWell mission? I am confused :(

I don't suppose he said this somewhere linkable?

Comment author: CarlShulman 30 April 2013 03:38:39AM, 5 points

Here he claims that the default outcome of AI is very likely safe, but that attempts at Friendly AI are very likely deadly if they do anything (although I would argue this neglects the correlation between which AI approaches are workable and which are dangerous, for both mainstream and would-be FAI efforts, as well as assuming some silly behaviors and assuming that competitive pressures aren't severe):

I believe that unleashing an all-powerful "agent AGI" (without the benefit of experimentation) would very likely result in a UFAI-like outcome, no matter how carefully the "agent AGI" was designed to be "Friendly." I see SI as encouraging (and aiming to take) this approach. I believe that the standard approach to developing software results in "tools," not "agents," and that tools (while dangerous) are much safer than agents. A "tool mode" could facilitate experiment-informed progress toward a safe "agent," rather than needing to get "Friendliness" theory right without any experimentation. Therefore, I believe that the approach SI advocates and aims to prepare for is far more dangerous than the standard approach, so if SI's work on Friendliness theory affects the risk of human extinction one way or the other, it will increase the risk of human extinction. Fortunately I believe SI's work is far more likely to have no effect one way or the other.

Comment author: shminux 28 April 2013 07:58:30PM, 2 points

Downvoted for ranting.

Comment author: [deleted] 28 April 2013 08:07:59PM, -2 points

I thought it was satire?

Comment author: MugaSofer 29 April 2013 01:25:12PM, -2 points

It is. It's also a rant.

It makes a good point, mind, and I upvoted it, but it's still needlessly ranty.

Comment author: shminux 28 April 2013 08:46:34PM, -1 points

Then maybe my sarcasm/irony/satire detector is broken.

Comment author: MugaSofer 29 April 2013 01:24:54PM, 0 points

Well, I seriously doubt Aris actually thinks AI risks are Pascal's Mugging by definition. That doesn't prevent this from being a rant; it's just a sarcastic rant.