asr comments on Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link] - Less Wrong

Post author: Dr_Manhattan 21 April 2014 04:55PM


Comment author: asr 25 April 2014 04:59:37PM 1 point

It's a big deal. In particular, I was startled to see Russell signing it. I don't put much weight on the physicists, who are well outside their area of expertise. But Russell is a totally respectable guy, and this is exactly his nominal area. I interacted with him a few times as a student, and he impressed me as a smart, thoughtful guy who wasn't given to pet theories.

Has he ever stopped by MIRI to chat? The Berkeley CS faculty are famously busy, but I'd think if he bothers to name-check MIRI in a prominent article, he'd be willing to come by and have a real technical discussion.

Comment author: Dr_Manhattan 26 April 2014 12:38:50AM 0 points

I don't know, but I found his omission of MIRI in this interview (found via lukeprog's FB) surprising http://is.gd/Dx0lw0

Comment author: IlyaShpitser 30 April 2014 10:00:10AM 0 points

It's not surprising to me at all; I think you might have an overly inflated opinion of MIRI. MIRI has no mainstream academic status, and isn't gaining any anytime soon.

Comment author: Dr_Manhattan 30 April 2014 01:09:58PM 0 points

Not sure if you're saying I have an inflated opinion of MIRI or of MIRI's status. If it's the former, my own opinion, FWIW, is that what MIRI lacks in academic status it makes up for by (initially) being the only org doing reasonably productive work in the area of safety research.

More specifically, AIMA mentions Friendly AI, and Yudkowsky by name, which is why I found the omission somewhat surprising.