Comment author: Dr_Manhattan 07 May 2015 12:22:45AM 0 points [-]

If you like this, and have seen The Thing, give this a whirl:

https://web.archive.org/web/20150214095915/http://clarkesworldmagazine.com/watts_01_10/

Comment author: Dr_Manhattan 06 April 2015 11:59:33PM 0 points [-]
Comment author: Dr_Manhattan 02 April 2015 04:49:18PM 0 points [-]

Alex Garland has two of the most intelligent sci-fi movies of the decade (28 Days Later, Sunshine) to his credit. Very much looking forward to a movie of that level of sophistication in the AI area.

Comment author: [deleted] 25 March 2015 01:44:54PM 0 points [-]

Yes. Or rather, a yet-unknown mixture of weak-to-mid-strong evidence and posterior rationalization.

In response to comment by [deleted] on Useful Habits Repository
Comment author: Dr_Manhattan 25 March 2015 04:36:02PM 0 points [-]

Not sure how you do your sampling, but most videos I watch end up pretty high-value and lack subtitles in most cases.

Comment author: [deleted] 25 March 2015 08:58:24AM 1 point [-]

speed up all informative video/audio

My "solution": ragequit when an interesting-looking link ends up being a video. Rationalization: if nobody bothered to write a transcript, it cannot be that interesting.

In response to comment by [deleted] on Useful Habits Repository
Comment author: Dr_Manhattan 25 March 2015 01:24:39PM 0 points [-]

Rationalization

Comment author: Dr_Manhattan 17 March 2015 01:32:23AM 3 points [-]

I am now the King of New York!

Comment author: Vaniver 07 March 2015 03:38:05PM 3 points [-]

My impression was that imminence is a point of contention, much less orthogonality.

This article is a good place to start in clarifying the MIRI position. Since their estimate for imminence seems to boil down to "we asked the community what they thought and made a distribution," I don't see that as contention.

There is broad uncertainty about timelines, but the MIRI position is "uncertainty means we should not be confident we have all the time we need," not "we're confident it will happen soon," which is where someone would need to be for me to say they're "for imminence."

Comment author: Dr_Manhattan 07 March 2015 04:07:41PM 0 points [-]

Interesting. I considered imminence more of a point of contention b/c the most outspoken "AI risk is overhyped" people are mostly using it as an argument (and I consider this bunch way more serious than Searle and Brooks: Yann LeCun, Yoshua Bengio, Andrew Ng).

Comment author: Vaniver 06 March 2015 08:51:00PM *  5 points [-]

I am interpreting IlyaShpitser as commenting on OphilaDros's presentation; why say Ng "disses" UFAI concerns instead of "dismisses" them?

It also doesn't help that the underlying content is a handful of different issues that all bleed together: the orthogonality question, the imminence question, and the Hollywood question. Ng is against Hollywood and against imminence, and I haven't read enough of his writing on the subject to be sure of his thoughts on orthogonality, which is one of the actual meaningful points of contention between MIRI and other experts on the issue. (And even those three don't touch on Ng's short objection, that he doesn't see a fruitful open problem!)

Comment author: Dr_Manhattan 07 March 2015 03:19:22PM 1 point [-]

My impression was that imminence is a point of contention, much less orthogonality. Who specifically do you have in mind?

Comment author: IlyaShpitser 06 March 2015 10:24:49AM 2 points [-]

Tribal talk.

Comment author: Dr_Manhattan 06 March 2015 08:17:41PM *  1 point [-]

Which tribe is Ng in? (if that's what you are talking about)

Comment author: alienist 14 February 2015 06:12:25AM 4 points [-]

I find it interesting that no one has yet mentioned Grothendieck's rather eccentric later behavior.

Comment author: Dr_Manhattan 18 February 2015 12:45:52AM 2 points [-]

Do you feel it is relevant to the article's thesis?
