Psychohistorian comments on Why Support the Underdog? - Less Wrong

35 Post author: Yvain 05 April 2009 12:01AM




Comment author: JulianMorrison 05 April 2009 09:07:08PM *  0 points

(Yesterday I heard someone who ought to know say human-level AI, not provably Friendly, in 16 years. Yes, my jaw hit the floor too.)

I hadn't thought of the "park it, we have bigger problems" or "park it, Omega will fix it" approach, but it might make sense. That raises a question, and I hope it's not treading too far into off-LW territory: to what extent ought a reasoning person act as if they expected gradual, incremental change in the status quo, and to what extent ought their planning be dominated by the expectation of large disruptions in the near future?

Comment deleted 05 April 2009 10:16:29PM
Comment author: JulianMorrison 06 April 2009 12:03:07AM *  1 point

The question I was struggling to articulate was more like: should I give credence to my own beliefs? How much? And how do I deal with an instinct that doesn't want to put AI and postmen in the same category of "real"?

Comment author: Eliezer_Yudkowsky 05 April 2009 09:15:57PM 0 points

Who on Earth do you think ought to know that?

Comment author: JulianMorrison 05 April 2009 09:17:34PM 1 point

Shane Legg, who was at the London LW meetup.

Comment deleted 05 April 2009 10:13:19PM
Comment author: JulianMorrison 05 April 2009 11:29:25PM *  2 points

From what he explained, the job of reverse-engineering a biological mind is looking much easier than expected - there's no need to grovel around at the level of single neurons, since the functional units are bunches of neurons, and they implement algorithms recognizable from conventional AI.

Comment author: Eliezer_Yudkowsky 06 April 2009 11:31:54AM 0 points

This sounds like a statement made by some hopeful neuromodeler looking for funding rather than a known truth of science.

Comment author: JulianMorrison 06 April 2009 03:39:51PM 2 points

You want the details? Ask the pirate, not the parrot.

Rawwrk. Pieces of eight.

Comment author: whpearson 05 April 2009 10:22:07PM *  1 point

I think it would make an interesting group effort to try to estimate the pace of neuro research, to get some idea of how soon we can expect neuro-inspired AI.

I'm going to try to figure out the number of researchers working on the algorithms for long-term changes to neural organisation (LTP, neuroplasticity, and neurogenesis). I get the feeling it is far fewer than the number working on short-term functionality, but I'm not an expert and not immersed in the field.
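A group estimate like this could start as a simple Fermi calculation. The sketch below shows only the shape of such a computation; every figure in it is a made-up placeholder, not real data about neuroscience, and `fermi_estimate` is a hypothetical helper, not anything from the thread:

```python
# Fermi-estimate sketch: size a research subfield from two rough inputs.
# All numbers below are placeholder assumptions, not real survey data.

def fermi_estimate(total_researchers: int, fraction_on_topic: float) -> float:
    """Rough count of researchers on a subtopic (e.g. long-term neural
    change: LTP, plasticity, neurogenesis) given an assumed field size
    and an assumed fraction working on that subtopic."""
    return total_researchers * fraction_on_topic

# Placeholder inputs - replace with real publication or survey counts.
low = fermi_estimate(50_000, 0.02)   # pessimistic fraction
high = fermi_estimate(50_000, 0.10)  # optimistic fraction
print(f"Estimated range: {low:.0f} - {high:.0f} researchers")
```

Bracketing the unknown fraction with a low and high guess gives a range rather than a false point estimate, which is usually the honest output of a Fermi exercise.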

Comment author: Nick_Tarleton 06 April 2009 04:45:00AM 1 point

Please do; this sounds extremely valuable.

Comment deleted 06 April 2009 10:24:39AM
Comment author: Eliezer_Yudkowsky 06 April 2009 12:12:49PM 0 points

Ja, going off-topic.