JulianMorrison comments on Why Support the Underdog? - Less Wrong

Post author: Yvain 05 April 2009 12:01AM

Comment author: JulianMorrison 05 April 2009 12:43:28PM 0 points [-]

OK, what's YOUR position, and how much do you know? Then Yvain can dump historical facts on you, and we'll see how far you shift and in what direction.

Comment deleted 05 April 2009 02:04:21PM *  [-]
Comment author: loqi 05 April 2009 07:26:31PM 2 points [-]

This is indeed a pretty utilitarian position. I think the objection you're likely to run into is that by evaluating the situation purely in terms of the present, it sweeps historic precedents under the rug.

Put another way, the "this conflict represents a risk, let's just cool it" argument can just as easily be made by any aggressor directly after initiating the conflict.

Comment author: Eliezer_Yudkowsky 05 April 2009 07:29:14PM 2 points [-]

Yup. If you don't punish aggressors and just demand "peace at any price" once the war starts, that peace sure won't last long.

Comment deleted 05 April 2009 08:36:17PM *  [-]
Comment author: JulianMorrison 05 April 2009 09:07:08PM *  0 points [-]

(Yesterday I heard someone who ought to know say AI at human level, and not provably friendly, in 16 years. Yes, my jaw hit the floor too.)

I hadn't thought of the "park it, we have bigger problems" or "park it, Omega will fix it" approach, but it might make sense. That raises a question, and I hope it's not treading too far into off-LW territory: to what extent ought a reasoning person act as if they expected gradual, incremental change in the status quo, and to what extent ought their planning be dominated by the expectation of large disruptions in the near future?

Comment deleted 05 April 2009 10:16:29PM [-]
Comment author: JulianMorrison 06 April 2009 12:03:07AM *  1 point [-]

The question I was struggling to articulate was more like: should I give credence to my own beliefs? How much? And how do I deal with an instinct that doesn't want to put AI and postmen in the same category of "real"?

Comment author: Eliezer_Yudkowsky 05 April 2009 09:15:57PM 0 points [-]

Who on Earth do you think ought to know that?

Comment author: JulianMorrison 05 April 2009 09:17:34PM 1 point [-]

Shane Legg, who was at the London LW meetup.

Comment deleted 05 April 2009 10:13:19PM [-]
Comment author: JulianMorrison 05 April 2009 11:29:25PM *  2 points [-]

From what he explained, the job of reverse engineering a biological mind is looking much easier than expected - there's no need to grovel around at the level of single neurons, since the functional units are bunches of neurons, and they implement algorithms that are recognizable from conventional AI.

Comment author: whpearson 05 April 2009 10:22:07PM *  1 point [-]

I think it would make an interesting group effort to try to estimate the pace of neuroscience research, to get some idea of how soon we can expect neuro-inspired AI.

I'm going to try to figure out the number of researchers working on the algorithms for long-term changes to neural organisation (LTP, neuroplasticity, and neurogenesis). I get the feeling it is a lot smaller than the number working on short-term functionality, but I'm not an expert and not immersed in the field.

Comment author: JulianMorrison 05 April 2009 07:03:09PM 1 point [-]

Are you sure you're not playing "a deeply wise person doesn't pick sides, but scolds both for fighting"?