JulianMorrison comments on Why Support the Underdog? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (86)
OK, what's YOUR position, and how much do you know? Then Yvain can dump historical facts on you, and we'll see how far you shift and in what direction.
This is indeed a pretty utilitarian position. I think the objection you're likely to run into is that by evaluating the situation purely in terms of the present, it sweeps historic precedents under the rug.
Put another way, the "this conflict represents a risk, let's just cool it" argument can just as easily be made by any aggressor directly after initiating the conflict.
Yup. If you don't punish aggressors and just demand "peace at any price" once the war starts, that peace sure won't last long.
(Yesterday I heard someone who ought to know say AI at human level, and not provably friendly, in 16 years. Yes, my jaw hit the floor too.)
I hadn't thought of the "park it, we have bigger problems" or "park it, Omega will fix it" approach, but it might make sense. That raises a question, and I hope it's not treading too far into off-LW-topic territory: to what extent ought a reasoning person act as if they expected gradual, incremental change in the status quo, and to what extent ought their planning be dominated by the expectation of large disruptions in the near future?
The question I was struggling to articulate was more like: should I give credence to my own beliefs? How much? And how to deal with instinct that doesn't want to put AI and postmen in the same category of "real"?
Who on Earth do you think ought to know that?
Shane Legg, who was at London LW meetup.
From what he explained, the job of reverse-engineering a biological mind is looking much easier than expected - there's no need to grovel around at the level of single neurons, since the functional units are bunches of neurons, and they implement algorithms that are recognizable from conventional AI.
I think it would make an interesting group effort to try to estimate the speed of neuroscience research, to get some idea of how soon we can expect neuro-inspired AI.
I'm going to try to figure out the number of researchers working on the algorithms for long-term changes to neural organisation (LTP, neuroplasticity, and neurogenesis). I get the feeling it is a lot smaller than the number working on short-term functionality, but I'm not an expert and not submerged in the field.
Are you sure you're not playing "a deeply wise person doesn't pick sides, but scolds both for fighting"?