Apologies for the provocative phrasing--I was (inadvertently) asking for a heated reply...
But to clarify the point in light of your response (which no doubt will get another heated reply, though I'm honestly trying to convey the point w/out provoking...):
A pile of radioactive material is not a good analogy here. But I think its appearance here is a good illustration of the very thing I'm hoping to convey: There are a lot of (vague, wrong) theories of AGI which map well to the radioactive pile analogy. Just put enough of the ingredients together in a pile, ...
"but almost no one anywhere has ever heard of AI friendliness"
Ok, if this is your vantage point, I understand better. I must hang in the wrong circles 'cause I meet far more FAI than AGI folks.
Yes, I understand that. But it matters a lot what premises underlie AGI and how self-modification is going to impact it. The stronger fast-FOOM arguments spring from older conceptions of AGI. Imo, a better understanding of AGI does not support it.
Thanks much for the interesting conversation, I think I am expired.
See reply below to drethlin.
Sigh.
Ok, I see the problem with this discussion, and I see no solution. If you understood AGI better, you would understand why your reply is like telling me I shouldn't play with electricity because Zeus will get angry and punish the village. But that very concern prevents you from understanding AGI better, so we are at an impasse.
It makes me sad, because with the pervasiveness of this superstition, we've lost enough minds from our side that the military will probably beat us to it.
Just to follow up, I'm seeing nothing new in IEM (or if it's there it's too buried in "hear me think" to find--Eliezer really would benefit from pruning down to essentials). Most of it concerns the point where AGI approaches or exceeds human intelligence. There's very little to support concern for the long ramp up to that point (other than some matter of genetic programming, which I haven't the time to address here). I could go on rather at length in rebuttal of the post-human-intelligence FOOM theory (not discounting it entirely, but putting...
Well, then, I hope it's someone like you or me that's at the button. But that's not going to be the case if we're working on FAI instead of AGI, is it...
Let's imagine you solve FAI tomorrow, but not AGI. (I see it as highly improbable that anyone will meaningfully solve FAI before solving AGI, but let's explore that optimistic scenario.) Meanwhile, various folks and institutions out there are ahead of you in AGI research by however much time you've spent on FAI. At least one of them won't care about FAI.
I have a hard time imagining any outcome from that scenario that doesn't involve you wishing you'd been working on AGI and gotten there first. How do you imagine the outcome?
Which do you think is more likely: That you will die of old age, or of unfriendly-AI? (Serious question, genuinely curious.)
Do you think people who can't implement AGI can solve FAI?
FOOM on the order of seconds can be strongly argued against (Eli does a fair job of it himself, but likes to leave everything open so he can cite himself later no matter what happens), and if it's weeks/months/years, then hit Ctrl-C. Seriously. If your computer is trying to take over the world and is likely to succeed in the next few weeks, then kill -9 the thing. I realize that at that point you've likely got other AIs to worry about, but at least you're in a position to understand it well enough to have some hope at making yours friendly and re-act...
"The Intelligence Explosion Thesis says that an AI can potentially grow in capability on a timescale that seems fast relative to human experience due to recursive self-improvement. This in turn implies that strategies which rely on humans reacting to and restraining or punishing AIs are unlikely to be successful in the long run, and that what the first strongly self-improving AI prefers can end up mostly determining the final outcomes for Earth-originating intelligent life. " -- Eliezer Yudkowsky, IEM.
I.e., Eliezer thinks it'll take less time than it takes you to hit Ctrl-C. (Granted it takes Eliezer a whole paragraph to say what the essay captures in a phrase, but I digress.)
"I mean, no one thinks it'll take less time than it takes you to hit Ctrl-C" -- by the way, are you sure about this? Would it be more accurate to say "before you realize you should hit control-C"? Because it seems to me, if it aint goin' FOOM before you realize you should hit control-C (and do so) then.... it aint goin' FOOM.
Ah, thanks, I better understand your position now. I will endeavor to read IEM (if it isn't too stocked with false presuppositions from the get-go).
I agree the essay did not endeavor to disprove FOOM, but let's say it's just wrong on that claim, and that FOOM is really a possibility -- then are you saying you'd rather let the military AI go FOOM than something homebrewed? Or are you claiming that it's possible to rein in military efforts in this direction (world round)? Or give me a third option if neither of those applies.
Trying to understand here. What's the strawman in this case?
Can you point me to an essay that addresses the points in this one?
Clarify?
You should check out project Sifter, which is essentially what you are describing, started in San Diego in the late '90s but now worldwide including NY. http://sifter.org
It is fairly quiet lately due to lack of "heroes" but it only takes one to revive an area and the membership is there (and fairly easy to grow).
(Disclaimer: it's my site. But it's ad-free, no fee -- I just maintain it for the benefit of the members.)
Open to collaborations if you want to merge efforts. I have some solutions brewing for the heroes problem, and other ideas in the...
I think it will be incidental to AGI. That is, by the time you are approaching human-level AGI it will be essentially obvious (to the sort of person who groks human-level AGI in the first place). Motivation (as a component of the process of thinking) is integral to AGI, not some extra thing only humans and animals happen to have. Motivation needs to be grokked before you will have AGI in the first place. Human motivational structure is quite complex, with far more ulterior motives (clan affiliation, reproduction, etc) than straightforward ones. AGIs need...