Venu: Given the tiny minority of AIs that will FOOM at all, what is the probability that an AI which has been designed for a purpose other than FOOMing, will instead FOOM?
It seems to me like a pretty small probability that an AI not designed to self-improve will be the first AI that goes FOOM, when there are already many parties known to me who would like to deliberately cause such an event.
Why not anti-predict that no AIs will FOOM at all?
A reasonable question from the standpoint of antiprediction; here you would have to refer back to the articles on cascades, recursion, the article on hard takeoff, etcetera.
Re Tim's "suddenly develop the ability to reprogram and improve themselves all-at-once" - the issue is whether something happens efficiently enough to be local, or fast enough to accumulate an advantage between the leading Friendly AI and the leading unFriendly AI, not whether things can happen with zero resources or instantaneously. But the former position seems to be routinely distorted into the latter straw position.
Also, whoever saves a person to live another fifty years, it is as if they had saved fifty people to live one more year. Whoever saves someone who very much enjoys life, it is as if they saved many people who are not sure they really want to live. And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.
Which is why I'm still puzzled by a simplistic moral dilemma that just won't go away for me: are we morally obligated to have children, and as many as we can? Setting aside using that energy or money to more efficiently "save" lives, of course. It seems to me we should encourage people to have children - a common thing that many more people will actually do than donate philanthropically - in addition to other philanthropic encouragements.