It's a sufficiently amorphous proposal to shroud many AGI projects without essentially changing anything about them, including project members' understanding of AI risk. So on net, this looks to me like a potentially negative development.
Is anyone surprised by this? A few weeks ago I wrote to cousin_it during a chat session:
Wei Dai: FAI seems to have enough momentum now that many future AI projects will at least claim to take Friendliness seriously
Wei Dai: or another word, like machine ethics
It's one of those details that is obviously important for memetic strategies to account for but will still get missed by nine out of ten naive intuitive-implicit models. There are an infinite number of ways for policy-centered thinking to kill a mind, both figuratively and literally, directly and indirectly.
Link: Ben Goertzel dismisses Yudkowsky's FAI and proposes his own solution: Nanny-AI
Some relevant quotes:
Apparently Goertzel doesn't think that building a Nanny-AI with the above-mentioned qualities is almost as difficult as creating an FAI à la Yudkowsky.
But SIAI believes that once you can create a Nanny-AI, you can (probably) create a full-blown FAI as well.
Or am I mistaken?