I get that the shut-up-and-multiply calculation makes it worth trying even if the odds are really low, but my emotions don't respond very well to low odds.
I can override that to some degree, but at the end of the day it would be easier if the odds were actually pretty decent.
Comment or PM me; it's not something I've heard much about.
"Friendliness" is a rag-bag of different things -- benevolence, absence of malevolence, the ability to control a system whether it's benevolent or malevolent , and so on. So the question is somewhat ill-posed.
As far as control goes, all AI projects involve an element of control, because if you can't get the AI to do what you want, it is useless. So the idea that AI and FAI are disjoint is wrong.