Thank you. I didn't phrase my question very well, but what I was trying to get at was whether making a friendly AGI might be, by some measure, orders of magnitude more difficult than making a non-friendly one.
Yes, it is orders of magnitude more difficult. If we took a hypothetical FAI-capable team, how much less time would it take them to make a UFAI than a FAI, assuming similar levels of effort and starting at today's knowledge levels?
One-tenth the time seems like a good estimate.
If you want people to ask you stuff, reply to this post with a comment to that effect.
More precisely, you can ask any participating LessWronger anything that falls within the category of questions they have indicated they are willing to answer.
If you want to talk about this post, you can reply to my comment below that says "Discussion of this post goes here." Or not.