To construct a friendly AI, you need to be able to make vague concepts crystal clear, cutting reality at the joints when those joints are obscure and fractal - and then implement a system that makes that cut.
There are lots of suggestions on how to do this, and a lot of work in the area. But having been over the same turf again and again, it's possible we've got a bit stuck in a rut. So to generate new suggestions, I'm proposing that we look at a vaguely analogous but distinctly different question: how would you ban porn?
Suppose you're put in charge of some government and/or legal system, and you need to ban pornography, and see that the ban is implemented. Pornography is the problem, not eroticism. So a lonely lower-class guy wanking off to "Fuck Slaves of the Caribbean XIV" in a Pussycat Theatre is completely off. But a middle-class couple experiencing a delicious frisson when they see a nude version of "Pirates of Penzance" at the Met is perfectly fine - commendable, even.
The distinction between the two cases is certainly not easy to spell out, and many are reduced to saying the equivalent of "I know it when I see it" when defining pornography. In terms of AI, this is equivalent to "value loading": refining the AI's values through interactions with human decision makers, who answer questions about edge cases and examples and serve as "learned judges" for the AI's concepts. But suppose that approach were not available to you - what methods would you use to distinguish between pornography and eroticism, and ban one but not the other? Could you make the criteria sufficiently clear that a scriptwriter would know exactly what to cut from or add to a movie in order to move it from one category to the other? What if the nude "Pirates of Penzance" was at a Pussycat Theatre and "Fuck Slaves of the Caribbean XIV" was at the Met?
To get maximal creativity, it's best to ignore the ultimate aim of the exercise (to find inspiration for methods that could be adapted to AI) and just focus on the problem itself. Is it even possible to get a reasonable solution to this question - a question much simpler than designing a FAI?
Strongly disagree. The whole point of Bayesian reasoning is that it allows us to deal with uncertainty. And one huge source of uncertainty is that we don't have precise understandings of the concepts we use. When we first learn a new concept, we have a ton of uncertainty about its location in thingspace. As we collect more data (either through direct observation or indirectly through communication with other humans), we are able to decrease that uncertainty, but it never goes away completely. An AI which uses human concepts will have to be able to deal with concept-uncertainty and the complications that arise as a result.
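The "uncertainty shrinks with data but never reaches zero" point can be made concrete with a toy Beta-Bernoulli model. This is my own illustration with made-up numbers, not anything from the post: treat "where the concept boundary sits" as a single parameter, and watch the posterior narrow as labelled examples come in.

```python
# Toy sketch (illustrative only): model uncertainty about a concept boundary
# as a Beta posterior over a parameter, updated from labelled edge cases.
# More data shrinks the posterior's spread, but it never collapses to zero.
from math import sqrt

def beta_posterior(successes, failures, prior_a=1.0, prior_b=1.0):
    """Mean and std of Beta(prior_a + successes, prior_b + failures)."""
    a = prior_a + successes
    b = prior_b + failures
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

# Hypothetical label counts: 4 judgments vs. 400 judgments on edge cases.
mean_few, std_few = beta_posterior(3, 1)
mean_many, std_many = beta_posterior(300, 100)
print(std_few)   # wide posterior: few examples seen
print(std_many)  # much narrower - but still strictly positive
```

The same pattern holds for richer concept models (e.g. a posterior over classifier weights): each observation reduces concept-uncertainty without ever eliminating it, which is exactly why an AI using human concepts must carry that residual uncertainty around rather than committing to one crisp carving.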
The fact that humans can't always agree with each other on what constitutes porn vs. erotica demonstrates that we don't all carve reality up in the same places (and therefore there's no "objective" definition of porn). The fact that individual humans often have trouble classifying edge cases demonstrates that even when you look at a single person's concept, it will still contain some uncertainty. The more we discuss and negotiate the meanings of concepts, the less fuzzy the boundaries will become, but we can't remove the fuzziness completely. We can write out a legal definition of porn, but it won't necessarily correspond to the black-box classifiers that real people are using. And concepts change - what we think of as porn might be classified differently in 100 years. An AI can't just find a single carving of reality and stick with it; the AI needs to adapt its knowledge as the concepts mutate.
So I'm pretty sure that what you're asking is impossible. The concept-boundaries in thingspace remain fuzzy until humans negotiate them by discussing specific edge cases. (And even then, they are still fuzzy, just slightly less so.) So there's no way to find the concept boundaries without asking people about it; it's the interaction between human decision makers that defines the concept in the first place.
Related paper. Also sections 1 and 2 of this paper.