Einstein's four fundamental papers of 1905 were inspired by Henri Poincaré's statement of three open problems in science:
Having earlier failed to get his doctorate in physics, [Einstein] had temporarily given up on the idea of an academic career, telling a friend that "the whole comedy has become boring." [But he] had recently read a book by Henri Poincaré, a French mathematician of enormous reputation, which identified three fundamental unsolved problems in science. The first concerned the 'photoelectric effect': how did ultraviolet light knock electrons off the surface of a piece of metal? The second concerned 'Brownian motion': why did pollen particles suspended in water move about in a random zigzag pattern? The third concerned the 'luminiferous ether' that was supposed to fill all of space and serve as the medium through which light waves moved, the way sound waves move through air, or ocean waves through water: why had experiments failed to detect the earth’s motion through this ether? Each of these problems had the potential to reveal what Einstein held to be the underlying simplicity of nature. Working alone, apart from the scientific community, the unknown junior clerk rapidly managed to dispatch all three. His solutions were presented in four papers, written in the months of March, April, May, and June of 1905.
A few years earlier, in 1900, David Hilbert had published a list of 23 open problems in mathematics, about half of which were solved during the 20th century.
More recently, Timothy Gowers has used his blog to promote open problems in mathematics that might be solved collaboratively online. After just seven weeks, the first such problem was "probably solved," resulting in papers published under the pseudonym 'D.H.J. Polymath.'
The Clay Mathematics Institute offers a $1 million prize for the solution to any of seven particularly difficult problems in mathematics. One of these, the Poincaré conjecture, has now been solved.
In 2000, researchers defined 14 open problems in artificial life, and their paper continues to guide research in that field.
And of course there are many more open problems. Many more.
One problem with Friendly AI research is that even those who could work on the project often don't have a clear picture of what the open problems are and how they interact with each other. There are a few papers that introduce readers to the problem space, but more work could be done to (1) define each open problem with some precision, (2) discuss how each open problem interacts with other open problems, (3) point readers to existing research on the problem, and (4) suggest directions for future research. Such an effort might even clarify the problem space for those who think they understand it.
(This is, in fact, where my metaethics sequence is headed.)
Defining a problem is the first step toward solving it.
Defining a problem publicly can bring it to the attention of intelligent minds who may be able to make progress on it.
Defining a problem publicly and offering a reward for its solution can motivate intelligent minds to work on that problem instead of some other problem.
A related post, 'Friendly AI Research and Taskification':
If the nature of ethical properties, statements, attitudes, and judgments is ultimately grounded in human brains, it might be possible to derive mathematical models of moral terms or judgments from brain data. The problem with arriving at the meaning of morality solely by means of contemplation is that you risk introducing new meanings based on higher-order cognition and intuitions, rather than figuring out what humans as a whole mean by morality.
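As a rough illustration of what "deriving mathematical models of moral judgments from brain data" might look like in miniature, here is a toy sketch. Nothing in it comes from the quoted proposal: the "brain features" are random placeholder numbers, the labels are synthetic, and logistic regression is just one arbitrary model choice.

```python
# Toy sketch: fit a statistical model mapping brain-activity features to
# moral judgments, rather than defining moral terms by contemplation alone.
# All data here is a synthetic placeholder, not real neuroimaging data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))           # stand-in for per-trial brain features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in labels: 1 = "wrong", 0 = "permissible"

model = LogisticRegression().fit(X, y)   # a crude "mathematical model of moral judgment"
print(model.score(X, y))                 # in-sample accuracy of the toy model
```

A real attempt would face exactly the hard problems the quote gestures at: choosing whose judgments to record, which stimuli to use, and how to validate that the fitted model captures what humans as a whole mean by moral terms.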
Two possible steps towards friendly AI/CEV (just some quick ideas):
1. We want the AGI (CEV) to extrapolate our volition in a specific, ethical way. That is, it shouldn't, for example, create models of humans and hurt them just to figure out what we dislike. But in the end it won't be enough to write blog posts in English. We might have to put real people into brain scanners and derive mathematically precise thresholds for states like general indisposition and unethical behavior. Such models could then be implemented in the utility function of an AGI, whereas blog posts written in natural language can't be (a toy sketch of this idea follows the list below).
2. We don't know whether CEV itself is wished for and considered ethical by most humans. If you do not assume that all humans are alike, what makes you think that your personal solution, your answer to those questions, will be universally accepted? A rich white atheist male living in a Western country who is interested in topics like philosophy and mathematics does not seem like someone who can speak for the rest of the world. If we are very concerned with the ethics of CEV in and of itself, we might have to find a way to execute an approximation of CEV before AGI is invented. We might need massive, large-scale social experiments and surveys to see whether something like CEV is even desirable. Writing a few vague blog posts about it doesn't seem to give us the certainty we need before altering the universe irrevocably.
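To make step 1 slightly more concrete, here is a toy sketch of an empirically derived threshold folded into a utility function. The threshold value, the penalty size, and every name below are hypothetical illustrations, not a real AGI design or anything proposed in the quote.

```python
# Toy sketch of step 1: a scalar "indisposition" threshold, imagined as fit
# from brain-scanner data, built into an agent's utility function. All
# values and names here are hypothetical placeholders.
INDISPOSITION_THRESHOLD = 0.7   # pretend this number was derived empirically

def utility(state, base_utility, indisposition):
    """Score a state, heavily penalizing it when the measured indisposition
    of affected humans crosses the threshold."""
    score = base_utility(state)
    if indisposition(state) > INDISPOSITION_THRESHOLD:
        score -= 1000.0         # large penalty: treat such states as off-limits
    return score

# Usage with stand-in measurement functions:
print(utility("example", base_utility=lambda s: 10.0,
              indisposition=lambda s: 0.9))   # -> -990.0
```

The point of the sketch is only that a number like 0.7 can be written into a utility function, while a blog post in English cannot.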
If CEV encounters a large proportion of the population that wishes it were not run, and will continue to wish so after extrapolation, it simply stops and reports that fact. That's one of the points of the method: it is, in and of itself, a large-scale social survey of present and future humanity. And if the groups that wouldn't want it run now would want it run after extrapolation, I'm fine with running it against their present wishes, and I hope that if I were part of a group in similar circumstances, someone else would do the same. "Past me" is an idiot, I'm not much better, and "future me" is hopefully an even bigger improvement, while "desired future me" almost certainly is.
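The stopping rule described in this reply can be stated in a few lines. The 0.5 cutoff and both input fractions are illustrative assumptions; nothing here is part of any actual CEV specification.

```python
# Minimal sketch of the reply's rule: only opposition *after* extrapolation
# matters, and a sufficiently opposed extrapolated population halts the
# process entirely. The cutoff is an assumed placeholder.
def cev_decision(opposed_now: float, opposed_after_extrapolation: float,
                 cutoff: float = 0.5) -> str:
    # opposed_now is deliberately ignored: the reply argues that present
    # wishes don't decide the matter; extrapolated wishes do.
    if opposed_after_extrapolation > cutoff:
        return "stop and report"   # a large extrapolated majority still objects
    return "run"                   # present objections alone do not halt it

print(cev_decision(opposed_now=0.6, opposed_after_extrapolation=0.1))  # -> run
```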