Last weekend, while this year's Singularity Summit took place in San Francisco, I was turning 40 in my Australian obscurity. 40 is old enough to be thinking that I should just pick a SENS research theme and work on it, and also move to wherever in the world is most likely to have the best future biomedicine (that might be Boston). But at least since the late 1990s, when Eliezer first showed up, I have perceived that superintelligence trumps life extension as a futurist issue. And since 2006, when I first grasped how something like CEV could be an answer to the problem of superintelligence, I've had it before me as a model of how the future could and should play out. I have "contrarian" ideas about how consciousness works, but they do not contradict any of the essential notions of seed AI and Friendly AI; they only imply that those notions would need to be adjusted and fitted to the true ontology, whatever that may be.
So I think this is what I should be working on - not just the ontological subproblem, but all aspects of the problem. The question is how to go about it. At the moment, I'm working on a lengthy statement of how I think a Friendly Singularity could be achieved - a much better version of my top-level posts here, along with new material. But the main "methodological" problem is economic and perhaps social: what can I live on while I do this, and where in the world and in society should I situate myself for maximum insight and productivity? That's really what this post is about.
The obvious answer is, apply to SIAI. I'm not averse to the idea, and on occasion I raise the possibility with them, but I have two reasons for hesitation.
The first is the problem of consciousness. I often talk about this in terms of vaguely specified ideas about quantum entanglement in the brain, but the really important part is the radical disjunction between the physical ontology of the natural sciences and the manifest nature of consciousness. I cannot emphasize enough that this is a huge gaping hole in the scientific understanding of the world, the equal of any gap in the scientific worldview that came before it, and that the standard "scientific" way of thinking about it is a form of property dualism, even if people won't admit this to themselves. All the quantum stuff you hear from me is just an idea about how to restore a type of monism. I actually think it's a conservative solution to a very big problem, but to believe that, you would have to agree with me that the other solutions on offer can't work (and understand just what it is that I propose instead).
This "reason for not applying to SIAI" leads to two sub-reasons. First, I'm not sure that the SIAI intellectual environment can accommodate my approach. Second, the problem of consciousness is of course not specific to SIAI; it is a symptom of the overall scientific zeitgeist, and maybe I should be working there, in the field of consciousness studies. If expert opinion changes, SIAI will surely notice, and so I should be trying to convince the neuroscientists, not the Friendly AI researchers.
The second top-level reason for hesitation is simply that SIAI doesn't have much money. If I can accomplish part of the shared agenda while supported by other means, that would be better. Mostly I think in terms of doing a PhD. A few years back I almost started one with Ben Goertzel as co-supervisor, which would have looked at implementing a CEV-like process in a toy physical model, but that fell through at my end. Lately I'm looking around again. In Australia we have David Chalmers and Marcus Hutter. I know Chalmers from my quantum-mind days in Arizona ten years ago, and I met with Hutter recently. The strong interdisciplinarity of my real agenda makes it difficult to see where I could work directly on the central task, but also implies that there are many fields (cognitive neuroscience, decision theory, various quantum topics) where I might be able to limp along with partial support from an institution.
So that's the situation. Are there any other ideas? (Private communications can go to mporter at gmail.)
The expected utility is the sum of the utilities weighted by their probabilities. The probabilities sum to 1, and since the utilities are all 1, the weighted sum is also 1. Therefore every action scores 1. See Expected utility hypothesis.
Thanks. (Edit: My intended meaning doesn't make sense, since the number of possible outcomes doesn't change, only their probabilities do. Still a useful heuristic, but tying it to utility is incorrect.)
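To make the arithmetic in the exchange above explicit (the notation here is mine, not the commenters'): for an action $a$ whose possible outcomes are indexed by $i$, with probabilities $p_i$ and utilities $u_i$,

$$\mathrm{EU}(a) = \sum_i p_i u_i = \sum_i p_i \cdot 1 = \sum_i p_i = 1,$$

since the probabilities $p_i$ sum to 1. If every outcome is assigned the same utility, every action has the same expected utility, however the probabilities are distributed.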