Last weekend, while this year's Singularity Summit took place in San Francisco, I was turning 40 in my Australian obscurity. 40 is old enough to be thinking that I should just pick a SENS research theme and work on it, and also move to wherever in the world is most likely to have the best future biomedicine (that might be Boston). But at least since the late 1990s, when Eliezer first showed up, I have perceived that superintelligence trumps life extension as a futurist issue. And since 2006, when I first grasped how something like CEV could be an answer to the problem of superintelligence, I've had it before me as a model of how the future could and should play out. I have "contrarian" ideas about how consciousness works, but they do not contradict any of the essential notions of seed AI and friendly AI; they only imply that those notions would need to be adjusted and fitted to the true ontology, whatever that may be.
So I think this is what I should be working on - not just the ontological subproblem, but all aspects of the problem. The question is how to go about it. At the moment, I'm working on a lengthy statement of how I think a Friendly Singularity could be achieved - a much better version of my top-level posts here, along with new material. But the main "methodological" problem is economic and perhaps social: what I can live on while I do this, and where in the world and in society I should situate myself for maximum insight and productivity. That's really what this post is about.
The obvious answer is, apply to SIAI. I'm not averse to the idea, and on occasion I raise the possibility with them, but I have two reasons for hesitation.
The first is the problem of consciousness. I often talk about this in terms of vaguely specified ideas about quantum entanglement in the brain, but the really important part is the radical disjunction between the physical ontology of the natural sciences and the manifest nature of consciousness. I cannot emphasize enough that this is a huge gaping hole in the scientific understanding of the world, the equal of any gap in the scientific worldview that came before it, and that the standard "scientific" way of thinking about it is a form of property dualism, even if people won't admit this to themselves. All the quantum stuff you hear from me is just an idea about how to restore a type of monism. I actually think it's a conservative solution to a very big problem, but to believe that you would have to agree with me that the other solutions on offer can't work (as well as understanding just what it is that I propose instead).
This first reason for hesitation splits into two sub-reasons. First, I'm not sure that the SIAI intellectual environment can accommodate my approach. Second, the problem of consciousness is of course not specific to SIAI; it is a symptom of the overall scientific zeitgeist, and maybe I should be working there, in the field of consciousness studies. If expert opinion changes, SIAI will surely notice, so perhaps I should be trying to convince the neuroscientists, not the Friendly AI researchers.
The second top-level reason for hesitation is simply that SIAI doesn't have much money. If I can accomplish part of the shared agenda while supported by other means, that would be better. Mostly I think in terms of doing a PhD. A few years back I almost started one with Ben Goertzel as co-supervisor, which would have looked at implementing a CEV-like process in a toy physical model, but that fell through at my end. Lately I'm looking around again. In Australia we have David Chalmers and Marcus Hutter. I know Chalmers from my quantum-mind days in Arizona ten years ago, and I met with Hutter recently. The strong interdisciplinarity of my real agenda makes it difficult to see where I could work directly on the central task, but also implies that there are many fields (cognitive neuroscience, decision theory, various quantum topics) where I might be able to limp along with partial support from an institution.
So that's the situation. Are there any other ideas? (Private communications can go to mporter at gmail.)
My main criterion for whether a computational property is objectively present in a physical system, or is a matter of interpretation, is whether it involves semantics. Pure physics only gives you state machines with no semantics. In this case, I think quicksort comes quite close to being definable at the state-machine level. "List" sounds representational, because usually it means "list of items represented by computational tokens", but if you think of it simply as a set of physical states with an ordering produced by the intrinsic physical dynamics, then "sorting a list" can refer to a meta-dynamics which rearranges that ordering, and "current pivot position" can be an objective and strictly physical property.
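The claim that "sorting a list" and "current pivot position" can be read as purely physical, state-machine-level properties can be made concrete. The sketch below (names like `quicksort_states` are my own illustration, not anything from the original discussion) runs an ordinary in-place quicksort but yields, at each step, the current arrangement of elements together with the index currently acting as the pivot - so the whole computation is just a sequence of (ordering, pivot-position) states, with no representational semantics required:

```python
def quicksort_states(items):
    """In-place quicksort (Lomuto partition) that yields each machine
    state as (snapshot of the current ordering, current pivot position).

    Each yielded pair is one state of the 'state machine': an arrangement
    of elements plus the index the dynamics is treating as the pivot.
    """
    def sort(lo, hi):
        if lo >= hi:
            return
        pivot = items[hi]      # last element serves as the pivot
        boundary = lo          # items[lo:boundary] are < pivot
        for i in range(lo, hi):
            if items[i] < pivot:
                items[i], items[boundary] = items[boundary], items[i]
                boundary += 1
            yield (list(items), hi)    # state: ordering + pivot position
        items[boundary], items[hi] = items[hi], items[boundary]
        yield (list(items), boundary)  # pivot now at its final slot
        yield from sort(lo, boundary - 1)
        yield from sort(boundary + 1, hi)

    yield from sort(0, len(items) - 1)


data = [3, 1, 2]
trace = list(quicksort_states(data))
# After exhausting the generator, data is sorted, and every entry in
# trace is an (ordering, pivot-index) pair - the full state trajectory.
```

The point of exposing the trace is that nothing in it is "about" anything: each state is just a physical-style configuration (an ordering) plus one distinguished position, and the sort is a meta-dynamics over those configurations.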
The property dualism I'm talking about occurs when basic sensory qualities like color are identified with such computational properties. Either you end up saying "seeing the color is how it feels" - and "feeling" is the extra, dual property - or you say there's no "feeling" at all - which is denial that consciousness exists. It would be better to be able to assert identity, but then the elements of a conscious experience can't really be coarse-grained states of neuronal ensembles, etc - that would restore the dualism.
We need an ontology which contains "experiences" and "appearances" (for these things undoubtedly exist), which doesn't falsify their character, and which puts them in interaction with the atomic aggregates we know as neurons, which presumably also exist. Substance dualism was the classic way to do this - the soul interacting with the pineal gland, as in Descartes. The baroque quantum monadology I've hinted at is the only way I know to introduce consciousness into physical causality while avoiding both substance dualism and property dualism. Maybe there's some other way to do it, but it's going to be even weirder, and it seems like it should still involve what we would now call quantum effects, because the classical ontology just does not contain minds.
Up to a point, I identify with your desire to solve the problem "mathematically". Husserl, the phenomenologist, said that distinct ontological categories are to be understood by different "eidetic sciences". Mathematics, logic, computer science, theoretical physics, and maybe a few other disciplines like decision theory, probability theory, and neoclassical economics, are all eidetic. Husserl's proposition was that there should also be eidetic sciences for all the problematic aspects of consciousness. Phenomenology itself was supposed to be the eidetic science of consciousness, as well as the wellspring of the other eidetic sciences, because all ontology derives from phenomenology somehow, and the eidetic sciences study "regional ontologies", aspects of being.
The idea is not that everything about reality is to be discovered a priori and through introspection. Facts still have to come through experience. But experience takes a variety of forms: along with sensory experience, there's logical experience, reflective experience, and perhaps others. Of these, reflective experience is the essence of phenomenology, and the key to developing new eidetic sciences; that is, to developing the concepts and methods appropriate to the ontological aspects that remain untheorized, undertheorized, or badly theorized. We need new ideas in at least two areas: the description of consciousness, and the ontology of the conscious object. We need new and better ideas about what sort of a thing could "be conscious", "have experiences" like the ones we have, and fit into a larger causal matrix. And then we need to rethink physical ontology so that it contains such things. Right now, as I keep asserting, we are stuck with property dualism because the things of physics, in any combination, are fundamentally unlike the thing that is conscious, and so an assertion of identity is not possible.
For more detail, see everything else I've written on this site, or wait for the promised paper. :-)