Previously, I talked about the mystery of pain and pleasure, and how little we know about what sorts of arrangements of particles intrinsically produce them.
Up now: should FAI researchers care about this topic? Is research into the information theory of pain and pleasure relevant for FAI? I believe so! Here are the top reasons I came up with while thinking it through.
An important caveat: much depends on whether pain and pleasure (collectively, 'valence') are simple or complex properties of conscious systems. If they're on the complex end of the spectrum, many points on this list may not be terribly relevant for the foreseeable future. On the other hand, if they have a relatively small 'Kolmogorov complexity' (e.g., if a 'hashing function' to derive valence could fit on a t-shirt), crisp knowledge of valence may be possible sooner rather than later, and could have some immediate relevance to current FAI research directions.
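To make the 'simple vs. complex' distinction a bit more concrete, here is a purely illustrative Python stub of what a low-complexity valence function might look like as an interface. Everything in it is hypothetical: the function name, the choice of a connectivity matrix as input, and the toy symmetry heuristic are placeholders, not claims about how valence actually works. The point is only that 'small Kolmogorov complexity' would mean the true function, whatever it is, could be roughly this short.

```python
import numpy as np

def valence(connectivity: np.ndarray) -> float:
    """Hypothetical: map a formal description of a conscious system's
    state (here, a toy connectivity matrix) to a scalar in [-1, 1].

    Nobody currently knows what the real function -- if one exists --
    looks like; this stub only illustrates what a 'fits on a t-shirt'
    answer would mean.
    """
    # Toy heuristic, chosen only because it is short: score how close
    # the matrix is to being symmetric, rescaled to [-1, 1].
    asymmetry = np.linalg.norm(connectivity - connectivity.T)
    scale = np.linalg.norm(connectivity) + 1e-9
    return float(1.0 - 2.0 * min(asymmetry / scale, 1.0))
```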
An additional caveat: none of these ideas is a grand, sweeping panacea, nor is any of them intended to address deep metaphysical questions or reinvent the wheel. Instead, they're meant to help resolve empirical ambiguities and modestly enlarge the current FAI toolbox.
1. Valence research could simplify the Value Problem and the Value Loading Problem. If pleasure/happiness is an important core part of what humanity values, or should value, having the exact information-theoretic definition of it on hand could directly and drastically simplify the problems of what to maximize, and how to load this value into an AGI.
2. Valence research could form the basis for a well-defined 'sanity check' on AGI behavior. Even if pleasure isn't a core terminal value for humans, it could still be used as a useful indirect heuristic for detecting value destruction. That is, if we're considering having an AGI carry out some intervention, we could ask it what the expected effect is on whatever pattern precisely corresponds to pleasure/happiness. If there'd be a lot less of that pattern afterward, the intervention is probably a bad idea.
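As a toy illustration of how such a sanity check might be wired in, here is a minimal Python sketch. It assumes, hypothetically, that we already have a total-valence measure (something like the stub above) and a model that can predict post-intervention world states; `predict_world`, `measure_total_valence`, and the 5% threshold are all stand-ins, and building them is of course the hard part.

```python
def intervention_sanity_check(predict_world, measure_total_valence,
                              baseline_state, intervention, max_loss=0.05):
    """Flag an intervention if it is predicted to destroy too much of
    whatever pattern corresponds to pleasure/happiness.

    All arguments are hypothetical stand-ins for capabilities we do
    not yet have; this only shows where a crisp valence measure would
    slot into a safety check.
    """
    before = measure_total_valence(baseline_state)
    after = measure_total_valence(predict_world(baseline_state, intervention))
    if before > 0 and (before - after) / before > max_loss:
        return False, "predicted large loss of positive valence"
    return True, "passes the valence sanity check"
```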
3. Valence research could help us be humane to AGIs and WBEs. There's going to be a lot of experimentation involving intelligent systems, and although many of these systems won't be 'sentient' in the way humans are, some system types will approach or even surpass human capacity for suffering. Unfortunately, many of these early systems won't work well; i.e., they'll be insane. It would be great if we had a good way to detect profound suffering in such cases and halt the system.
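Here's a minimal sketch of what such a safeguard might look like in practice, assuming (hypothetically) a working valence estimator and a system exposing `step()`/`halt()` controls; the threshold value and every interface here are invented for illustration.

```python
import logging

SUFFERING_HALT_THRESHOLD = -0.8   # hypothetical cutoff on a [-1, 1] valence scale

def run_with_suffering_monitor(system, estimate_valence, max_steps=10_000):
    """Run an experimental system, but halt it if the (hypothetical)
    valence estimator reports profound suffering.

    `system.step()`, `system.halt()`, and `estimate_valence` are assumed
    interfaces, not real APIs.
    """
    for step in range(max_steps):
        system.step()
        v = estimate_valence(system)
        if v < SUFFERING_HALT_THRESHOLD:
            logging.warning("Halting at step %d: estimated valence %.2f", step, v)
            system.halt()
            return False
    return True
```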
4. Valence research could help us prevent Mind Crimes. Nick Bostrom suggests in Superintelligence that AGIs might simulate virtual humans to reverse-engineer human preferences, but that these virtual humans might be sufficiently high-fidelity that they themselves could meaningfully suffer. We can tell AGIs not to do this, but knowing the exact information-theoretic pattern of suffering would make it easier to specify what not to do.
5. Valence research could enable radical forms of cognitive enhancement. Nick Bostrom has argued that there are hard limits on traditional pharmaceutical cognitive enhancement, since if the presence of some simple chemical would help us think better, our brains would probably already be producing it. On the other hand, there seem to be fewer a priori limits on motivational or emotional enhancement. And sure enough, the most effective 'cognitive enhancers' such as Adderall, modafinil, and so on appear to work by making cognitive tasks feel less unpleasant or more interesting. If we had a crisp theory of valence, it might enable particularly powerful versions of these sorts of drugs.
6. Valence research could help align an AGI's nominal utility function with visceral happiness. There seems to be a lot of confusion with regard to happiness and utility functions. In short: they are different things! Utility functions are goal abstractions, generally realized either explicitly through high-level state variables or implicitly through dynamic principles. Happiness, on the other hand, seems like an emergent, systemic property of conscious states, and like other qualia but unlike utility functions, it's probably highly dependent upon low-level architectural and implementational details and dynamics. In practice, most people, most of the time, can be said to have rough utility functions which are often consistent with increasing happiness, but this is an awfully leaky abstraction.
My point is that constructing an AGI whose utility function is to make paperclips, and constructing a sentient AGI who is viscerally happy when it makes paperclips, are very different tasks. Moreover, I think there could be value in being able to align these two factors: to make an AGI which is viscerally happy to the exact extent it's maximizing its nominal utility function (see the sketch below for one crude way to quantify such alignment).
(Why would we want to do this in the first place? There is the obvious semi-facetious-but-not-completely-trivial answer: if an AGI turns me into paperclips, I at least want it to be happy while doing so. But I think there's real potential for safety research here also.)
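Here's a crude, purely illustrative way one might quantify that alignment, assuming (hypothetically) that we could log both an agent's nominal utility and a valence estimate over time; the correlation measure is just one arbitrary choice among many.

```python
import numpy as np

def utility_valence_alignment(utility_trace, valence_trace):
    """Crude alignment score: the Pearson correlation between an
    agent's nominal utility over time and a (hypothetically measurable)
    valence signal over the same period.

    A value near 1.0 would mean the agent is viscerally happy roughly
    to the extent it is maximizing its nominal utility function; near
    zero or negative would mean the two have come apart.
    """
    u = np.asarray(utility_trace, dtype=float)
    v = np.asarray(valence_trace, dtype=float)
    if u.std() == 0 or v.std() == 0:
        return 0.0  # degenerate case: one of the signals never varies
    return float(np.corrcoef(u, v)[0, 1])
```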
7. Valence research could help us construct makeshift utility functions for WBEs and neuromorphic AGIs. How do we make WBEs or neuromorphic AGIs do what we want? One approach would be to piggyback on what they already partially and imperfectly optimize for, and build a makeshift utility function out of pleasure. Trying to shoehorn a utility function onto any evolved, emergent system is going to involve terrible imperfections, uncertainties, and dangers, but if research trends make neuromorphic AGI likely to arrive before other options, it may be a case of 'something is probably better than nothing.'
One particular application: constructing a 'cryptographic reward token' control scheme for WBEs/neuromorphic AGIs. Carl Shulman has suggested we could incentivize an AGI to do what we want by giving it a steady trickle of cryptographic reward tokens that fulfill its utility function: it knows that if it misbehaves (e.g., if it kills all humans), it'll stop getting these tokens. But if we want to construct reward tokens for types of AGIs that don't intrinsically have crisp utility functions (such as WBEs or neuromorphic AGIs), we'll have to understand, on a deep mathematical level, what they do optimize for, which will at least partially involve pleasure.
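The mechanical part of such a scheme is easy to sketch; the hard, valence-dependent part is knowing what would make an evolved system actually want the tokens. Below is a minimal illustration of the easy part only, with the `behavior_ok` flag standing in for whatever monitoring the real scheme would require; the class and its interface are invented for this example.

```python
import hmac, hashlib, secrets

class RewardTokenDispenser:
    """Minimal sketch of a 'cryptographic reward token' trickle.

    Tokens are HMAC-signed so the agent cannot forge them, and the
    dispenser simply stops issuing them if a (hypothetical) behavior
    check fails. How an evolved system like a WBE would actually come
    to value such tokens is exactly the open problem discussed above.
    """

    def __init__(self):
        self._key = secrets.token_bytes(32)  # kept secret from the agent
        self._counter = 0

    def issue(self, behavior_ok: bool):
        """Return a fresh signed token, or None if the behavior check fails."""
        if not behavior_ok:
            return None  # misbehavior: the trickle stops
        self._counter += 1
        msg = self._counter.to_bytes(8, "big")
        return msg + hmac.new(self._key, msg, hashlib.sha256).digest()

    def verify(self, token: bytes) -> bool:
        """Check that a token was genuinely issued by this dispenser."""
        msg, sig = token[:8], token[8:]
        expected = hmac.new(self._key, msg, hashlib.sha256).digest()
        return hmac.compare_digest(sig, expected)
```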
8. Valence research could help us better understand, and perhaps prevent, AGI wireheading. How can AGI researchers prevent their AGIs from wireheading (direct manipulation of their utility functions)? I don't have a clear answer, and it seems like a complex problem which will require complex, architecture-dependent solutions, but understanding the universe's algorithm for pleasure might help clarify what kind of problem it is, and how evolution has addressed it in humans.
9. Valence research could help reduce general metaphysical confusion. We're going to be facing some very weird questions about philosophy of mind and metaphysics when building AGIs, and everybody seems to have their own pet assumptions about how things work. The better we can clear up the fog surrounding some of these topics, the lower our coordination friction will be when we have to address them directly.
Successfully reverse-engineering a subset of qualia (valence, perhaps the easiest type to reverse-engineer?) would be a great step in this direction.
10. Valence research could change the social and political landscape in which AGI research occurs. This could take many forms: at best, a breakthrough could lead to a happier society where many previously nihilistic individuals suddenly have 'skin in the game' with respect to existential risk. At worst, it could be a profound information hazard, and irresponsible disclosure or misuse of such research could lead to mass wireheading, mass emotional manipulation, and totalitarianism. Either way, it would be an important topic to keep abreast of.
These are not all independent issues, and not all are of equal importance. But, taken together, they do seem to imply that reverse-engineering valence will be decently relevant to FAI research, particularly with regard to the Value Problem, reducing metaphysical confusion, and perhaps making the hardest safety cases (e.g., neuromorphic AGIs) a little bit more tractable.
Where is the cultural context in all of this? How does that play in? Pain and pleasure in the West are different than in the East, just as value systems are different. When it comes to creating AGI, I think a central set of agreed-upon tenets is important. What is valuable? How can we quantify that in a way that makes sense for creating AGI? If we want to reward it for doing good things, we have to consider cultural validation. We don't steal, murder, or assault people because we have significant cultural incentive not to do so, especially in stable countries. I think that could help. If we can somehow show the AGI group approval (favorable opinions, verbal validation, and other things that it could intrinsically value as we do), we could use our own culture to reinforce norms within its architecture.
A rigorous theory of valence wouldn't involve cultural context, much as a rigorous theory of electromagnetism doesn't involve cultural context.
Cultural context may matter a great deal in terms of how to build a friendly AGI that preserves what's valuable about human civilization, or this may mostly boil down to the axioms that 'pleasure is good' and 'suffering is bad'. I'm officially agnostic on whether value is simple or complex in this way.
One framework for dealing with the stuff you mention is Coherent Extrapolated Volition (CEV); it's not the last word on anything, but it seems like a good intuition pump.