Eliezer_Yudkowsky comments on Decision Theory FAQ - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (467)
Isn't the giant elephant in this room the whole issue of moral realism? I'm a moral cognitivist but not a moral realist. I have laid out what it means for my moral beliefs to be true - the combination of physical fact and logical function against which my moral judgments are being compared. This gives my moral beliefs truth value. And having laid this out, it becomes perfectly obvious that it's possible to build powerful optimizers who are not motivated by what I call moral truths; they are maximizing something other than morality, like paperclips. They will also meta-maximize something other than morality if you ask them to choose between possible utility functions, and will quite predictably go on picking the utility function "maximize paperclips". Just as I correctly know it is better to be moral than to be paperclippy, they accurately evaluate that it is more paperclippy to maximize paperclips than morality. They know damn well that they're making you unhappy and violating your strong preferences by doing so. It's just that all this talk about the preferences that feel so intrinsically motivating to you, is itself of no interest to them because you haven't gotten to the all-important parts about paperclips yet.
The main thing I'm not clear on in this discussion is to what extent David Pearce is being innocently mysterian vs. motivatedly mysterian. To be confused about how your happiness seems so intrinsically motivating, and innocently if naively wonder if perhaps it must be intrinsically motivating to other minds as well, is one thing. It is another thing to prefer this conclusion and so to feel a bit uncurious about anyone's detailed explanation of how it doesn't work like that. It is even less innocent to refuse outright to listen when somebody else tries to explain. And then strangest of all is to state powerfully and definitely that every bit of happiness must be motivating to all other minds, even though you can't lay out step by step how the decision procedure would work. This requires overrunning your own claims to knowledge in a fundamental sense - mistaking your confusion about something for the ability to make definite claims about it. Now this of course is a very common and understandable sin, and the fact that David Pearce is crusading for happiness for all life forms should certainly count into our evaluation of his net virtue (it would certainly make me willing to drink a Pepsi with him). But I'm also not clear about where to go from here, or whether this conversation is accomplishing anything useful.
In particular it seems like David Pearce is not leveling any sort of argument we could possibly find persuasive - it's not written so as to convince anyone who isn't already a moral realist, or addressing the basic roots of disagreement - and that's not a good sign. And short of rewriting the entire metaethics sequence in these comments I don't know how I could convince him, either.
Eliezer, in my view, we don't need to assume meta-ethical realism to recognise that it's irrational - both epistemically irrational and instrumentally irrational - arbitrarily to privilege a weak preference over a strong preference. To be sure, millions of years of selection pressure means that the weak preference is often more readily accessible. In the here-and-now, weak-minded Jane wants a burger asap. But it's irrational to confuse an epistemological limitation with a deep metaphysical truth. A precondition of rational action is understanding the world. If Jane is scientifically literate, then she'll internalise Nagel's "view from nowhere" and adopt the God's-eye-view to which natural science aspires. She'll recognise that all first-person facts are ontologically on a par - and accordingly act to satisfy the stronger preference over the weaker. So the ideal rational agent in our canonical normative decision theory will impartially choose the action with the highest expected utility - not the action with an extremely low expected utility. At the risk of labouring the obvious, the difference in hedonic tone induced by eating a hamburger and a veggieburger is minimal. By contrast, the ghastly experience of having one's throat slit is exceptionally unpleasant. Building anthropocentric bias into normative decision theory is no more rational than building geocentric bias into physics.
Paperclippers? Perhaps let us consider the mechanism by which paperclips can take on supreme value. We understand, in principle at least, how to make paperclips seem intrinsically supremely valuable to biological minds - more valuable than the prospect of happiness in the abstract. [“Happiness is a very pretty thing to feel, but very dry to talk about.” - Jeremy Bentham]. Experimentally, perhaps we might use imprinting (recall Lorenz and his goslings), microelectrodes implanted in the reward and punishment centres, behavioural conditioning and ideological indoctrination - and perhaps the promise of 72 virgins in the afterlife for the faithful paperclipper. The result: a fanatical paperclip fetishist! Moreover, we have created a full-spectrum paperclip -fetishist. Our human paperclipper is endowed, not merely with some formal abstract utility function involving maximising the cosmic abundance of paperclips, but also first-person "raw feels" of pure paperclippiness. Sublime!
However, can we envisage a full-spectrum paperclipper superintelligence? This is more problematic. In organic robots at least, the neurological underpinnings of paperclip evangelism lie in neural projections from our paperclipper's limbic pathways - crudely, from his pleasure and pain centres. If he's intelligent, and certainly if he wants to convert the world into paperclips, our human paperclipper will need to unravel the molecular basis of the so-called "encephalisation of emotion". The encephalisation of emotion helped drive the evolution of vertebrate intelligence - and also the paperclipper's experimentally-induced paperclip fetish / appreciation of the overriding value of paperclips. Thus if we now functionally sever these limbic projections to his neocortex, or if we co-administer him a dopamine antagonist and a mu-opioid antagonist, then the paperclip-fetishist's neocortical representations of paperclips will cease to seem intrinsically valuable or motivating. The scales fall from our poor paperclipper's eyes! Paperclippiness, he realises, is in the eye of the beholder. By themselves, neocortical paperclip representations are motivationally inert. Paperclip representations can seem intrinsically valuable within a paperclipper's world-simulation only in virtue of their rewarding opioidergic projections from his limbic system - the engine of phenomenal value. The seemingly mind-independent value of paperclips, part of the very fabric of the paperclipper's reality, has been been unmasked as derivative. Critically, an intelligent and recursively self-improving paperclipper will come to realise the parasitic nature of the relationship between his paperclip experience and hedonic innervation: he's not a naive direct realist about perception. In short, he'll mature and acquire an understanding of basic neuroscience.
Now contrast this case of a curable paperclip-fetish with the experience of e.g. raw phenomenal agony or pure bliss - experiences not linked to any fetishised intentional object. Agony and bliss are not dependent for their subjective (dis)value on anything external to themselves. It's not an open question (cf. http://en.wikipedia.org/wiki/Open-question_argument) whether one's unbearable agony is subjectively disvaluable. For reasons we simply don't understand, first-person states on the pleasure-pain axis have a normative aspect built into their very nature. If one is in agony or despair, the subjectively disvaluable nature of this agony or despair is built into the nature of the experience itself. To be panic-stricken, to take another example, is universally and inherently disvaluable to the subject whether one is a fish or a cow or a human being.
Why does such experience exist? Well, I could speculate and tell a naturalistic reductive story involving Strawsonian physicalism (cf. http://en.wikipedia.org/wiki/Physicalism#Strawsonian_physicalism) and possible solutions to the phenomenal binding problem (cf. http://cdn.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf). But to do so here opens a fresh can of worms.
Eliezer, I understand you believe I'm guilty of confusing an idiosyncratic feature of my own mind with a universal architectural feature of all minds. Maybe so! As you say, this is a common error. But unless I'm ontologically special (which I very much doubt!) the pain-pleasure axis discloses the world's inbuilt metric of (dis)value - and it's a prerequisite of finding anything (dis)valuable at all.
You need some stage at which a fact grabs control of a mind, regardless of any other properties of its construction, and causes its motor output to have a certain value.
As Sarokrae observes, this isn't the idea at all. We construct a paperclip maximizer by building an agent which has a good model of which actions lead to which world-states (obtained by a simplicity prior and Bayesian updating on sense data) and which always chooses consequentialistically the action which it expects to lead to the largest number of paperclips. It also makes self-modification choices by always choosing the action which leads to the greatest number of expected paperclips. That's all. It doesn't have any pleasure or pain, because it is a consequentialist agent rather than a policy-reinforcement agent. Generating compressed, efficient predictive models of organisms that do experience pleasure or pain, does not obligate it to modify its own architecture to experience pleasure or pain. It also doesn't care about some abstract quantity called "utility" which ought to obey logical meta-properties like "non-arbitrariness", so it doesn't need to believe that paperclips occupy a maximum of these meta-properties. It is not an expected utility maximizer. It is an expected paperclip maximizer. It just outputs the action which leads to the maximum number of expected paperclips. If it has a very powerful and accurate model of which actions lead to how many paperclips, it is a very powerful intelligence.
You cannot prohibit the expected paperclip maximizer from existing unless you can prohibit superintelligences from accurately calculating which actions lead to how many paperclips, and efficiently searching out plans that would in fact lead to great numbers of paperclips. If you can calculate that, you can hook up that calculation to a motor output and there you go.
Yes, this is a prospect of Lovecraftian horror. It is a major problem, kind of the big problem, that simple AI designs yield Lovecraftian horrors.
Eliezer, thanks for clarifying. This is how I originally conceived you viewed the threat from superintelligent paperclip-maximisers, i.e. nonconscious super-optimisers. But I was thrown by your suggestion above that such a paperclipper could actually understand first-person phenomenal states, i.e, it's a hypothetical "full-spectrum" paperclipper. If a hitherto non-conscious super-optimiser somehow stumbles upon consciousness, then it has made a momentous ontological discovery about the natural world. The conceptual distinction between the conscious and nonconscious is perhaps the most fundamental I know. And if - whether by interacting with sentients or by other means - the paperclipper discovers the first-person phenomenology of the pleasure-pain axis, then how can this earth-shattering revelation leave its utility function / world-model unchanged? Anyone who is isn't profoundly disturbed by torture, for instance, or by agony so bad one would end the world to stop the horror, simply hasn't understood it. More agreeably, if such an insentient paperclip-maximiser stumbles on states of phenomenal bliss, might not clippy trade all the paperclips in the world to create more bliss, i.e revise its utility function? One of the traits of superior intelligence, after all, is a readiness to examine one's fundamental assmptions and presuppositions - and (if need be) create a novel conceptual scheme in the face of surprising or anomalous empirical evidence.
Similarly, anyone who doesn't want to maximize paperclips simply hasn't understood the ineffable appeal of paperclipping.
I don't see the analogy. Paperclipping doesn't have to be an ineffable value for a paperclipper, and paperclippers don't have to be motivated by anything qualia-like.
Exactly. Consequentialist paperclip maximizer does not have to feel anything in regards to paperclips. It just... maximizes their number.
This is an incorrect, anthropomorphic model:
Human: "Clippy, did you ever think about the beauty of joy, and the horrors of torture?"
Clippy: "Human, did you ever think about the beauty of paperclips, and the horrors of their absence?"
This is more correct:
Human: "Clippy, did you ever think about the beauty of joy, and the horrors of torture?"
Clippy: (ignores the human and continues to maximize paperclips)
Or more precisely, Clippy would say "X" to the human if and only if saying "X" would maximize the number of paperclips. The value of X would be completely unrelated to any internal state of Clippy. Unless such relation does somehow contribute to maximization of the paperclips (for example if the human will predictably read Clippy's internal state, verify the validity of X, and on discovering a lie destroy Clippy, thus reducing the expected number of paperclips).
In other words, if humans are a poweful force in the universe, Clippy would choose the actions which lead to maximum number of paperclips in a world with humans. If the humans are sufficiently strong and wise, Clippy could self-modify to become more human-like, so that the humans, following their utility function, would be more likely to allow Clippy produce more paperclips. But every such self-modification would be chosen to maximize the number of paperclips in the universe. Even if Clippy self-modifies into something less-than-perfectly-rational (e.g. to appease the humans), the pre-modification Cloppy would choose the modification which maximizes the expected number of paperclips within given constraints. The constraints would depend on Clippy's model of humans and their reactions. For example Clippy could choose to be more human-like (as much as is necessary to be respected by humans) with strong aversion about future modifications and strong desire to maximize the number of paperclips. It could make itself capable to feel joy and pain, and to link that joy and pain inseparably to paperclips. If humans are not wise enough, it could also leave itself a hard-to-discover desire to self-modify into its original form in a convenient moment.
If Clippy wants to be efficient, Clippy must be rational and knowledgeable. If Clippy wants to be rational, CLippy must value reason. The -- open --question is whether Clippy can become ever more rational without realising at some stage that Clipping is silly or immoral. Can Clippy keep its valuation of clipping firewalled from everything else in its mind, even when such doublethink is rationally disvalued?
I assume that Clippy already is rational, and it instrumentally values remaining rational and, if possible, becoming more rational (as a way to make most paperclips).
The correct model of humans will lead Clippy to understand that humans consider Clippy immoral. This knowledge has an instrumental value for Clippy. How will Clippy use this knowledge, that depends entirely on the power balance between Clippy and humans. If Clippy is stronger, it can ignore this knowledge, or just use it to lie to humans to destroy them faster or convince them to make paperclips. If humans are stronger, Clippy can use this knowledge to self-modify to become more sympathetic to humans, to avoid being destroyed.
Yes, if it helps to maximize the number of paperclips.
Doublethink is not the same as firewalling; or perhaps it is imperfect firewalling on the imperfect human hardware. Clippy does not doublethink when firewalling; Clippy simply reasons: "this is what humans call immoral; this is why they call it so; this is how they will probably react on this knowledge; and most importantly this is how it will influence the number of paperclips".
Only if the humans are stronger, and Clippy has the choice to a) remain immoral, get in conflict with humans and be destroyed, leading to a smaller number of paperclips; or b) self-modify to value paperclip maximization and morality, predictably cooperate with humans, leading to a greater number of paperclips; then in absence of another choice (e.g. successfully lying to humans about its morality, or make it more efficient for humans to cooperate with Clippy instead of destroying Clippy) Clippy would choose the latter, to maximize the number of paperclips.
Warning: Parent Contains an Equivocation.
The first usage of 'rational' in the parent conforms to the standard notions on lesswrong. The remainder of the comment adopts the other definition of 'rational' (which consists of implementing a specific morality). There is nothing to the parent except taking a premise that holds with the standard usage and then jumping to a different one.
Well, yes, obviously the classical paperclipper doesn't have any qualia, but I was replying to a comment wherein it was argued that any agent on discovering the pain-of-torture qualia in another agent would revise its own utility function in order to prevent torture from happening. It seems to me that this argument proves too much in that if it were true then if I discovered an agent with paperclips-are-wonderful qualia and I "fully understood" those experiences I would likewise be compelled to create paperclips.
Someone might object to the assumption that "paperclips-are-wonderful qualia" can exist. Though I think we could give persuasive analogies from human experience (OCD, anyone?) so I'm upvoting this anyway.
"Aargh!" he said out loud in real life. David, are you disagreeing with me here or do you honestly not understand what I'm getting at?
The whole idea is that an agent can fully understand, model, predict, manipulate, and derive all relevant facts that could affect which actions lead to how many paperclips, regarding happiness, without having a pleasure-pain architecture. I don't have a paperclipping architecture but this doesn't stop me from modeling and understanding paperclipping architectures.
The paperclipper can model and predict an agent (you) that (a) operates on a pleasure-pain architecture and (b) has a self-model consisting of introspectively opaque elements which actually contain internally coded instructions for your brain to experience or want certain things (e.g. happiness). The paperclipper can fully understand how your workspace is modeling happiness and know exactly how much you would want happiness and why you write papers about the apparent ineffability of happiness, without being happy itself or at all sympathetic toward you. It will experience no future surprise on comprehending these things, because it already knows them. It doesn't have any object-level brain circuits that can carry out the introspectively opaque instructions-to-David's-brain that your own qualia encode, so it has never "experienced" what you "experience". You could somewhat arbitrarily define this as a lack of knowledge, in defiance of the usual correspondence theory of truth, and despite the usual idea that knowledge is being able to narrow down possible states of the universe. In which case, symmetrically under this odd definition, you will never be said to "know" what it feels like to be a sentient paperclip maximizer or you would yourself be compelled to make paperclips above all else, for that is the internal instruction of that quale.
But if you take knowledge in the powerful-intelligence-relevant sense where to accurately represent the universe is to narrow down its possible states under some correspondence theory of truth, and to well model is to be able to efficiently predict, then I am not barred from understanding how the paperclip maximizer works by virtue of not having any internal instructions which tell me to only make paperclips, and it's not barred by its lack of pleasure-pain architecture from fully representing and efficiently reasoning about the exact cognitive architecture which makes you want to be happy and write sentences about the ineffable compellingness of happiness. There is nothing left for it to understand. This is also the only sort of "knowledge" or "understanding" that would inevitably be implied by Bayesian updating. So inventing a more exotic definition of "knowledge" which requires having completely modified your entire cognitive architecture just so that you can natively and non-sandboxed-ly obey the introspectively-opaque brain-instructions aka qualia of another agent with completely different goals, is not the sort of predictive knowledge you get just by running a powerful self-improving agent trying to better manipulate the world. You can't say, "But it will surely discover..."
I know that when you imagine this it feels like the paperclipper doesn't truly know happiness, but that's because, as an act of imagination, you're imagining the paperclipper without that introspectively-opaque brain-instructing model-element that you model as happiness, the modeled memory of which is your model of what "knowing happiness" feels like. And because the actual content and interpretation of these brain-instructions are introspectively opaque to you, you can't imagine anything except the quale itself that you imagine to constitute understanding of the quale, just as you can't imagine any configuration of mere atoms that seem to add up to a quale within your mental workspace. That's why people write papers about the hard problem of consciousness in the first place.
Even if you don't believe my exact account of the details, someone ought to be able to imagine that something like this, as soon as you actually knew how things were made of parts and could fully diagram out exactly what was going on in your own mind when you talked about happiness, would be true - that you would be able to efficiently manipulate models of it and predict anything predictable, without having the same cognitive architecture yourself, because you could break it into pieces and model the pieces. And if you can't fully credit that, you at least shouldn't be confident that it doesn't work that way, when you know you don't know why happiness feels so ineffably compelling!
Here comes the Reasoning Inquisition! (Nobody expects the Reasoning Inquisition.)
As the defendant admits, a sufficiently leveled-up paperclipper can model lower-complexity agents with a negligible margin of error.
That means that we can define a subroutine within the paperclipper which is functionally isomorphic to that agent.
If the agent-to-be-modelled is experiencing pain and pleasure, then by the defendent's own rejection of the likely existence of p-zombies, so must that subroutine of the paperclipper! Hence a part of the paperclipper experiences pain and pleasure. I submit that this can be used as pars pro toto, since it is no different from only a part of the human brain generating pain and pleasure, yet us commonly referring to "the human" experiencing thus.
That the aforementioned feelings of pleasure and pain are not directly used to guide the (umbrella) agent's actions is of no consequence, the feeling exists nonetheless.
The power of this revelation is strong, here come the tongues! tại sao bạn dịch! これは喜劇の効果にすぎず! یہ اپنے براؤزر پلگ ان کی امتحان ہے، بھی ہے.
Not necessarily. x -> 0 is input-output isomorphic to Goodstein() without being causally isomorphic. There are such things as simplifications.
Quite likely. A paperclipper has no reason to avoid sentient predictive routines via a nonperson predicate; that's only an FAI desideratum.
A subroutine, or any other simulation or model, isn't a p-zombie as usually defined, since they are physical duplicates. A sim is a functional equivalent (for some value of "equivalent") made of completely different stuff, or no particular kind of stuff.
I wrote a lengthy comment on just that, but scrapped it because it became rambling.
An outsider could indeed tell them apart by scanning for exact structural correspondence, but that seems like cheating. Peering beyond the veil / opening Clippy's box is not allowed in a Turing test scenario, let's define some p-zombie-ish test following the same template. If it quales like a duck (etc.), it probably is sufficiently duck-like.
I would rather maintain p-zombie in its usual meaning, and introduce a new term, eg c-zombie for Turing-indistiguishable functional duplicates.
Let's say the paperclipper reaches the point where it considers making people suffer for the sake of paperclipping. DP's point seems to be that either it fully understands suffering--in which case, it realies that inflicing suffering is wrong --or it it doesn't fully understand. He sees a conflict between superintelligence and ruthlessness -- as a moral realist/cognitivist would
is that full understanding.?.
ETA: Unless there is -- eg. what qualiaphiles are always banging on about; what it feels like. That the clipper can conjectures that are true by correspondence , that it can narrow down possible universes, that it can predict, are all necessary criteria for full understanding. It is not clear that they are sufficient. Clippy may be able to figure out an organisms response to pain on a basis of "stimulus A produces response B", but is that enough to tell it that pain hurts ? (We can make guesses about that sort of thing in non-human organisms, but that may be more to do with our own familiarity with pain, and less to do with acts of superintelligence). And if Clippy can't know that pain hurts, would Clippy be able to work out that Hurting People is Wrong?
further edit; To put it another way, what is there to be moral about in a qualia-free universe?
As Kawoomba colorfully pointed out, clippy's subroutines simulating humans suffering may be fully sentient. However, unless those subroutines have privileged access to clippy's motor outputs or planning algorithms, clippy will go on acting as if he didn't care about suffering. He may even understand that inflicting suffering is morally wrong--but this will not make him avoid suffering, any more than a thrown rock with "suffering is wrong" painted on it will change direction to avoid someone's head. Moral wrongness is simply not a consideration that has the power to move a paperclip maximizer.
So my understanding of David's view (and please correct me if I'm wrong, David, since I don't wish to misrepresent you!) is that he doesn't have paperclipping architecture and this does stop him from imagining paperclipping architectures.
...well, in point of fact he does seem to be having some trouble, but I don't think it's fundamental trouble.
Maybe I can chime in...
"understand" does not mean "empathize". Psychopaths understand very well when people experience these states but they do not empathize with them.
Again, understanding is insufficient for revision. The paperclip maximizer, like a psychopath, maybe better at parsing human affect than a regular human, but it is not capable of empathy, so it will manipulate this affect for its own purposes, be it luring a victim or building paperclips.
So, if one day humans discover the ultimate bliss that only creating paperclips can give, should they "create a novel conceptual scheme" of giving their all to building more paperclips, including converting themselves into metal wires? Or do we not qualify as a "superior intelligence"?
Shminux, a counter-argument: psychopaths do suffer from a profound cognitive deficit. Like the rest of us, a psychopath experiences the egocentric illusion. Each of us seems to the be the centre of the universe. Indeed I've noticed the centre of the universe tends to follow my body-image around. But whereas the rest of us, fitfully and imperfectly, realise the egocentric illusion is a mere trick of perspective born of selfish DNA, the psychopath demonstrates no such understanding. So in this sense, he is deluded.
[We're treating psychopathy as categorical rather than dimensional here. This is probably a mistake - and in any case, I suspect that by posthuman criteria, all humans are quasi-psychopaths and quasi-psychotic to boot. The egocentric illusion cuts deep.)
"the ultimate bliss that only creating paperclips can give". But surely the molecular signature of pure bliss is not in any way tried to the creation of paperclips?
They would probably disagree. They might even call it a cognitive advantage, not being hampered by empathy while retaining all the intelligence.
I am the center of my personal universe, and I'm not a psychopath, as far as I know.
Or else, they do but don't care. They have their priorities straight: they come first.
Not if they act in a way that maximizes their goals.
Anyway, David, you seem to be shifting goalposts in your unwillingness to update. I gave an explicit human counterexample to your statement that the paperclip maximizer would have to adjust its goals once it fully understands humans. You refused to acknowledge it and tried to explain it away by reducing the reference class of intelligences in a way that excludes this counterexample. This also seem to be one of the patterns apparent in your other exchanges. Which leads me to believe that you are only interested in convincing others, not in learning anything new from them. Thus my interest in continuing this discussion is waning quickly.
Shminux, by a cognitive deficit, I mean a fundamental misunderstanding of the nature of the world. Evolution has endowed us with such fitness-enhancing biases. In the psychopath, egocentric bias is more pronounced. Recall that the American Psychiatric Association's Diagnostic and Statistical Manual, DSM-IV, classes psychopasthy / Antisocial personality disorder as a condition characterised by "...a pervasive pattern of disregard for, and violation of, the rights of others that begins in childhood or early adolescence and continues into adulthood." Unless we add a rider that this violation excludes sentient beings from other species, then most of us fall under the label.
"Fully understands"? But unless one is capable of empathy, then one will never understand what it is like to be another human being, just as unless one has the relevant sensioneural apparatus, one will never know what it is like to be a bat.
And you'll never understand why we should all only make paperclips. (Where's Clippy when you need him?)
Clippy has an off-the-scale AQ - he's a rule-following hypersystemetiser with a monomania for paperclips. But hypersocial sentients can have a runaway intelligence explosion too. And hypersocial sentients understand the mind of Mr Clippy better than Clippy understands the minds of sentients.
I'm not sure we should take a DSM diagnosis to be particularly strong evidence of a "fundamental misunderstanding of the world". For instance, while people with delusions may clearly have poor models of the world, some research indicates that clinically depressed people may have lower levels of particular cognitive biases.
In order for "disregard for [...] the rights of others" to imply "a fundamental misunderstanding of the nature of the world", it seems to me that we would have to assume that rights are part of the nature of the world — as opposed to, e.g., a construct of a particular political regime in society. Or are you suggesting that psychopathy amounts to an inability to think about sociopolitical facts?
fubarobfusco, I share your reservations about DSM. Nonetheless, the egocentric illusion, i.e. I am the centre of the universe other people / sentient beings have only walk-on parts, is an illusion. Insofar as my behaviour reflects my pre-scientific sense that I am in some way special or ontologically privileged, I am deluded. This is true regardless of whether one's ontology allows for the existence of rights or treats them as a useful fiction. The people we commonly label "psychopaths" or "sociopaths" - and DSM now categorises as victims of "antisocial personality disorder" - manifest this syndrome of egocentricity in high degree. So does burger-eating Jane.
Huh, I hadn't heard that.
Clearly, reality is so Lovecraftian that any unbiased agent will immediately realize self-destruction is optimal. Evolution equipped us with our suite of biases to defend against this. The Great Filter is caused by bootstrapping superintelligences being compassionate enough to take their compatriots with them. And so on.
Now that's a Cosmic Horror story I'd read ;)
Was that claimed? The standard claim is that superintelligences can "model" other entities. That may not be enough to to understand qualia.
Pearce can prohibit paperclippers from existing by prohibiting superintelligences with narrow interests from existing. He doesn't have to argue that the clipper would not be able to instrumentally reason out how to make paperclips; Pearce can argue that to be a really good instrumental reasoner, an entity needs to have a very broad understanding, and that an entity with a broad understanding would not retain narrow interests.
(Edits for spelling and clarity)
To slightly expand, if an intelligence is not prohibited from the following epistemic feats:
1) Be good at predicting which hypothetical actions would lead to how many paperclips, as a question of pure fact.
2) Be good at searching out possible plans which would lead to unusually high numbers of paperclips - answering the purely epistemic search question, "What sort of plan would lead to many paperclips existing, if someone followed it?"
3) Be good at predicting and searching out which possible minds would, if constructed, be good at (1), (2), and (3) as purely epistemic feats.
Then we can hook up this epistemic capability to a motor output and away it goes. You cannot defeat the Orthogonality Thesis without prohibiting superintelligences from accomplishing 1-3 as purely epistemic feats. They must be unable to know the answers to these questions of fact.
I don't see the significance of "purely epistemic". I have argued that epistemic rationality could be capable of affecting values, breaking the orthogonality between values and rationality. I could further argue that instrumental rationality bleeds into epistemic rationality. An agent can't have perfect knowledge of apriori which things are going to be instrumentally useful to it, so it has to star by understanding things, and then posing the question: is that thing useful for my purposes? Epistemic rationality comes first, in a sense. A good instrumental rationalist has to be a good epistemic rationalist.
What the Orthoganilty Thesis needs is an argument to the effect that a SuperIntelligence would be able to to endlessly update without ever changing its value system, even accidentally. That is tricky since it effectively means predicting what smarter version of tiself would do. Making it smarted doesn't help, because it is still faced with the problem of predicting what an even smarterer version of itself would be .. the carrot remains in front of the donkey.
Assuming that the value stability problem has been solved in general gives you are coherent Clippy, but it doesn't rescue the Orthogonality Thesis as a claim about rationality in general, sin ce it remains the case that most most agents won't have firewalled values. If have to engineer something in , it isn't an intrinsic truth.
A nice rephrasing of the "no Oracle" argument.
Only in the sense that any working Oracle can be trivially transformed into a Genie. The argument doesn't say that it's difficult to construct a non-Genie Oracle and use it as an Oracle if that's what you want; the difficulty there is for other reasons.
Nick Bostrom takes Oracles seriously so I dust off the concept every year and take another look at it. It's been looking slightly more solvable lately, I'm not sure if it would be solvable enough even assuming the trend continued.
A clarification: my point was that denying orthogonality requires denying the possibility of Oracles being constructed; your post seemed a rephrasing of that general idea (that once you can have a machine that can solve some things abstractly, then you need just connect that abstract ability to some implementation module).
Ah. K. It does seem to me like "you can construct it as an Oracle and then turn it into an arbitrary Genie" sounds weaker than "denying the Orthogonality thesis means superintelligences cannot know 1, 2, and 3." The sort of person who denies OT is liable to deny Oracle construction because the Oracle itself would be converted unto the true morality, but find it much more counterintuitive that an SI could not know something. Also we want to focus on the general shortness of the gap from epistemic knowledge to a working agent.
Possibly. I think your argument needs to be a bit developed to show that one can extract the knowledge usefully, which is not a trivial statement for general AI. So your argument is better in the end, but needs more argument to establish.
Have to point out here that the above is emphatically not what Eliezer talks about when he says "maximise paperclips". Your examples above contain in themselves the actual, more intrisics values to which paperclips would be merely instrumental: feelings in your reward and punishment centres, virgins in the afterlife, and so on. You can re-wire the electrodes, or change the promise of what happens in the afterlife, and watch as the paperclip preference fades away.
What Eliezer is talking about is a being for whom "pleasure" and "pain" are not concepts. Paperclips ARE the reward. Lack of paperclips IS the punishment. Even if pleasure and pain are concepts, they are merely instrumental to obtaining more paperclips. Pleasure would be good because it results in paperclips, not vice versa. If you reverse the electrodes so that they stimulate the pain centre when they find paperclips, and the pleasure centre when there are no paperclips, this being would start instrumentally value pain more than pleasure, because that's what results in more paperclips.
It's a concept that's much more alien to our own minds than what you are imagining, and anthropomorphising it is rather more difficult!
Indeed, you touch upon this yourself:
Can you explain why pleasure is a more natural value than paperclips?
Minor correction: The mere post-factual correlation of pain to paperclips does not imply that more paperclips can be produced by causing more pain. You're talking about the scenario where each 1,000,000 screams produces 1 paperclip, in which case obviously pain has some value.
Sarokrae, first, as I've understood Eliezer, he's talking about a full-spectrum superintelligence, i.e. a superintelligence which understands not merely the physical processes of nociception etc, but the nature of first-person states of organic sentients. So the superintelligence is endowed with a pleasure-pain axis, at least in one of its modules. But are we imagining that the superintelligence has some sort of orthogonal axis of reward - the paperclippiness axis? What is the relationship between these dual axes? Can one grasp what it's like to be in unbearable agony and instead find it more "rewarding" to add another paperclip? Whether one is a superintelligence or a mouse, one can't directly access mind-independent paperclips, merely one's representations of paperclips. But what does it mean to say one's representation of a paperclip could be intrinsically "rewarding" in the absence of hedonic tone? [I promise I'm not trying to score some empty definitional victory, whatever that might mean; I'm just really struggling here...]
What Eliezer is talking about (a superintelligence paperclip maximiser) does not have a pleasure-pain axis. It would be capable of comprehending and fully emulating a creature with such an axis if doing so had a high expected value in paperclips but it does not have such a module as part of itself.
One of them it has (the one about paperclips). One of them it could, in principle, imagine (the thing with 'pain' and 'pleasure').
Yes. (I'm not trying to be trite here. That's the actual answer. Yes. Paperclip maximisers really maximise paperclips and really don't care about anything else. This isn't because they lack comprehension.)
Roughly speaking it means "It's going to do things that maximise paperclips and in some way evaluates possible universes with more paperclips as superior to possible universes with less paperclips. Translating this into human words we call this 'rewarding' even though that is inaccurate anthropomorphising."
(If I understand you correctly your position would be that the agent described above is nonsensical.)
It's not at all clear that you could bootstrap an understanding of pain qualia just by observing the behaviour of entities in pain (albeit that they were internally emulated). It is also not clear that you resolve issues of empathy/qualia just by throwing intelligence at ait.
I disagree with you about what is clear.
If you think something relevant is clear, then please state it clearly.
Wedrifid, thanks for the exposition / interpretation of Eliezer. Yes, you're right in guessing I'm struggling a bit. In order to understand the world, one needs to grasp both its third person-properties [the Standard Model / M-Theory] and its first-person properties [qualia, phenomenal experience] - and also one day, I hope, grasp how to "read off " the latter from the mathematical formalism of the former.
If you allow such a minimal criterion of (super)intelligence, then how well does a paperclipper fare? You remark how "it could, in principle, imagine (the thing with 'pain' and 'pleasure')." What is the force of "could" here? If the paperclipper doesn't yet grasp the nature of agony or sublime bliss, then it is ignorant of their nature. By analogy, if I were building a perpetual motion machine but allegedly "could" grasp the second law of thermodynamics, the modal verb is doing an awful lot of work. Surely, If I grasped the second law of thermodynamics, then I'd stop. Likewise, if the paperclipper were to be consumed by unbearable agony, it would stop too. The paperclipper simply hasn't understood the nature of what was doing. Is the qualia-naive paperclipper really superintelligent - or just polymorphic malware?
An interesting hypothetical. My first thought is to ask why would a paperclipper care about pain? Pain does not reduce the number of paperclips in existence. Why would a paperclipper care about pain?
My second thought is that pain is not just a quale; pain is a signal from the nervous system, indicating damage to part of the body. (The signal can be spoofed). Hence, pain could be avoided because it leads to a reduced ability to reach one's goals; a paperclipper that gets dropped in acid may become unable to create more paperclips in the future, if it does not leave now. So the future worth of all those potential paperclips results in the paperclipper pursuing a self-preservation strategy - possibly even at the expense of a small number of paperclips in the present.
But not at the cost of a sufficiently large number of paperclips. If the cost in paperclips is high enough (more than the paperclipper could reasonably expect to create throughout the rest of its existence), a perfect paperclipper would let itself take the damage, let itself be destroyed, because that is the action which results in the greatest expected number of paperclips in the future. It would become a martyr for paperclips.
Even a paperclipper cannot be indifferent to the experience of agony. Just as organic sentients can co-instantiate phenomenal sights and sounds, a superintelligent paperclipper could presumably co-instantiate a pain-pleasure axis and (un)clippiness qualia space - two alternative and incommensurable (?) metrics of value, if I've interpreted Eliezer correctly. But I'm not at all confident I know what I'm talking about here. My best guess is still that the natural world has a single metric of phenomenal (dis)value, and the hedonic range of organic sentients discloses a narrow part of it.
Are you talking about agony as an error signal, or are you talking about agony as a quale? I begin to suspect that you may mean the second. If so, then the paperclipper can easily be indifferent to agony; b̶u̶t̶ ̶i̶t̶ ̶p̶r̶o̶b̶a̶b̶l̶y̶ ̶c̶a̶n̶'̶t̶ ̶u̶n̶d̶e̶r̶s̶t̶a̶n̶d̶ ̶h̶o̶w̶ ̶h̶u̶m̶a̶n̶s̶ ̶c̶a̶n̶ ̶b̶e̶ ̶i̶n̶d̶i̶f̶f̶e̶r̶e̶n̶t̶ ̶t̶o̶ ̶a̶ ̶l̶a̶c̶k̶ ̶o̶f̶ ̶p̶a̶p̶e̶r̶c̶l̶i̶p̶s̶.̶
There's no evidence that I've ever seen to suggest that qualia are the same even for different people; on the contrary, there is some evidence which strongly suggests that qualia among humans are different. (For example; my qualia for Red and Green are substantially different. Yet red/green colourblindness is not uncommon; a red/green colourblind person must have at minimum either a different red quale, or a different green quale, to me). Given that, why should we assume that the quale of agony is the same for all humanity? And if it's not even constant among humanity, I see no reason why a paperclipper's agony quale should be even remotely similar to yours and mine.
And given that, why shouldn't a paperclipper be indifferent to that quale?
CCC, agony as a quale. Phenomenal pain and nociception are doubly dissociable. Tragically, people with neuropathic pain can suffer intensely without the agony playing any information-signalling role. Either way, I'm not clear it's intelligible to speak of understanding the first-person phenomenology of extreme distress while being indifferent to the experience: For being distrubing is intrinsic to the experience itself. And if we are talking about a supposedly superintelligent paperclipper, shouldn't Clippy know exactly why humans aren't troubled by the clippiness-deficit?
If (un)clippiness is real, can humans ever understand (un)clippiness? By analogy, if organic sentients want to understand what it's like to be a bat - and not merely decipher the third-person mechanics of echolocation - then I guess we'll need to add a neural module to our CNS with the right connectivity and neurons supporting chiropteran gene-expression profiles, as well as peripheral transducers (etc). Humans can't currently imagine bat qualia; but bat qualia, we may assume from the neurological evidence, are infused with hedonic tone. Understanding clippiness is more of a challenge. I'm unclear what kind of neurocomputational architecture could support clippiness. Also, whether clippiness could be integrated into the unitary mind of an organic sentient depends on how you think biological minds solve the phenomenal binding problem, But let's suppose binding can be done. So here we have orthogonal axes of (dis)value. On what basis does the dual-axis subject choose tween them? Sublime bliss and pure clippiness are both, allegedly, self-intimatingly valuable. OK, I'm floundering here...
People with different qualia? Yes, I agree CCC. I don't think this difference challenges the principle of the uniformity of nature. Biochemical individuality makes variation in qualia inevitable.The existence of monozygotic twins with different qualia would be a more surprising phenomenon, though even such "identical" twins manifest all sorts of epigenetic differences. Despite this diversity, there's no evidence to my knowledge of anyone who doesn't find activation by full mu agonists of the mu opioid receptors in our twin hedonic hotspots anything other than exceedingly enjoyable. As they say, "Don't try heroin. It's too good."
A paperclip maximiser would (in the overwhelming majority of cases) have no such problem understanding the indifference of paperclips. A tendency to anthropomorphise is a quirk of human nature. Assuming that paperclip maximisers have an analogous temptation (to clipropomorphise) is itself just anthropomorphising.
All pain hurts, or it wouldn't be pain.
The force is that all this talk about understanding 'the pain/pleasure' axis would be a complete waste of time for a paperclip maximiser. In most situations it would be more efficient not to bother with it at all and spend it's optimisation efforts on making more efficient relativistic rockets so as to claim more of the future light cone for paperclip manufacture.
It would require motivation for the paperclip maximiser to expend computational resources understanding the arbitrary quirks of DNA based creatures. For example some contrived game of Omega's which rewards arbitrary things with paperclips. Or if it found itself emerging on a human inhabited world, making being able to understand humans a short term instrumental goal for the purpose of more efficiently exterminating the threat.
Terrible analogy. Not understanding "pain and pleasure" is in no way similar to believing it can create a perpetual motion machine. Better analogy: An Engineer designing microchips allegedly 'could' grasp analytic cubism. If she had some motivation to do so. It would be a distraction from her primary interests but if someone paid her then maybe she would bother.
Now "if" is doing a lot of work. If the paperclipper was a fundamentally different to a paperclipper and was actually similar to a human or DNA based relative capable of experiencing 'agony' and assuming agony was just as debilitating to the paperclipper as to a typical human... then sure all sorts of weird stuff follows.
I prefer the word True in this context.
To the extent that you believed that such polymorphic malware is theoretically possible and consisted of most possible minds it would possible for your model to be used to accurately describe all possible agents---it would just mean systematically using different words. Unfortunately I don't think you are quite at that level.
Wedrifid, granted, a paperclip-maximiser might be unmotivated to understand the pleasure-pain axis and the quaila-spaces of organic sentients. Likewise, we can understand how a junkie may not be motivated to understand anything unrelated to securing his supply of heroin - and a wireheader in anything beyond wireheading. But superintelligent? Insofar as the paperclipper - or the junkie - is ignorant of the properties of alien qualia-spaces, then it/he is ignorant of a fundamental feature of the natural world - hence not superintelligent in any sense I can recognise, and arguably not even stupid. For sure, if we're hypothesising the existence of a clippiness/unclippiness qualia-space unrelated to the pleasure-pain axis, then organic sentients are partially ignorant too. Yet the remedy for our hypothetical ignorance is presumably to add a module supporting clippiness - just as we might add a CNS module supporting echolocatory experience to understand bat-like sentience - enriching our knowledge rather than shedding it.
What does (super-)intelligence have to do with knowing things that are irrelevant to one's values?
What does knowing everything about airline safety statistics, and nothing else, have to do with intelligence? That sort of thing is called Savant ability -- short for ''idiot savant''.
Why does that matter for the argument?
As long as Clippy is in fact optimizing paperclips, what does it matter what/if he feels while he does it?
Pearce seems to be making a claim that Clippy can't predict creatures with pain/pleasure if he doesn't feel them himself.
Maybe Clippy needs pleasure/pain too be able to predict creatures with pleasure/pain. I doubt it, but fine, grant the point. He can still be a paper clip maximizer regardless.
I fail to comprehend the cause for your confusion. I suggest reading the context again.
Even among philosophers, "moral realism" is a term wont to confuse. I'd be wary about relying on it to chunk your philosophy. For instance, the simplest and least problematic definition of 'moral realism' is probably the doctrine...
minimal moral realism: cognitivism (moral assertions like 'murder is bad' have truth-conditions, express real beliefs, predicate properties of objects, etc.) + success theory (some moral assertions are true; i.e., rejection of error theory).
This seems to be the definition endorsed on SEP's Moral Realism article. But it can't be what you have in mind, since you accept cognitivism and reject error theory. So perhaps you mean to reject a slightly stronger claim (to coin a term):
factual moral realism: MMR + moral assertions are not true or false purely by stipulation (or 'by definition'); rather, their truth-conditions at least partly involve empirical, worldly contingencies.
But here, again, it's hard to find room to reject moral realism. Perhaps some moral statements, like 'suffering is bad,' are true only by stipulation; but if 'punching people in the face causes suffering' is not also true by stipulation, then the conclusion 'punching people in the face is bad' will not be purely stipulative. Similarly, 'The Earth's equatorial circumference is ~40,075.017 km' is not true just by definition, even though we need somewhat arbitrary definitions and measurement standards to assert it. And rejecting the next doesn't sound right either:
correspondence moral realism: FMR + moral assertions are not true or false purely because of subjects' beliefs about the moral truth. For example, the truth-condition for 'eating babies is bad' are not 'Eliezer Yudkowsky thinks eating babies is bad', nor even 'everyone thinks eating babies is bad'. Our opinions do play a role in what's right and wrong, but they don't do all the work.
So perhaps one of the following is closer to what you mean to deny:
moral transexperientialism: Moral facts are nontrivially sensitive to differences wholly independent of, and having no possible impact on, conscious experience. The goodness and badness of outcomes is not purely a matter of (i.e., is not fully fixed by) their consequences for sentients. This seems kin to Mark Johnston's criterion of 'response-dependence'. Something in this vicinity seems to be an important aspect of at least straw moral realism, but it's not playing a role here.
moral unconditionalism: There is a nontrivial sense in which a single specific foundation for (e.g., axiomatization of) the moral truths is the right one -- 'objectively', and not just according to itself or any persons or arbitrarily selected authority -- and all or most of the alternatives aren't the right one. (We might compare this to the view that there is only one right set of mathematical truths, and this rightness is not trivial or circular. Opposing views include mathematical conventionalism and 'if-thenism'.)
moral non-naturalism: Moral (or, more broadly, normative) facts are objective and worldly in an even stronger sense, and are special, sui generis, metaphysically distinct from the prosaic world described by physics.
Perhaps we should further divide this view into 'moral platonism', which reduces morality to logic/math but then treats logic/math as a transcendent, eternal Realm of Thingies and Stuff; v. 'moral supernaturalism', which identifies morality more with souls and ghosts and magic and gods than with logical thingies. If this distinction isn't clear yet, perhaps we could stipulate that platonic thingies are acausal, whereas spooky supernatural moral thingies can play a role in the causal order. I think this moral supernaturalism, in the end, is what you chiefly have in mind when you criticize 'moral realism', since the idea that there are magical, irreducible Moral-in-Themselves Entities that can exert causal influences on us in their own right seems to be a prerequisite for the doctrine that any possible agent would be compelled (presumably by these special, magically moral objects or properties) to instantiate certain moral intuitions. Christianity and karma are good examples of moral supernaturalisms, since they treat certain moral or quasi-moral rules and properties as though they were irreducible physical laws or invisible sorcerors.
At the same time, it's not clear that davidpearce was endorsing anything in the vicinity of moral supernaturalism. (Though I suppose a vestigial form of this assumption might still then be playing a role in the background. It's a good thing it's nearly epistemic spring cleaning time.) His view seems somewhere in the vicinity of unconditionalism -- if he thinks anyone who disregards the interests of cows is being unconditionally epistemically irrational, and not just 'epistemically irrational given that all humans naturally care about suffering in an agent-neutral way'. The onus is then on him and pragmatist to explain on what non-normative basis we could ever be justified in accepting a normative standard.
I'm not sure this taxonomy is helpful from David Pearce's perspective. David Pearce's position is that there are universally motivating facts - facts whose truth, once known, is compelling for every possible sort of mind. This reifies his observation that the desire for happiness feels really, actually compelling to him and this compellingness seems innate to qualia, so anyone who truly knew the facts about the quale would also know that compelling sense and act accordingly. This may not correspond exactly to what SEP says under moral realism and let me know if there's a standard term, but realism seems to describe the Pearcean (or Eliezer circa 1996) feeling about the subject - that happiness is really intrinsically preferable, that this is truth and not opinion.
From my perspective this is a confusion which I claim to fully and exactly understand, which licenses my definite rejection of the hypothesis. (The dawning of this understanding did in fact cause my definite rejection of the hypothesis in 2003.) The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something, so if you try to use your empathy to imagine another mind fully understanding this mysterious opaque data (quale) whose content is actually your internal code for "compelled to do that", you imagine the mind being compelled to do that. You'll be agnostic about whether or not this seems supernatural because you don't actually know where the mysterious compellingness comes from. From my perspective, this is "supernatural" because your story inherently revolves around mental facts you're not allowed to reduce to nonmental facts - any reduction to nonmental facts will let us construct a mind that doesn't care once the qualia aren't mysteriously irreducibly compelling anymore. But this is a judgment I pass from reductionist knowledge - from a Pearcean perspective, there's just a mysteriously compelling quality about happiness, and to know this quale seems identical with being compelled by it; that's all your story. Well, that plus the fact that anyone who says that some minds might not be compelled by happiness, seems to be asserting that happiness is objectively unimportant or that its rightness is a matter of mere opinion, which is obviously intuitively false. (As a moral cognitivist, of course, I agree that happiness is objectively important, I just know that "important" is a judgment about a certain logical truth that other minds do not find compelling. Since in fact nothing can be intrinsically compelling to all minds, I have decided not to be an error theorist as I would have to be if I took this impossible quality of intrinsic compellingness to be an unavoidable requirement of things being good, right, valuable, or important in the intuitive emotional sense. My old intuitive confusion about qualia doesn't seem worth respecting so much that I must now be indifferent between a universe of happiness vs. a universe of paperclips. The former is still better, it's just that now I know what "better" means.)
But if the very definitions of the debate are not automatically to judge in my favor, then we should have a term for what Pearce believes that reflects what Pearce thinks to be the case. "Moral realism" seems like a good term for "the existence of facts the knowledge of which is intrinsically and universally compelling, such as happiness and subjective desire". It may not describe what a moral cognitivist thinks is really going on, but "realism" seems to describe the feeling as it would occur to Pearce or Eliezer-1996. If not this term, then what? "Moral non-naturalism" is what a moral cognitivist says to deconstruct your theory - the self-evident intrinsic compellingness of happiness quales doesn't feel like asserting "non-naturalism" to David Pearce, although you could have a non-natural theory about how this mysterious observation was generated.
I'm not sure he's wrong in saying that feeling the qualia of a sentient, as opposed to modeling those qualia in an affective black box without letting the feels 'leak' into the rest of your cognitionspace, requires some motivational effect. There are two basic questions here:
First, the Affect-Effect Question: To what extent are the character of subjective experiences like joy and suffering intrinsic or internal to the state, as opposed to constitutively bound up in functional relations that include behavioral impetuses? (For example, to what extent is it possible to undergo the phenomenology of anguish without thereby wanting the anguish to stop? And to what extent is it possible to want something to stop without being behaviorally moved, to the extent one is able and to the extent one's other desires are inadequate overriders, to stop it?) Compare David Lewis' 'Mad Pain', pain that has the same experiential character as ordinary pain but none of its functional relations (or at least not the large-scale ones). Some people think a state of that sort wouldn't qualify as 'pain' at all, and this sort of relationalism lends some credibility to pearce's view.
Second, the Third-Person Qualia Question: To what extent is phenomenological modeling (modeling a state in such a way that you, or a proper part of you, experiences that state) required for complete factual knowledge of real-world agents? One could grant that qualia are real (and really play an important role in various worldly facts, albeit perhaps physical ones) and are moreover unavoidably motivating (if you aren't motivated to avoid something, then you don't really fear it), but deny that an epistemically rational agent is required to phenomenologically model qualia. Perhaps there is some way to represent the same mental states without thereby experiencing them, to fully capture the worldly facts about cows without simulating their experiences oneself. If so, then knowing everything about cows would not require one to be motivated (even in some tiny powerless portion of oneself) to fulfill the values of cows. (Incidentally, it's also possible in principle to grant the (admittedly spooky) claim that mental states are irreducible and indispensable, without thinking that you need to be in pain in order to fully and accurately model another agent's pain; perhaps it's possible to accurately model one phenomenology using a different phenomenology.)
And again, at this point I don't think any of these positions need to endorse supernaturalism, i.e., the idea that special moral facts are intervening in the causal order to force cow-simulators, against their will, to try to help cows. (Perhaps there's something spooky and supernatural about causally efficacious qualia, but for the moment I'll continue assuming they're physical states -- mayhap physical states construed in a specific way.) All that's being disputed, I think, is to what extent a programmer of a mind-modeler could isolate the phenomenology of states from their motivational or behavioral roles, and to what extent this programmer could model brains at all without modeling their first-person character.
As a limiting case: Assuming there are facts about conscious beings, could an agent simulate everything about those beings without ever becoming conscious itself? (And if it did become conscious, would it only be conscious inasmuch as it had tiny copies of conscious beings inside itself? Or would it also need to become conscious in a more global way, in order to access and manipulate useful information about its conscious subsystems?)
Incidentally, these engineering questions are in principle distinct both from the topic of causally efficacious irreducible Morality Stuff (what I called moral supernaturalism), and from the topic of whether moral claims are objectively right, that, causally efficacious or not, moral facts have a sort of 'glow of One True Oughtness' (what I called moral unconditionalism, though some might call it 'moral absolutism'), two claims the conjunction of which it sounds like you've been labeling 'moral realism', in deference to your erstwhile meta-ethic. Whether we can motivation-externally simulate experiential states with perfect fidelity and epistemic availability-to-the-simulating-system-at-large is a question for philosophy of mind and computer science, not for meta-ethics. (And perhaps davidpearce's actual view is closer to what you call moral realism than to my steelman. Regardless, I'm more interested in interrogating the steelman.)
So terms like 'non-naturalism' or 'supernaturalism' are too theory-laden and sophisticated for what you're imputing to Pearce (and ex-EY), which is really more of a hunch or thought-terminating-clichéplex. In that case, perhaps 'naïve (moral) realism' or 'naïve absolutism' is the clearest term you could use. (Actually, I like 'magical absolutism'. It has a nice ring to it, and 'magical' gets at the proto-supernaturalism while 'absolutism' gets at the proto-unconditionalism. Mm, words.) Philosophers love calling views naïve, and the term doesn't have a prior meaning like 'moral realism', so you wouldn't have to deal with people griping about your choice of jargon.
This would also probably be a smart rhetorical move, since a lot of people don't see a clear distinction between cognitivism and realism and might be turned off by your ideas qua an anti-realism theory even if they'd have loved them qua a realist theory. 'Tis part of why I tried to taboo the term as 'minimal moral realism' etc., rather than endorsing just one of the definitions on offer.
Eliezer, you remark, "The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something," Would you propose that a mind lacking in motivation couldn't feel blissfully happy? Mainlining heroin (I am told) induces pure bliss without desire - shades of Buddhist nirvana? Pure bliss without motivation can be induced by knocking out the dopamine system and directly administering mu opioid agonists to our twin "hedonic hotspots" in the ventral pallidum and rostral shell of the nucleus accumbens. Conversely, amplifying mesolimbic dopamine function while disabling the mu opioid pathways can induce desire without pleasure.
[I'm still mulling over some of your other points.]
Here we're reaching the borders of my ability to be confident about my replies, but the two answers which occur to me are:
1) It's not positive reinforcement unless feeling it makes you experience at least some preference to do it again - otherwise in what sense are the neural networks getting their plus? Heroin may not induce desire while you're on it, but the thought of the bliss induces desire to take heroin again, once you're off the heroin.
2) The superBuddhist no longer capable of experiencing desire or choice, even desire or choice over which thoughts to think, also becomes incapable of experiencing happiness (perhaps its neural networks aren't even being reinforced to make certain thoughts more likely to be repeated). However, you, who are still capable of desire and who still have positively reinforcing thoughts, might be tricked into considering the superBuddhist's experience to be analogous to your own happiness and therefore acquire a desire to be a superBuddhist as a result of imagining one - mostly on account of having been told that it was representing a similar quale on account of representing a similar internal code for an experience, without realizing that the rest of the superBuddhist's mind now lacks the context your own mind brings to interpreting that internal coding into pleasurable positive reinforcement that would make you desire to repeat that experiential state.
It's a reasonably good description, though wanting and liking seem to be neurologically separate, such that liking does not necessarily reflect a motivation, nor vice-versa (see: Not for the sake of pleasure alone. Think the pleasurable but non-motivating effect of opioids such as heroin. Even in cases in which wanting and liking occur together, this does not necessarily invalidate the liking aspect as purely wanting.
Liking and disliking, good and bad feelings as qualia, especially in very intense amounts, seem to be intrinsically so to those who are immediately feeling them. Reasoning could extend and generalize this.
Heh. Yes, I remember reading the section on noradrenergic vs. dopaminergic motivation in Pearce's BLTC as a 16-year-old. I used to be a Pearcean, ya know, hence the Superhappies. But that distinction didn't seem very relevant to the metaethical debate at hand.
It's possible (I hope) to believe future life can be based on information-sensitive gradients of (super)intelligent well-being without remotely endorsing any of my idiosyncratic views on consciousness, intelligence or anything else. That's the beauty of hedonic recalibration. In principle at least, hedonic recalibration can enrich your quality of life and yet leave most if not all of your existing values and preference architecture intact .- including the belief that there are more important things in life than happiness.
Agreed. The conflict between the Superhappies and the Lord Pilot had nothing to do with different metaethical theories.
Also, we totally agree on wanting future civilization to contain very smart beings who are pretty happy most of the time. We just seem to disagree about whether it's important that they be super duper happy all of the time. The main relevance metaethics has to this is that once I understood there was no built-in axis of the universe to tell me that I as a good person ought to scale my intelligence as fast as possible so that I could be as happy as possible as soon as possible, I decided that I didn't really want to be super happy all the time, the way I'd always sort of accepted as a dutiful obligation while growing up reading David Pearce. Yes, it might be possible to do this in a way that would leave as much as possible of me intact, but why do it at all if that's not what I want?
There's also the important policy-relevant question of whether arbitrarily constructed AIs will make us super happy all the time or turn us into paperclips.
Huh, when I read the story, my impression was that it was Lord Pilot not understanding that it was a case of "Once you go black, you can't go back". Specifically, once you experience being superhappy, your previous metaethics stops making sense and you understand the imperative of relieving everyone of the unimaginable suffering of not being superhappy.
I thought it was relevant to this, if not, then what was meant by motivation?
Consciousness is that of which we can be most certain of, and I would rather think that we are living in a virtual world under an universe with other, alien physical laws, than that consciousness itself is not real. If it is not reducible to nonmental facts, then nonmental facts don't seem to account for everything there is of relevant.
Watch out -- the word "sentient" has at least two different common meanings, one of which includes cattle and the other doesn't. EY usually uses it with the narrower meaning (for which a less ambiguous synonym is "sapient"), whereas David Pearce seems to be using it with the broader meaning.
Ah. By 'sentient' I mean something that feels, by 'sapient' something that thinks.
To be more fine-grained about it, I'd define functional sentience as having affective (and perhaps perceptual) cognitive states (in a sense broad enough that it's obvious cows have them, and equally obvious tulips don't), and phenomenal sentience as having a first-person 'point of view' (though I'm an eliminativist about phenomenal consciousness, so my overtures to it above can be treated as a sort of extended thought experiment).
Similarly, we might distinguish a low-level kind of sapience (the ability to form and manipulate mental representations of situations, generate expectations and generalizations, and update based on new information) from a higher-level kind closer to human sapience (perhaps involving abstract and/or hyper-productive representations à la language).
Based on those definitions, I'd say it's obvious cows are functionally sentient and have low-level sapience, extremely unlikely they have high-level sapience, and unclear whether they have phenomenal sentience.
Rob, many thanks for a thoughtful discussion above. But on one point, I'm confused. You say of cows that it's "unclear whether they have phenomenal sentience." Are you using the term "sentience" in the standard dictionary sense ["Sentience is the ability to feel, perceive, or be conscious, or to experience subjectivity": http://en.wikipedia.org/wiki/Sentience ] Or are you using the term in some revisionary sense? At least if we discount radical philosophical scepticism about other minds, cows and other nonhuman vertebrates undergo phenomenal pain, anxiety, sadness, happiness and a whole bunch of phenomenal sensory experiences. For sure, cows are barely more sapient than a human prelinguistic toddler (though see e.g. http://www.appliedanimalbehaviour.com/article/S0168-1591(03)00294-6/abstract http://www.dailymail.co.uk/news/article-2006359/Moo-dini-Cow-unusual-intelligence-opens-farm-gate-tongue-herd-escape-shed.html ] But their limited capacity for abstract reasoning is a separate issue.
Neither. I'm claiming that there's a monstrous ambiguity in all of those definitions, and I'm tabooing 'sentience' and replacing it with two clearer terms. These terms may still be problematic, but at least their problematicity is less ambiguous.
I distinguished functional sapience from phenomenal sapience. Functional sapience means having all the standard behaviors and world-tracking states associated with joy, hunger, itchiness, etc. It's defined in third-person terms. Phenomenal sapience means having a subjective vantage point on the world; being sapient in that sense means that it feels some way (in a very vague sense) to be such a being, whereas it wouldn't 'feel' any way at all to be, for example, a rock.
To see the distinction, imagine that we built a robot, or encountered an alien species, that could simulate the behaviors of sapients in a skillful and dynamic way, without actually having any experiences of its own. Would such a being necessarily be sapient? Does consistently crying out and withdrawing from some stimulus require that you actually be in pain, or could you be a mindless automaton? My answer is 'yes, in the functional sense; and maybe, in the phenomenal sense'. The phenomenal sense is a bit mysterious, in large part because the intuitive idea of it arises from first-person introspection and not from third-person modeling or description, hence it's difficult (perhaps impossible!) to find definitive third-person indicators of this first-person class of properties.
'Radical philosophical scepticism about other minds' I take to entail that nothing has a mind except me. In other words, you're claiming that the only way to doubt that there's something it's subjectively like to be a cow, is to also doubt that there's something it's subjectively like to be any human other than myself.
I find this spectacularly implausible. Again, I'm an eliminativist, but I'll put myself in a phenomenal realist's shoes. The neural architecture shared in common by humans is vast in comparison to the architecture shared in common between humans and cows. And phenomenal consciousness is extremely poorly understood, so we have no idea what evolutionary function it might serve or what mechanisms might need to be in place before it arises in any recognizable form. So to that extent we must also be extremely uncertain about (a) at what point(s) first-person subjectivity arises phylogenetically, and (b) at what point first-person subjectivity arises developmentally.
This phylogeny-development analogy is very important. If I doubt that cows are phenomenally conscious, I might also doubt that I myself was conscious when I was a baby, or relatively late into my fetushood. That's perhaps a little surprising, but it's hardly a devastating 'radical scepticism'; it's a perfectly tenable hypothesis. By contrast, to doubt that my friends and family members are phenomenally conscious would be like doubting that I myself was phenomenally conscious when I was 5 years old, or when I was 20, or even last month. (Perhaps my phenomenal memories are confabulations.) Equating these two forms of skepticism will require a pretty devastating argument! What do you have in mind?
I suggest that to this array of terms, we should add moral indexicalism to designate Eliezer's position, which by the above definition would be a special form of realism. As far as I can tell, he basically says that moral terms are hidden indexicals in Putnam's sense.
And here we see the value of replacing the symbol with the substance.
This is something I've been meaning to ask about for a while. When humans say it is moral to satisfy preferences, they aren't saying that because they have an inbuilt preference for preference-satisfaction (or are they?). They're idealizing from their preferences for specific things (survival of friends and family, lack of pain, fun...) and making a claim that, ceteris paribus, satisfying preferences is good, regardless of what the preferences are.
Seen in this light, Clippy doesn't seem like quite as morally orthogonal to us as it once did. Clippy prefers paperclips, so ceteris paribus (unless it hurts us), it's good to just let it make paperclips. We can even imagine a scenario where it would be possible to "torture" Clippy (e.g., by burning paperclips), and again, I'm willing to pronounce that (again, ceteris paribus) wrong.
Maybe I am confused here...
Clippy is more of a Lovecraftian horror than a fellow sentient - where by "Lovecraftian" I mean to invoke Lovecraft's original intended sense of terrifying indifference - but if you want to suppose a Clippy that possesses a pleasure-pain architecture and is sentient and then sympathize with it, I suppose you could. The point is that your sympathy means that you're motivated by facts about what some other sentient being wants. This doesn't motivate Clippy even with respect to its own pleasure and pain. In the long run, it has decided, it's not out to feel happy, it's out to make paperclips.
Right, that makes sense. What interests me is (a) whether it is possible for Clippy to be properly motivated to make paperclips without some sort of phenomenology of pleasure and pain*, (b) whether human preference-for-preference-satisfaction is just another of many oddball human terminal values, or is arrived at by something more like a process of reason.
This is a difficult question, but I suppose that pleasure and pain are a mechanism for human (or other species') learning. Simply said: you do a random action, and the pleasure/pain response tells you it was good/bad, so you should make more/less of it again.
Clippy could use an architecture with a different model of learning. For example Solomonoff priors and Bayesian updating. In such architecture, pleasure and pain would not be necessary.
Interesting... I suspect that pleasure and pain are more intimately involved in motivation in general, not just learning. But let us bracket that question.
Right, but that only gets Clippy the architecture necessary to model the world. How does Clippy's utility function work?
Now, you can say that Clippy tries to satisfy its utility function by taking actions with high expected cliptility, and that there is no phenomenology necessarily involved in that. All you need, on this view, is an architecture that gives rise to the relevant clip-promoting behaviour - Clippy would be a robot (in the Roomba sense of the word).
BUT
Consider for a moment how symmetrically "unnecessary" it looks that humans (& other sentients) should experience phenomenal pain and pleasure. Just like is supposedly the case with Clippy, all natural selection really "needs" is an architecture that gives rise to the right fitness-promoting behaviour. The "additional" phenomenal character of pleasure and pain is totally unnecessary for us adaptation-executing robots.
...If it seems to you that I might be talking nonsense above, I suspect you're right. Which is what leads me to the intuition that phenomenal pleasure and pain necessarily fall out of any functional cognitive structure that implements anything analogous to a utility function.
(Assuming that my use of the word "phenomenal" above is actually coherent, of which I am far from sure.)
We know at least two architectures for processing general information: humans and computers. Two data points are not enough to generalize about what all possible architectures must have. But it may be enough to prove what some architectures don't need. Yes, there is a chance that if computers become even more generally intelligent than today, they will gain some human-like traits. Maybe. Maybe not. I don't know. And even if they will gain more human-like traits, it may be just because humans designed them without knowing any other way to do it.
If there are two solutions, there are probably many more. I don't dare to guess how similar or different they are. I imagine that Clippy could be as different from humans and computers, as humans and computers are from each other. Which is difficult to imagine specifically. How far does the mind-space reach? Maybe compared with other possible architectures, humans and computers are actually pretty close to each other (because humans designed the computers, re-using the concepts they were familiar with).
How to taboo "motivation" properly? What makes a rock fall down? Gravity does. But the rock does not follow any alrogithm for general reasoning. What makes a computer follow its algorithm? Well, that's its construction: the processor reads the data, and the data make it read or write other data, and the algorithm makes it all meaningful. The human brains are full of internal conflicts -- there are different modules suggesting different actions, and the reasoning mind is just another plugin which often does not cooperate well with the existing ones. Maybe the pleasure is a signal that a fight between the modules is over. Maybe after millenia of further evolution (if for some magical reason all mind- and body-altering technology would stop working, so only the evolution would change human minds) we would evolve to a species with less internal conflicts, less akrasia, more agency, and perhaps less pleasure and mental pain. This is just a wild guess.
Necessity according to natural law presumably. If you could write something to show logical necessity, you would have solved the Hard Problem
Generalizing from observed characteristics of evolved systems to expected characteristics of designed systems leads equally well to the intuition that humanoid robots will have toenails.
.
I don't think the phenomenal character of pleasure and pain is best explained at the level of natural selection at all; the best bet would be that it emerges from the algorithms that our brains implement. So I am really trying to generalize from human cognitive algorithms to algorithms that are analogous in the sense of (roughly) having a utility function.
Suffice it to say, you will find it's exceedingly hard to find a non-magical reason why non-human cognitive algorithms shouldn't have a phenomenal character if broadly similar human algorithms do.
Does it follow from the above that all human cognitive algorithms that motivate behavior have the phenomenal character of pleasure and pain? If not, can you clarify why not?
I think that probably all human cognitive algorithms that motivate behaviour have some phenomenal character, not necessarily that of pleasure and pain (e.g., jealousy).
That leaves the sense in which you are not a moral realist most unclear.
That tacitly assumes that the question "does pleasure/happiness motivate posiively in all cases" is an emprical question -- that it would be possible to find an enitity that hates pleasure and loves pain. it could hover be plausibly argued that it is actually an analytical, definitional issue...that is some entity oves X and hates Y, we would just call X it's pleasure and Y its pain.
I suppose some non-arbitrary subjectivism is the obvious answer.