From John Danaher's review:
Of course, the Humean theory may be false and so Bostrom wisely avoids it in his defence of the orthogonality thesis.
I had the opposite reaction. The Humean theory of motivation is correct, and I see no reason to avoid tying the orthogonality thesis to it. To me, Bostrom's distancing of the orthogonality thesis from Humean motivation seemed like splitting hairs. Since how strong a given motivation is can only be measured relative to other motivations, Bostrom's point that an agent could have very strong motivations not arisin...
what matters for Bostrom’s definition of intelligence is whether the agent is getting what it wants
This brings up another way - comparable to the idea that complex goals may require high intelligence - in which the orthogonality thesis might be limited. I think that the very having of wants itself requires a certain amount of intelligence. Consider the animal kingdom, sphexishness, etc. To get behavior that clearly demonstrates what most people would confidently call "goals" or "wants", you have to get to animals with pretty subst...
What are other examples of possible motivating beliefs? I find the examples of morals incredibly non-convincing (as in actively convincing me of the opposite position).
Here are a few examples I think might count. They aren't universal, but they do affect humans:
Realizing neg-entropy is going to run out and the universe will end. An agent trying to maximize average-utility-over-time might treat this as a proof that the average is independent of its actions, so that it assigns a constant eventual average utility to all possible actions (meaning what it does
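To spell out the averaging point, here is a minimal formal sketch, assuming utility u_t is bounded and drops to zero once the universe ends at some finite time T_end (the notation is illustrative, not from the comment):

\[ \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} u_t \;=\; \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T_{\mathrm{end}}} u_t \;=\; 0 . \]

Since u_t = 0 for t > T_end, the remaining finite sum is divided by an ever-larger T, so the limit is zero regardless of which actions are taken. An agent maximizing average utility over time would then be indifferent between all of its options.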
What cognitive skills do moral realists think you need for moral knowledge? Is it sufficient to be really good at prediction and planning?
One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend. For instance, an agent capable of a more detailed model of the world might tend to perceive more useful ways to interact with the world, and so be more intelligent. It should also be able to represent preferences which wouldn't have made sense in a simpler model.
This section presents and explains the orthogonality thesis, but doesn't provide much argument for it. Should the proponents or critics of such a view be required to make their case?
There is more than one version of the orthogonality thesis. It is trivially false under some interpretations and trivially true under others; the distinction matters because only some versions can serve as a step in an argument towards Yudkowskian UFAI.
It is admitted from the outset that some versions of the OT are not logically possible, namely those that involve a Gödelian or Löbian contradiction.
It is also admitted that the standard OT does not deal with any dynamic or developmental aspects of agents. However, the UFAI argument is predicated on agents which have stable goals and the ability to self-improve, so trajectories in mindspace are crucial.
Goal stability is not a given: it is not possessed by all mental architectures, and may not be possessed by any, since no one knows how to engineer it, and humans appear not to have it. It is plausible that an agent would desire to preserve its goals, but the desire to preserve goals does not imply the ability to preserve goals. As far as we can tell, then, no goal-stable system of any complexity exists on this planet, and goal stability cannot be assumed as a default or a given.
Self-improvement is likewise not a given, since the long and disappointing history of AGI research is largely a history of failure to achieve adequate self-improvement. Algorithm-space is densely populated with non-self-improvers.
An orthogonality claim of a kind relevant to UFAI must be one that posits the stable and continued co-existence of an arbitrary set of values in a self-improving AI. However, the version of the OT that is obviously true is one that maintains only the momentary co-existence of arbitrary values and an arbitrary level of intelligence.
We have noted that goal stability and self-improvement, taken separately, may well be rare in mindspace. Furthermore, it is not clear that arbitrary values are compatible with long-term self-improvement as a combination: a learning, self-improving AI will not be able to guarantee that a given self-modification keeps its goals unchanged, since doing so involves the relatively dumber version at time T1 making an accurate prediction about the more complex version at time T2. This has been formalised into a proof that less powerful formal systems cannot predict the abilities of more powerful ones.
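For reference, the result usually invoked here is Löb's theorem; the statement below is a standard formulation, added for context rather than quoted from the comment. For a sufficiently strong formal system S with provability predicate \Box:

\[ \text{if } S \vdash \Box P \rightarrow P \text{ for some sentence } P, \text{ then } S \vdash P . \]

Contrapositively, S can only prove the reflection principle \Box P \rightarrow P for sentences it already proves, so an agent reasoning in S cannot establish the blanket soundness of its own proofs, or those of an identical successor. This is the obstacle at work in the quoted passage below.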
From Squark's article:
http://lesswrong.com/lw/jw7/overcoming_the_loebian_obstacle_using_evidence/
"Suppose you're trying to build a self-modifying AGI called "Lucy". Lucy works by considering possible actions and looking for formal proofs that taking one of them will increase expected utility. In particular, it has self-modifying actions in its strategy space. A self-modifying action creates essentially a new agent: Lucy2. How can Lucy decide that becoming Lucy2 is a good idea? Well, a good step in this direction would be proving that Lucy2 would only take actions that are "good". I.e., we would like Lucy to reason as follows "Lucy2 uses the same formal system as I, so if she decides to take action a, it's because she has a proof p of the sentence s(a) that 'a increases expected utility'. Since such a proof exits, a does increase expected utility, which is good news!" Problem: Lucy is using L in there, applied to her own formal system! That cannot work! So, Lucy would have a hard time self-modifying in a way which doesn't make its formal system weaker. As another example where this poses a problem, suppose Lucy observes another agent called "Kurt". Lucy knows, by analyzing her sensory evidence, that Kurt proves theorems using the same formal system as Lucy. Suppose Lucy found out that Kurt proved theorem s, but she doesn't know how. We would like Lucy to be able to conclude s is, in fact, true (at least with the probability that her model of physical reality is correct). "
Squark thinks that goal-stable self-improvement can be rescued by probabilistic reasoning. I would rather explore the consequences of goal instability.
An AI that opts for goal stability over self-improvement will probably not become smart enough to be dangerous.
An AI that opts for self-improvement over goal stability might visit paperclipping, or any of a large number of other goals, on its random walk. However, paperclippers aren't dangerous unless they are fairly stable paperclippers. An AI that paperclips for a short time is no threat: the low-hanging fruit is to just buy paperclips, or make them out of steel.
Would an AI evolve into goal stability? Something as arbitrary as paperclipping is a very poor candidate for an attractor. The good candidates are quasi-evolutionary goals that promote survival and reproduction. That doesn't strongly imply friendliness, but inasmuch as it implies unfriendliness, it implies a kind we are familiar with, being outcompeted for resources by entities with a drive for survival, not the alien, Lovecraftian horror of the paperclipper scenario.
(To backtrack a little: I am not arguing that goal instability is particularly likely. I can't quantify the proportion of AIs that will opt for the conservative approach of not self-modifying.)
Goal stability is a prerequisite for MIRI's favoured method of achieving AI safety, but it is also a prerequisite for MIRI's favourite example of unsafe AI, the paperclipper, so its loss does not appear to make AI more dangerous.
If goal stability is unavailable to AIs, or at least to the potentially dangerous ones -- we don't have to worry too much about the non-improvers -- then the standard MIRI solution of solving friendliness, and coding it in as unupdateable goals, is unavailable. That is not entirely bad news, as the approach based on rigid goals is quite problematic. It entails having to get something exactly right the first time, which is not a situation you want to be in if you can avoid it -- particularly when the stakes are so high.
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the ninth section in the reading guide: The orthogonality of intelligence and goals. This corresponds to the first section in Chapter 7, 'The relation between intelligence and motivation'.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: 'The relation between intelligence and motivation' (p105-8)
Summary
Another view
John Danaher at Philosophical Disquisitions starts a series of posts on Superintelligence with a somewhat critical evaluation of the orthogonality thesis, in the process contributing a nice summary of nearby philosophical debates. Here is an excerpt, entitled 'is the orthogonality thesis plausible?':
Notes
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about instrumentally convergent goals. To prepare, read 'Instrumental convergence' from Chapter 7. The discussion will go live at 6pm Pacific time next Monday November 17. Sign up to be notified here.