Oh, dang it.
HA:
Those are interesting empirical questions. Why jump to the conclusion?
I didn't claim it was a proof that some sort of algorithm was running; but given the overall increased effectiveness at maximizing utility that seems to come with the experience of deliberation, I'd say it's a very strongly supported hypothesis. (And to abuse a mathematical principle, the Church-Turing Thesis lends credence to the hypothesis: you can't consistently compete with a good algorithm unless you're somehow running a good algorithm.)
Do you have a specific hypothesis you think is better, or specific evidence that contradicts the hypothesis that some good decision algorithm is generally running during a deliberation?
Also, I think it'll be instructive to check the latest neuroscience research on them. We no longer need to go straight to our intuitions as a beginning and end point.
Oh, I agree, and I'm fascinated too by modern neuroscientific research into cognition. It just seems to me that what I've read supports the hypothesis above.
I wonder if you're bothered by Eliezer's frequent references to our intuitions of our cognition rather than sticking to a more outside view of it. It seems to me that his picture of "free will as experience of a decision algorithm" does find support from the more objective outside view, but that he's also trying to "dissolve the question" for those whose intuitions of introspection make an outside account "feel wrong" at first glance. It doesn't seem that's quite the problem for you, but it's enough of a problem for others that I think he's justified in spending time there.
Secondly, an illusion/myth/hallucination may be that you have the ultimate capacity to choose between "deliberation" (running some sort of decision tree/algorithm) and a random choice process in each given life instance...
Again, I don't think that anyone actually chooses randomly; even the worst decisions come out with far too much order for that to be the case. There is a major difference in how aware people are of their real deliberations (which chiefly amounts to how honest they are with themselves), and those who seem more aware tend to make better decisions and be more comfortable with them. That's a reason why I choose to try and reflect on my own deliberations and deliberate more honestly.
I don't need some "ultimate capacity" to not-X in order for X to be (or feel like, if you prefer) my choice, though; I just need to have visualized the alternatives, seen no intrinsic impediments and felt no external constraints. That's the upshot of this reinterpretation of free will, which both coincides with our feeling of freedom and doesn't require metaphysical entities.
Usually I don't talk about "free will" at all, of course! That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition.
Boy, have we ever seen that illustrated in the comments on your last two posts; just replace "know" with "care". I think people have been reading their own interpretations into yours, which is a shame: your explanation as the experience of a decision algorithm is more coherent and illuminating than my previous articulation of the feeling of free will (i.e. lack of feeling of external constraint). Thanks for the new interpretation.
Hopefully Anonymous:
If I understand you correctly on calling the feeling of deliberation an epiphenomenon, do you agree that those who report deliberating on a straightforward problem (say, a chess problem) tend to make better decisions than those who report not deliberating on it? Then it seems that some actual decision algorithm is operating, analogously to the one the person claims to experience.
Do you then think that moral deliberation is characteristically different from strategic deliberation? If so, then I partially agree, and I think this might be the crux of your objection: that in moral decisions, we often hide our real objectives from our conscious selves, and look to justify those hidden motives. While in chess, there's very little sense of "looking for a reason to move the rook" as a high priority, the sort of motivated cognition this describes is pretty ubiquitous in human moral decision.
However, what I think Eliezer might reply to this is that there still is a process of deliberation going on; the ultimate decision does tend to achieve our goals far better than a random decision, and that's best explained by the running of some decision algorithm. The fact that the goals we pursue aren't always the ones we state, even to ourselves, doesn't prevent this from being a real deliberation; it just means that our experience of the deliberation is false to the reality of it.
If that was ambiguous, I meant that the falsehood was the positing of an "I" separate from the patterns of physical evolution of the brain.
...I actually can't see how the world would be different if I do have free will or if I don't. (Stephen Weeks)
In order for you to have free will, there has to be a "you" entity in the first place. . . (Matthew C.)
I have an idea where Eliezer is going with this, and I think the above comments are helpful in it.
Seems to me that the reason people intuitively feel there must be some such thing as free will is that there's a basic notion of free vs. constrained in social life, and that we project physical causality of our thoughts to be of the same form.
That is, we tend to think of physical determinism (or probabilistic determinism if we understand it) as if it were the same sort of thing as the way American law constrains our actions, or the way a psychopath holding a gun to our head would do the same. In either case, we can separate the self from the external constraint, and we directly feel that constraint. The fact that our thought processes don't feel constrained by an external agent, then, seems to indicate that they are free from any (deterministic or even probabilistic) necessity.
The falsehood here, as I see it, is that there is no "I" separate from the thoughts, emotions, actions, etc. that are all subject to the physical evolution of my brain; there's no separate thing which is "forced" to go along for the ride. But until we begin to really grasp that (and realize that Descartes was simply wrong in what he thought "Cogito, ergo sum" meant for the self), we have the false dilemma of "free will" versus "physics made me do it".
David,
You're right not to feel a 'blow to your immortality' should that happen; but consider an alternate story:
You step into the teleport chamber on Earth and, after a weird glow surrounds you, you step out on Mars feeling just fine and dandy. Then somebody tells you that there was a copy of you left in the Earth booth, and that the copy was just assassinated by anti-cloning extremists.
The point of the identity post is that there's really no difference at all between this story and the one you just told, except that in this story you subjectively feel you've traveled a long way instead of staying in the booth on Earth.
Both of the copies are you (or, more precisely, before you step into the booth each copy is a future you); and to each copy, the other copy is just a clone that shares their memories up to time X.
Dave,
Well, if you resolve not to sign up for cryonics and if the thinking on Quantum Immortality is correct, you might expect a series of weird (and probably painful) events to prevent you indefinitely from dying; while if you're signed up for it, the vast majority of the worlds containing a later "you" will be the ones revived after a peaceful death. So there's a big difference in the sort of experience you might anticipate, depending on whether you've signed up.
Hang on, the automated manufacturing plant isn't quite what I mean by an optimization process of this sort. The "specialized intelligences" being discussed better fit the bill: something with strong optimizing powers but unambitious goals.
Caledonian,
Oh, sure, ant colonies are optimization processes too. But there are a few criteria by which we can distinguish the danger of an ant colony from the danger of a human from the danger of an AGI. For example:
(1) How powerful is the optimization process: how tiny is the target it can achieve? A sophisticated spambot might reliably achieve proper English sentences, but I work towards a much smaller target (namely, a coherent conversation) which the spambot couldn't reliably hit.
Not counting the production of individual ants (which is the result of a much larger optimization process of evolution), the ant colony is able to achieve a certain social structure in the colony and to establish the same in a new colony. That's nice, but not really as powerful as it gets when compared to humans painting the Mona Lisa or building rockets.
(2) What are the goals of the process? An automated automobile plant is pretty powerful at hitting a small target (a constructed car of a particular sort, out of raw materials), but we don't worry about it because there's no sense in which the plant is trying to expand, reproduce itself, threaten humans, etc.
(3) Is the operation of the process going to change either of the above? This is, so far, only partially true for some advanced biological intelligences and some rudimentary machine ones (not counting the slow improvements of ant colonies under evolution); but a self-modifying AI has the potential to alter (1) and (2) dramatically in a short period of time.
Can you at least accept that a smarter-than-human AI able to self-modify would exceed anything we've yet seen on properties (1) and (3)? That's why the SIAI hopes to get (2) right, even given (3).
Eliezer,
Every time I think you're about to say something terribly naive, you surprise me. It looks like trying to design an AI morality is a good way to rid oneself of anthropomorphic notions of objective morality, and to try and see where to go from there.
Although I have to say the potshot at Nietzsche misses the mark; his philosophy is not a resignation to meaninglessness, but an investigation of how to go on and live a human or better-than-human life once the moral void has been recognized. I can't really explicate or defend him in such a short remark, but I'll say that most of the people who talk about Nietzsche (including, probably, me) read their own thoughts into his; for that reason, be wary of dismissing him before reading any of his major works.