Followup to: Possibility and Could-ness, The Ultimate Source

Brandon Reinhart wrote:

I am "grunching." Responding to the questions posted without reading your answer. Then I'll read your answer and compare. I started reading your post on Friday and had to leave to attend a wedding before I had finished it, so I had a while to think about my answer.

Brandon, thanks for doing this.  You've provided a valuable illustration of natural lines of thought.  I hope you won't be offended if, for educational purposes, I dissect it in fine detail.  This sort of dissection is a procedure I followed with Marcello to teach thinking about AI, so no malice is intended.

Can you talk about "could" without using synonyms like "can" and "possible"?

When we speak of "could" we speak of the set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f.

(Emphases added.)

I didn't list "realizable" explicitly as Tabooed, but it refers to the same concept as "could".  Rationalist's Taboo isn't played against a word list, it's played against a concept list.  The goal is to force yourself to reduce.

Because "follows" links two worlds, and the linkage is exactly what seems confusing, a word like "follows" is also dangerous.

Think of it as being like trying to pick up something very slippery.  You have to prevent it from squeezing out of your hands.  You have to prevent the mystery from scurrying away and finding a new dark corner to hide in, as soon as you flip on the lights.

So letting yourself use a word like "realizable", or even "follows", is giving your mind a tremendous opportunity to Pass the Recursive Buck - which anti-pattern, be it noted in fairness to Brandon, I hadn't yet posted on.

If I was doing this on my own, and I didn't know the solution yet, I would also be marking "initial", "starting", and "operated on".  Not necessarily at the highest priority, but just in case they were hiding the source of the confusion.  If I was being even more careful I would mark "physical laws" and "world".

So when we say "I could have turned left at the fork in the road." "Could" refers to the set of realizable worlds that follow from an initial starting world A in which we are faced with a fork in the road, given the set of physical laws. We are specifically identifying a sub-set of [A']: that of the worlds in which we turned left.

One of the anti-patterns I see often in Artificial Intelligence, and I believe it is also common in philosophy, is inventing a logic that takes as a primitive something that you need to reduce to pieces.

To your mind's eye, it seems like "could-ness" is a primitive feature of reality.  There's a natural temptation to describe the properties that "could-ness" seems to have, and make lists of things that are "could" or "not-could".  But this is, at best, a preliminary step toward reduction, and you should be aware that it is at best a preliminary step.

The goal is to see inside could-ness, not to develop a modal logic to manipulate primitive could-ness.

But seeing inside is difficult; there is no safe method you know you can use to see inside.

And developing a modal logic seems like it's good for a publication, in philosophy.  Or in AI, you manually preprogram a list of which things have could-ness, and then the program appears to reason about it.  That's good for a publication too.

This does not preclude us from making mistakes in our use of could. One might say "I could have turned left, turned right, or started a nuclear war." The option "started a nuclear war" may simply not be within the set [A']. It wasn't physically realizable given all of the permutations that result from applying our physical laws to our starting world.

Your mind tends to bounce off the problem, and has to be constrained to face it - like your mind itself is the slippery thing that keeps squeezing out of your hands.

It tries to hide the mystery somewhere else, instead of taking it apart - draw a line to another black box, releasing the tension of trying to look inside the first black box.

In your mind's eye, it seems, you can see before you the many could-worlds that follow from one real world.

The real answer is to resolve a Mind Projection Fallacy; physics follows a single line, but your search system, in determining its best action, has to search through multiple options not knowing which it will make real, and all the options will be labeled as reachable in the search.

So, given that answer, you can see how talking about "physically realizable" and "permutations(?) that result from applying physical laws" is a bounce-off-the-problem, a mere-logic, that squeezes the same unpenetrated mystery into "realizable" and "permutations".
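To make the intended reduction concrete, here is a minimal sketch of that answer - a planner that generates options by forward search, labels each generated option as reachable, and only afterward judges them and makes one of them real. The toy states, transition model, and utilities below are invented for illustration; they are not from the original post.

```python
# Minimal illustrative sketch: "could" as a label the search process attaches
# to options it generates, before judging them and maximizing.
# The states and numbers below are assumptions made up for this example.

def successors(state):
    """Toy transition model: states one action-step away from `state`."""
    return {
        "at_fork_in_road": ["turned_left", "turned_right"],
    }.get(state, [])

def utility(state):
    """Toy preference ordering; the numbers are arbitrary."""
    return {"turned_left": 1.0, "turned_right": 0.5}.get(state, 0.0)

def choose_action(state):
    # Every option the search generates gets the "reachable" (could) label...
    reachable = successors(state)
    # ...and only then are the options judged, so that exactly one is made real.
    return max(reachable, key=utility) if reachable else None

print(choose_action("at_fork_in_road"))  # -> 'turned_left'
```

Physics runs through a single line; the plurality lives only in the list the searcher built before choosing.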

If our physical laws contain no method for implementing free will and no randomness, [A'] contains only the single world that results from applying the set of physical laws to A. If there is randomness or free will, [A'] contains a broader collection of worlds that result from applying physical laws to A...where the mechanisms of free will or randomness are built into the physical laws.

Including a "mechanism of free will" into the model is a perfect case of Passing the Recursive Buck.

Think of it from the perspective of Artificial Intelligence.  Suppose you were writing a computer program that would, if it heard a burglar alarm, conclude that the house had probably been robbed.  Then someone says, "If there's an earthquake, then you shouldn't conclude the house was robbed."  This is a classic problem in Bayesian networks with a whole deep solution to it in terms of causal graphs and probability distributions... but suppose you didn't know that.

You might draw a diagram for your brilliant new Artificial General Intelligence design, that had a "logical reasoning unit" as one box, and then a "context-dependent exception applier" in another box with an arrow to the first box.

So you would have convinced yourself that your brilliant plan for building AGI included a "context-dependent exception applier" mechanism.  And you would not discover Bayesian networks, because you would have prematurely marked the mystery as known.
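For contrast, here is a minimal sketch of the Bayesian-network treatment of the alarm example (the "explaining away" effect), computed by brute-force enumeration. The probability numbers are made up for illustration, not taken from the post.

```python
# Minimal illustrative sketch of "explaining away" in the alarm example.
# All probabilities are invented for the illustration.
from itertools import product

P_B = 0.001            # P(burglary)
P_E = 0.002            # P(earthquake)
P_ALARM = {            # P(alarm | burglary, earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}

def p_burglary(alarm=True, earthquake=None):
    """P(burglary | evidence), by enumerating the joint distribution."""
    num = den = 0.0
    for b, e in product([True, False], repeat=2):
        if earthquake is not None and e != earthquake:
            continue
        p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
        p *= P_ALARM[(b, e)] if alarm else 1 - P_ALARM[(b, e)]
        den += p
        if b:
            num += p
    return num / den

print(p_burglary(alarm=True))                    # ~0.37: the alarm makes burglary plausible
print(p_burglary(alarm=True, earthquake=True))   # ~0.003: the earthquake explains the alarm away
```

No separate "context-dependent exception applier" box is needed: the exception falls out of the probability distribution itself.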

I don't mean "worlds" in the quantum mechanics sense, but as a metaphor for resultant states after applying some number of physical permutations to the starting reality.

"Permutations"?  That would be... something that results in several worlds, all of which have the could-property?  But where does the permuting come from?  How does only one of the could-worlds become real, if it is a matter of physics?  After you ask these questions you realize that you're looking at the same problem as before, which means that saying "permutations" didn't help reduce it.

Why can a machine practice free will? If free will is possible for humans, then it is a set of properties or functions of the physical laws (described by them, contained by them in some way) and a machine might then implement them in whatever fashion a human brain does. Free will would not be a characteristic of A or [A'], but the process applied to A to reach a specific element of [A'].

Again, if you remember that the correct answer is "Forward search process that labels certain options as reachable before judging them and maximizing", you can see the Mind Projection Fallacy on display in trying to put the could-ness property into basic physics.

So...I think I successfully avoided using reference to "might" or "probable" or other synonyms and closely related words.

Now I'll read your post to see if I'm going the wrong way.

Afterward, Brandon posted:

Hmm. I think I was working in the right direction, but your procedural analogy let you get closer to the moving parts. But I think "reachability" as you used it and "realizable" as I used it (or was thinking of it) seem to be working along similar lines.

I hate to have to put it this way, because it seems harsh: but it's important to realize that, no, this wasn't working in the right direction.

Again to be fair, Marcello and I used to generate raw material like this on paper - but it was clearly labeled as raw material; the point was to keep banging our heads on opaque mysteries of cognition, until a split opened up that helped reduce the problem to smaller pieces, or looking at the same mystery from a different angle helped us get a grasp on at least its surface.

Nonetheless:  Free will is a Confusing Problem.  It is a comparatively lesser Confusing Problem but it is still a Confusing Problem.  Confusing Problems are not like the cheap damn problems that college students are taught to solve using safe prepackaged methods.  They are not even like the Difficult Problems that mathematicians tackle without knowing how to solve them.  Even the simplest Confusing Problem can send generations of high-g philosophers wailing into the abyss.  This is not high school homework, this is beisutsukai monastery homework.

So you have got to be extremely careful.  And hold yourself, not to "high standards", but to your best dream of perfection.  Part of that is being very aware of how little progress you have made.  Remember that one major reason why AIfolk and philosophers bounce off hard problems and create mere modal logics, is that they get a publication and the illusion of progress.  They rewarded themselves too easily.  If I sound harsh in my criticism, it's because I'm trying to correct a problem of too much mercy.

They overestimated how much progress they had made, and of what kind.  That's why I'm not giving you credit for generating raw material that could be useful to you in pinning down the problem.  If you'd said you were doing that, I would have given you credit.

I'm sure that some people have achieved insight by accident from their raw material, so that they moved from the illusion of progress to real progress.  But that sort of thing cannot be left to accident. More often, the illusion of progress is fatal: your mind is happy, content, and no longer working on the difficult, scary, painful, opaque, not-sure-how-to-get-inside part of the mystery.

Generating lots of false starts and dissecting them is one methodology for working on an opaque problem.  (Instantly deadly if you can't detect false starts, of course.)  Yet be careful not to credit yourself too much for trying!  Do not pay yourself for labor, only results!  To run away from a problem, or bounce off it into easier problems, or to convince yourself you have solved it with a black box, is common.  To stick to the truly difficult part of a difficult problem, is rare.  But do not congratulate yourself too much for this difficult feat of rationality; it is only the ante you pay to sit down at the high-stakes table, not a victory.

The only sign-of-success, as distinguished from a sign-of-working-hard, is getting closer to the moving parts.

And when you are finally unconfused, of course all the black boxes you invented earlier, will seem in retrospect to have been "driving in the general direction" of the truth then revealed inside them.  But the goal is reduction, and only this counts as success; driving in a general direction is easy by comparison.

So you must cultivate a sharp and particular awareness of confusion, and know that your raw material and false starts are only raw material and false starts - though it's not the sort of thing that funding agencies want to hear.  Academia creates incentives against the necessary standard; you can only be harsh about your own progress, when you've just done something so spectacular that you can be sure people will smile at your downplaying and say, "What wonderful modesty!"

The ultimate slippery thing you must grasp firmly until you penetrate is your mind.

Comments:

I seem to be unable to view the referenced comment.

Hmm, no replies after all this time?

Typepad splits lots of comments over pages, for me. Try going to the second page.

"The real answer is to resolve a Mind Projection Fallacy; physics follows a single line, but your search system, in determining its best action, has to search through multiple options not knowing which it will make real, and all the options will be labeled as reachable in the search."

This is rather silly. You could replace "physics" with "time", or "causality", or something like it, and the fallacy is obvious. All one knows about physics is that it has always followed a single line in some specific situations. In some others, as simple as statistical mechanics, this "line" gets really blurred. You seem to argue, in the other post, that counterfactuals are a mental construct used for reasoning, but actually unreal, because, you know, a counterfactual never happened.

No one is saying could-able things have happened (or at least I hope no one is). This looks an awful lot like some generalized hindsight bias when looking at the universe. My take is you still don't understand the concept of possibility (and I'm not claiming I do, by the way); reducing it to symbolic reasoning doesn't make it any clearer (although, granted, it does explain one curious fact about people: that it's very easy to talk about things "wanting" and "desiring" and "thinking" when these things are following deterministic tracks. Think of how most people describe countries, or companies, or water).

Moreover, your previous post has done nothing to dispel the illusion that free will is this extra-natural god-given gift, or something like it. It hasn't even completely reduced it to determinism, in my opinion.

Also, Eliezer, I really preferred the older posts, recently the writing here is too dogmatic for me.

What Alexandre said. It may be that physics is deterministic: but implying that this is logically necessary, since the merely possible does not happen, by definition, doesn't seem reasonable to me.

Alexandre Passos, Unknown (and Caledonian too, from a previous thread):

Eliezer has already stated that he's taking a deterministic many worlds interpretation of reality as a premise (and explained at some length why he does so in the QM series). If you disagree with that premise, of course the conclusions do not necessarily follow.

I'm not defending the assumption of determinism -- but I am saying that a criticism of the argument that flows from Eliezer's premises would be more apposite and interesting than essentially posting over and over again, "Nuh uh! What if the universe isn't deterministic, huh?"

I'm sorry, I'm probably just cranky in the morning. I'll go drink some coffee and then start regretting posting this.

I took a different route on the "homework".

My thought was that "can" is a way of stating your strength in a given field, relative to some standard. "I can speak Chinese like a native" is saying "My strength in Chinese is equal to the standard of a native level Chinese speaker." "Congress can declare war" means "Congress' strength in the system of American government is equal to the strength needed to declare war."

Algorithmically, it would involve calculating your own strength in a field, and then calculating the minimum standard needed to do something. So an AI might examine all the Chinese dictionaries and grammars that had been programmed into it, estimate its Chinese skills, estimate the level of Chinese skills of a native speaker, and then compare them to see whether it could say "I can speak Chinese like a native."
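A tiny sketch of the comparison this comment describes (my paraphrase, with invented numbers standing in for the skill estimates):

```python
# Sketch of "can" as strength-versus-standard; the scale and numbers are invented.

def can(estimated_skill, required_standard):
    """'I can X' = my estimated strength in the field meets the standard X demands."""
    return estimated_skill >= required_standard

my_chinese_skill = 0.3          # assumed self-estimate on a 0-1 scale
native_speaker_standard = 0.9   # assumed standard for "like a native"

print(can(my_chinese_skill, native_speaker_standard))  # -> False
```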

This is different enough from Eliezer's solution and from what everyone else is talking about that I'd appreciate it if someone could critique it and tell me whether I made something into a primitive inappropriately, or, if I've missed a point, exactly which one it was and where I missed it.

I approached it similarly (as part of a more general attempt, since this is a minor use of the word), positing that "I could lift that box over there" was a comparison of the physical prowess necessary to complete the task and the amount I currently possess. In Eliezer's formulation, this is equivalent to determining reachability with constraints, but it's more of an example of the general procedure than an explanation of it, unfortunately. I'm glad to see that someone else was thinking similarly though.

Alexandre Passos, Unknown,

you can believe in any manner of things; why not in intelligent falling while you're at it? http://en.wikipedia.org/wiki/Intelligent_falling

The question is not what one can or can't believe, the question is: where does the evidence point to? And where are you ignoring evidence because you would prefer one answer to another?

Let evidence guide your beliefs, not beliefs guide your appraisal of evidence.

I'm certainly not offended you used my comment as an example. I post my thoughts here because I know no one physically local to me that holds an interest in this stuff and because working the problems...even to learn I'm making the same fundamental mistakes I was warned to watch for...helps me improve.

I think that the "could" idea does not need to be confined to the process of planning future actions.

Suppose we think of the universe as a large state transition matrix, with some states being defined as intervals because of our imperfect knowledge of them. Then, any state in the interval is a "possible state" in the sense that it is consistent with our knowledge of the world, but we have no way to verify that this is in fact the actual state.

Now something that "could" happen corresponds to a state that is reachable from any of the "possible states" using the state transition matrix (in the linear systems sense of reachable). This applies to the world outside ("A meteor could hit me at any moment") or to my internal state ("I could jump off a cliff") in the sense that given my imperfect knowledge of my own state and other factors, the jump-off-a-cliff state is reachable from this fuzzy cloud of states.
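A minimal sketch of the picture this comment describes - an uncertain current state (a set of states consistent with our knowledge) plus a transition relation, with "could happen" meaning reachable from any state in that set. The toy states and relation are invented for illustration:

```python
# Sketch of "could" as reachability from a fuzzy cloud of possible states.
# The transition relation and state names are invented for this example.

transitions = {
    "walking_home":      {"walking_home", "standing_at_cliff"},
    "standing_at_cliff": {"standing_at_cliff", "jumped_off_cliff"},
    "jumped_off_cliff":  set(),
}

def reachable(possible_states, steps=2):
    """All states reachable within `steps` transitions from any possible state."""
    frontier = set(possible_states)
    seen = set(frontier)
    for _ in range(steps):
        frontier = {t for s in frontier for t in transitions.get(s, ())}
        seen |= frontier
    return seen

could = reachable({"walking_home", "standing_at_cliff"})
print("jumped_off_cliff" in could)   # -> True: it "could" happen
```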

Pushing the mystery into the territory has another negative consequence: if you claim that mystery resides in the territory, you still need to explain how you know it. In our case, if you say that could-worlds exist in reality, you need to explain the magical powers of the mind to both observe the real could-worlds from the present, and to order the universe to take the path you chose.

Eliezer, if you became Reality, if you were as such a causa sui closed system, and if you were still able to change (or transcend yourself), then you might prefer analyzing various abstract choices of how to induct yourself without being able to predict in advance how you would, to trivially riding the wave of a fixed algorithmic complexity. If so, it doesn't seem this possibilistic idea of Be Acquainted Only With The Necessary would be much help.

Brilliant!

I really like this quote, but not because of its sexual connotations :) "The ultimate slippery thing you must grasp firmly until you penetrate is your mind."

There was a mathematician who developed a method for solving hard problems. Instead of attacking the problem frontally (trying to crack the nut), he started to build a framework around it, creating all kinds of useful mathematical abstractions, gradually dissolving the question (the nut) until the solution became evident.

This ties in with: "And you would not discover Bayesian networks, because you would have prematurely marked the mystery as known."

I guess that Bayesian networks, or at least Bayesian thinking, were invented before this application to AI. After that it was just one inferential step to apply this to AI. What if Bayesianism hadn't been invented yet? Would it make sense to bang your head against the problem, hoping to find a solution? In the same vein, I have the suspicion that many of the remaining problems in AI might be too many inferential steps away to solve directly. In that case there would be a need to improve the surrounding knowledge first.

Strictly, it should be "a mechanism for incompatibilist free will." Free will may or may not be compatible with physical determinism, depending on exactly how you formulate it. Formulations which object to physical determinism are dubbed "incompatibilist."

As a compatibilist, I would assert that the mind does have mechanisms for free will and nonetheless there may be only one possible universe at T+epsilon given a state of the universe at T.

"keep banging our heads [...] until a split opened up"

Ow.

"In your mind's eye, it seems, you can see before you the many could-worlds that follow from one real world." Isn't it exactly what many-worlds interpretation does to QM (to keep it deterministic, yada-yada-yada; to be fair, Brandon specifically stated he is not considering the QM sense, but I am not sure the sense he suggested himself is distinct)? There are worlds that are (with not-infinitesimally-low probability-mass) located in the future of the world we are now (and they are multiple), and there are worlds that are not. The former are "realizable", and they "follow" - and whether they are reachable depends on how good the "forward search process that labels certain options as reachable before judging them and maximizing" is. My intuition says that "could" can mean the former, rather than "whatever my mind generated in the search as options" (and, moreover, that the latter is a heuristics of the mind for the former). (Unless, of course, the real bomb under this definition is in "probability-mass" hiding the same "could-ness", but if you are going to tell me that QM probability-mass is likewise reducible to labeling by a search process and this is the "correct answer", I will find this... well, only mildly surprising, because QM never ceases to amaze me, which influences my further evaluations, but at least I don't see how this obviously follows from the QM sequence.)

Moreover, this quotation from Possibility and Could-ness seems to hint at a similar (yet distinct, because probability is in the mind) problem.
> But you would have to be very careful to use a definition like that one consistently.  "Could" has another closely related meaning in which it refers to the provision of at least a small amount of probability.