Followup to: The Rhythm of Disagreement

At the age of 15, a year before I knew what a "Singularity" was, I had learned about evolutionary psychology.  Even from that beginning, it was apparent to me that people talked about "disagreement" as a matter of tribal status, processing it with the part of their brain that assessed people's standing in the tribe.  The peculiar indignation of "How dare you disagree with Einstein?" has its origins here:  Even if the disagreer is wrong, we wouldn't apply the same emotions to an ordinary math error like "How dare you write a formula that makes e equal to 1.718?"

At the age of 15, being a Traditional Rationalist, and never having heard of Aumann or Bayes, I thought the obvious answer was, "Entirely disregard people's authority and pay attention to the arguments.  Only arguments count."

Ha ha!  How naive.

I can't say that this principle never served my younger self wrong.

I can't even say that the principle gets you as close as possible to the truth.

I doubt I ever really clung to that principle in practice.  In real life, I judged my authorities with care then, just as I do now...

But my efforts to follow that principle made me stronger.  They focused my attention upon arguments; believing in authority does not make you stronger.  The principle gave me freedom to find a better way, which I eventually did, though I wandered at first.

Yet both of these benefits were pragmatic and long-term, not immediate and epistemic.  And you cannot say, "I will disagree today, even though I'm probably wrong, because it will help me find the truth later."  Then you are trying to doublethink.  If you know today that you are probably wrong, you must abandon the belief today.  Period.  No cleverness.  Always use your truth-finding skills at their full immediate strength, or you have abandoned something more important than any other benefit you will be offered; you have abandoned the truth.

So today, I sometimes accept things on authority, because my best guess is that they are really truly true in real life, and no other criterion gets a vote.

But always in the back of my mind is that childhood principle, directing my attention to the arguments as well, reminding me that you gain no strength from authority; that you may not even know anything, just be repeating it back.

Earlier I described how I disagreed with a math book and looked for proof, disagreed humbly with Judea Pearl and was proven (half) right, disagreed immodestly with Sebastian Thrun and was proven wrong, had a couple of quick exchanges with Steve Omohundro in which modesty-reasoning would just have slowed us down, respectfully disagreed with Daniel Dennett and disrespectfully disagreed with Steven Pinker, disagreed with Robert Aumann without a second thought, disagreed with Nick Bostrom with second thoughts...

What kind of rule am I using, that covers all these cases?

Er... "try to get the actual issue really right"?  I mean, there are other rules but that's the important one.  It's why I disagree with Aumann about Orthodox Judaism, and blindly accept Judea Pearl's word about the revised version of his analysis.  Any argument that says I should take Aumann seriously is wasting my time; any argument that says I should disagree with Pearl is wasting my truth.

There are all sorts of general reasons not to argue with physicists about physics, but the rules are all there to help you get the issue right, so in the case of Many-Worlds you have to ignore them.

Yes, I know that's not helpful as a general principle.  But dammit, wavefunctions don't collapse!  It's a massively stupid idea that sticks around due to sheer historical contingency!  I'm more confident of that than any principle I would dare to generalize about disagreement.

Notions of "disagreement" are psychology-dependent pragmatic philosophy.  Physics and Occam's razor are much simpler.  Object-level stuff is often much clearer than meta-level stuff, even though this itself is a meta-level principle.

In theory, you have to make a prior decision whether to trust your own assessment of how obvious it is that wavefunctions don't collapse, before you can assess whether wavefunctions don't collapse.  In practice, it's much more obvious that wavefunctions don't collapse, than that I should trust my disagreement.  Much more obvious.  So I just go with that.

I trust any given level of meta as far as I can throw it, but no further.

There's a rhythm to disagreement.  And oversimplified rules about when to disagree can distract from that rhythm.  Even "Follow arguments, not people" can distract from the rhythm, because no one, including my past self, really uses that rule in practice.

The way it works in real life is that I just do the standard first-order disagreement analysis:  Okay, in real life, how likely is it that this person knows stuff that I don't?

Not, Okay, how much of the stuff that I know that they don't, have they already taken into account in a revised estimate, given that they know I disagree with them, and have formed guesses about what I might know that they don't, based on their assessment of my and their relative rationality...

Why don't I try the higher-order analyses?  Because I've never seen a case where, even in retrospect, it seems like I could have gotten real-life mileage out of it.  Too complicated, too much of a tendency to collapse to tribal status, too distracting from the object-level arguments.

I have previously observed that those who genuinely reach upward as rationalists have usually been broken of their core trust in the sanity of the people around them. In this world, we have to figure out who to trust, and who we have reasons to trust, and who might be right even when we believe they're wrong.  But I'm kinda skeptical that we can - in this world of mostly crazy people and a few slightly-more-sane people who've spent their whole lives surrounded by crazy people who claim they're saner than average - get real-world mileage out of complicated reasoning that involves sane people assessing each other's meta-sanity.  We've been broken of that trust, you see.

Does Robin Hanson really trust, deep down, that I trust him enough, that I would not dare to disagree with him, unless he were really wrong?  I can't trust that he does... so I don't trust him so much... so he shouldn't trust that I wouldn't dare disagree...

It would be an interesting experiment, but I cannot literally commit to walking into a room with Robin Hanson and not walking out until we have the same opinion about the Singularity.  So that if I give him all my reasons and hear all his reasons, and Hanson tells me, "I still think you're wrong," I must then agree (or disagree in a net direction Robin can't predict).  I trust Robin but I don't trust him THAT MUCH.  Even if I tried to promise, I couldn't make myself believe it was really true - and that tells me I can't make the promise.

When I think about who I would be willing to try this with, the name that comes to mind is Michael Vassar - which surprised me, and I asked my mind why.  The answer that came back was, "Because Michael Vassar knows viscerally what's at stake if he makes you update the wrong way; he wouldn't use the power lightly."  I'm not going anywhere in particular with this; but it points in an interesting direction - that a primary reason I don't always update when people disagree with me, is that I don't think they're taking that disagreement with the extraordinary gravity that would be required, on both sides, for two people to trust each other in an Aumann cage match. 

Yesterday, Robin asked me why I disagree with Roger Schank about whether AI will be general in the foreseeable future.

Well, first, be it said that I am no hypocrite; I have been explicitly defending immodesty against modesty since long before this blog began.

Roger Schank is a famous old AI researcher who I learned about as the pioneer of yet another false idol, "scripts".  He used suggestively named LISP tokens, and I'd never heard it said of him that he had seen the light of Bayes.

So I noted that the warriors of old are often more formidable intellectually than those who venture into the Dungeon of General AI today, but their arms and armor are obsolete.  And I pointed out that Schank's prediction with its stated reasons seemed more like an emotional reaction to discouragement, than a painstakingly crafted general model of the future of AI research that had happened to yield a firm prediction in this case.

Ah, said Robin, so it is good for the young to disagree with the old.

No, but if the old guy is Roger Schank, and the young guy is me, and we are disagreeing about Artificial General Intelligence, then sure.

If the old guy is, I don't know, Murray Gell-Mann, and we're disagreeing about, like, particle masses or something, I'd have to ask what I was even doing in that conversation.

If the old fogey is Murray Gell-Mann and the young upstart is Scott Aaronson, I'd probably stare at them helplessly like a deer caught in the headlights.  I've listed out the pros and cons here, and they balance as far as I can tell:

  • Murray Gell-Mann won a Nobel Prize back in the eighteenth century for work he did when he was four hundred years younger, or something like that.
  • Scott Aaronson has more recent training.
  • ...but physics may not have changed all that much since Gell-Mann's reign of applicability, sad to say.
  • Aaronson still has most of his neurons left.
  • I know Aaronson is smart, but Gell-Mann doesn't have any idea who Aaronson is.  Aaronson knows Gell-Mann is a Nobel Laureate and wouldn't disagree lightly.
  • Gell-Mann is a strong proponent of many-worlds and Aaronson is not, which is one of the acid tests of a physicist's ability to choose correctly amid controversy.

It is traditional - not Bayesian, not even remotely realistic, but traditional - that when some uppity young scientist is pushing their chosen field as far as they possibly can, going past the frontier, they have a right to eat any old scientists they come across, for nutrition.

I think there's more than a grain of truth in that ideal.  It's not completely true.  It's certainly not upheld in practice.  But it's not wrong, either.

It's not that the young have a generic right to disagree with the old, but yes, when the young are pushing the frontiers they often end up leaving the old behind.  Everyone knows that and what's more, I think it's true.

If someday I get eaten, great.

I still agree with my fifteen-year-old self about some things:  The tribal-status part of our minds, that asks, "How dare you disagree?", is just a hindrance.  The real issues of rational disagreement have nothing to do with that part of us; it exists for other reasons and works by other rhythms.  "How dare you disagree with Roger Schank?" ends up as a no-win question if you try to approach it on the meta-level and think in terms of generic trustworthiness: it forces you to argue that you yourself are generically above Schank and of higher tribal status; or alternatively, accept conclusions that do not seem, er, carefully reasoned.  In such a case there is a great deal to be said for simply focusing on the object-level arguments.

But if there are no simple rules that forbid disagreement, can't people always make up whatever excuse for disagreement they like, so they can cling to precious beliefs?

Look... it's never hard to shoot off your own foot, in this art of rationality.  And the more art you learn of rationality, the more potential excuses you have.  If you insist on disagreeing with Gell-Mann about physics, BLAM it goes.  There is no set of rules you can follow to be safe.  You will always have the opportunity to shoot your own foot off.

I want to push my era further than the previous ones: create an advanced art of rationality, to advise people who are trying to reach as high as they can in real life.  They will sometimes have to disagree with others.  If they are pushing the frontiers of their science they may have to disagree with their elders.  They will have to develop the skill - learning from practice - of when to disagree and when not to.  "Don't" is the wrong answer.

If others take that as a welcome excuse to shoot their own feet off, that doesn't change what's really the truly true truth.

I once gave a talk on rationality at Peter Thiel's Clarium Capital.  I did not want anything bad to happen to Clarium Capital.  So I ended my talk by saying, "And above all, if any of these reasonable-sounding principles turn out not to work, don't use them."

In retrospect, I could have given a different caution:  "And be careful to follow these principles consistently, instead of making special exceptions when it seems tempting."  But it would not be a good thing for the Singularity Institute if anything bad happened to Clarium Capital.

That's as close as I've ever come to betting on my high-minded advice about rationality in a prediction market - putting my skin in a game with near-term financial consequences.  I considered just staying home - Clarium was trading successfully; did I want to disturb their rhythm with Centipede's Dilemmas?  But because past success is no guarantee of future success in finance, I went, and offered what help I could give, emphasizing above all the problem of motivated skepticism - when I had skin in the game.  Yet at the end I said:  "Don't trust principles until you see them working," not "Be wary of the temptation to make exceptions."

I conclude with one last tale of disagreement:

Nick Bostrom and I once took a taxi and split the fare.   When we counted the money we'd assembled to pay the driver, we found an extra twenty there.

"I'm pretty sure this twenty isn't mine," said Nick.

"I'd have been sure that it wasn't mine either," I said.

"You just take it," said Nick.

"No, you just take it," I said.

We looked at each other, and we knew what we had to do.

"To the best of your ability to say at this point, what would have been your initial probability that the bill was yours?" I said.

"Fifteen percent," said Nick.

"I would have said twenty percent," I said.

So we split it $8.57 / $11.43, and went happily on our way, guilt-free.

I think that's the only time I've ever seen an Aumann-inspired algorithm used in real-world practice.
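
For concreteness, here is a minimal Python sketch of the arithmetic we used: each of us keeps a share of the $20 proportional to his stated probability that the bill was his.

```python
def proportional_split(total, p_nick, p_eliezer):
    """Each party keeps a share of the pot proportional to his stated
    probability that the money was originally his."""
    return (total * p_nick / (p_nick + p_eliezer),
            total * p_eliezer / (p_nick + p_eliezer))

nick_share, my_share = proportional_split(20.0, 0.15, 0.20)
print(round(nick_share, 2), round(my_share, 2))  # 8.57 11.43
```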

Comments (26):

Wait, so did you or did you not say "Be wary of the temptation to make exceptions"?

But dammit, wavefunctions don't collapse!

Sorry to distract from your main point, but it is quite a distance from there to "therefore, there are Many Worlds". And for that matter, you have definitely not addressed the notion of collapse in all its possible forms. The idea of universe-wide collapse caused by observation is definitely outlandish, solipsistic in fact, and also leaves "consciousness", "observation", or "measurement" as an unanalysed residue. However, that's not the only way to introduce discontinuity into wavefunction evolution. One might have a jump (they won't be collapses, if you think in terms of state vectors) merely with respect to a particular subspace. The actual history of states of the universe might be a complicated agglomeration of quantum states, with spacelike joins provided by the tensor product, and timelike joins by a semilocal unitary evolution.

Furthermore, there have always been people who held that the wavefunction is only a representation of one's knowledge, and that collapse does not refer to an actual physical process. What you should say as a neo-rationalist is that those people should not be content with an incomplete description of the world, and that something like Minimum Description Length should be used to select between possible complete theories when there is nothing better, and you should leave it at that.

I suppose that the larger context is that in the case of seed AI, we need to get it right the first time, and therefore it would be helpful to have an example of rationality which doesn't just consist of "run the experiment and see what happens", in order to establish that it is possible to think about things in advance. But this is a bad example! Alas, I cannot call upon any intersubjective, third-party validation of this claim, but I do consider this (quantum ontology), if not my one true expertise, certainly a core specialization. And so I oppose my intuition to yours, and say that any valid completion of the Many Worlds idea is either going to have a simpler one-world variant, or, at best, it will be a theory of multiple self-contained worlds - not the splitting, half-merged, or otherwise interdependent worlds of standard MWI rhetoric. This is not a casual judgement; I could give a lengthy account of all the little reasons which lead up to it. If we had a showdown at the rationality dojo, I think it would come out the winner. But first of all, I think you should just ask yourself whether it makes sense to say "Many Worlds is the leading candidate" when all you have really shown is that a particular form of collapse theory is not.

Particle masses?? Definitely go with Gell-Mann.

That taxi anecdote is a blazing star of dorkiness in an otherwise mundane universe. I applaud you, sir.

Let me try to paraphrase:

I don't trust anyone as much as I trust myself to be competent and really try hard to get things right. I can see that if I followed any rule I can imagine to disagree less with people, based on their qualifications as seen by third parties, there would be cases where I'd have to agree more with claims that to me are obviously wrong. So I don't follow any rule, I just do what seems right, and damn it, doing that is what seems right to me - and it seems to work!

The problem is that a great many other people feel exactly the way you do, but when you disagree with them it is clear that at least one of you would be more accurate by disagreeing less. So by more carefully analyzing these kinds of situations we should be able to identify who to advise to disagree less in what kinds of situations. Your passion here shows just how important it is to do this and get it right.

"A primary reason I don't always update when people disagree with me, is that I don't think they're taking that disagreement with the extraordinary gravity that would be required, on both sides, for two people to trust each other in an Aumann cage match."

The problem with this is that there is at least a small probability that they are taking it with that gravity. Given this small probability, it is still necessary to update, although by a smaller amount. This was my point in the previous post. The fact that you should update when you are surprised by someone's opinion doesn't depend on Aumann's theorem: it simply follows from Bayes's rule, if you accept that there is at least some tiny chance that he holds his opinion for good reasons, and therefore that there is at least some tiny chance that the truth is the cause of his opinion. For if there is, then the probability that he would state that opinion, given that it is true, is greater than the probability that he would state the opinion, given that it is false. Therefore you should update your opinion, judging his opinion to be more likely true than you previously thought.
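
A minimal numerical sketch of this point, with made-up numbers: even a small chance that he holds his opinion for good reasons makes the likelihood ratio differ from 1, so the posterior must move, if only slightly.

```python
def posterior_after_disagreement(prior, p_deny_if_true, p_deny_if_false):
    """Bayes's rule: probability of a claim after hearing someone deny it.
    If the denial is even slightly more probable when the claim is false,
    the posterior must drop, however slightly."""
    num = prior * p_deny_if_true
    return num / (num + (1 - prior) * p_deny_if_false)

# Illustrative numbers only: prior 0.90; he would voice the denial 50% of
# the time even if the claim were true, 55% of the time if it were false.
print(posterior_after_disagreement(0.90, 0.50, 0.55))  # ~0.891 -- a small but real update
```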

Then again, there are some topics that seem to turn even the most brilliant minds to mush. (These seem to be the same topics for which, incidentally, there could be no epistemically relevant "authority.") Hence, your disagreement with Aumann out-of-hand, for instance.

Thus I'd think some kind of topic-sensitivity is lurking in your rule as well.

Eliezer,

I also think that considering the particular topics is helpful here. In the math book, you were pretty confident the statement was wrong once you discovered a clear formal proof, because essentially there's nothing more to be said.

On the interpretation of quantum mechanics, since you believe we have almost all the relevant data we'll ever have (save for observed superpositions of larger and larger objects) and the full criteria to decide between these hypotheses given that information, you again think that disagreement is unfounded.

(I suggest you make an exception in your analysis for Scott Aaronson et al, whose view as I understand it is that progress in his research is more important than holding the Best Justified Interpretation at all times, if the different interpretations don't have consequences for that research; so he uses whatever one seems most helpful at the moment. This is more like asking a different valid question than getting the wrong answer to a question.)

But on the prospects for General AI in the next century, well, there's all sort of data you don't yet have that would greatly help, and others might have it; and updating according to Bayes on that data is intractable without significant assumptions. I think that explains your willingness to hear out Daniel Dennett (albeit with some skepticism).

Finally, I think that when it comes to religion you may be implicitly using the same second-order evaluation I've come around to. I still ascribe a nonzero chance to my old religion being true—I didn't find a knockdown logical flaw or something completely impossible in my experience of the world. I just came to the conclusion I didn't have a specific reason to believe it above others.

However, I'd refuse to give any such religion serious consideration from now on unless it became more than 50% probable to my current self, because taking up a serious religion changes one's very practice of rationality by making doubt a disvalue. Spending too much thought on a religion can get you stuck there, and it was hard enough leaving the first time around. That's a second-order phenomenon different from the others: taking the Copenhagen interpretation for a hypothesis doesn't strongly prevent you from discarding it later.

My best probability of finding the truth lies in the space of nonreligious answers instead of within any particular religion, so I can't let myself get drawn in. So I do form an object-level bias against religion (akin to your outright dismissal of Aumann), but it's one I think is justified on a meta-level.

I laughed out loud at the taxi story!

Scott said:

Particle masses?? Definitely go with Gell-Mann.

But Scott, the fact that you say that, means that if I saw that you were still arguing particle masses with Gell-Mann, the most probable explanation in my mind would be some kind of drastically important modern update that Gell-Mann was bizarrely refusing to take into account. Such an event is not beyond probability.

Gell-Mann is a Nobel laureate and you are not (yet), but you know that, so you would disagree with Gell-Mann a lot more reluctantly than he would disagree with you. To the extent that I trust you to know this, your disagreement with Gell-Mann sends a much stronger signal than Gell-Mann's disagreement with you. It doesn't cancel out exactly, because I don't trust you perfectly; if I trusted you very little, it would cancel out almost not at all.

But you can see how, in the limit of disagreement between ideal Bayesians, the direction of disagreement on each succeeding round would be perfectly unforeseeable.

Eliezer: Yeah, I understand. I was making a sort of meta-joke, that you shouldn't trust me over Gell-Mann about particle physics even after accounting for the fact that I say that and would be correspondingly reluctant to disagree...

"To the best of your ability to say at this point, what would have been your initial probability that the bill was yours?" I said. "Fifteen percent," said Nick. "I would have said twenty percent," I said. So we split it $8.57 / $11.43, and went happily on our way, guilt-free. (this is a 3 : 4 ratio) Now for this to have been fair, you must both have walked away with a 3/7 probability that the money belonged to Nick.

This algorithm definitely feels like the wrong answer. Taking this ratio couldn't possibly be the correct way for both of you to update your beliefs.

Why?

Well, because the correct answer really ought to be invariant under swapping. If you and Nick Bostrom had traded opinions, Nick's answer would have been eighty percent, your answer would have been eighty-five percent, and you would have split the twenty in a 16:17 ratio. You would also have ended up with 16:17 if you had instead asked "What would have been your initial probability that the bill was not yours?"

It's the wrong answer because if X is the proposition that the bill belongs to Eliezer then setting your mutual belief in X to: f(X) = p_E(X) / (p_E(X) + p_N(~X)) doesn't look pretty and symmetric. Not only that, but f(X)+f(~X) != 1. What you did only appeared symmetric because X made reference to the parties involved. Granted, when propositions do that, people tend to be biased in their own favor, but taking that into account would be solving a different problem.

Now I haven't read Aumann's paper, so I don't know the correct answer, if he presents one. But if I had to come up with an answer just intuitively it would be: p(X) = (p_E(X)+p_N(X))/2

One (oversimplified) possible set of assumptions that could lead to this answer would be:

"Well, presuming we saw the same things, one of our memories must have made a serious mistake. Now if we ignore the possibility that we both made mistake, and we know nothing about the relative frequences at which we both make mistakes like this one, it seems reasonable to assume that, given that one of us made a mistake the principle of indifference would suggest that the chances are half that I was the one who reasoned properly and half that you are the one who reasoned properly. So, provided that we think mistakes aren't more or less likely for either of us depending on the truth of the proposition, averaging our probability estimates makes sense."

Now of course, real mistakes are continuous, and more seriously, different evidence was observed by both parties, so I don't think averaging is the correct answer in all cases.

However, that said, this formula gives p(X) = 21/40, and thus your fair share is $10.50. While I won't say this with authority because I wasn't there and I don't know what types of mistakes people are likely to make, and how likely each of you was to see and remember what, you owe Nick Bostrom ninety-three cents.
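
A minimal Python sketch of this averaging proposal, using the post's numbers (X = "the bill is Eliezer's", so p_E(X) = 0.20 and p_N(X) = 0.85):

```python
total = 20.0
p_E = 0.20   # Eliezer's probability that the bill is his (X)
p_N = 0.85   # Nick's probability that the bill is Eliezer's (he said 15% for "mine")

p_X = (p_E + p_N) / 2                        # simple average: 0.525 = 21/40
fair_share = p_X * total                     # $10.50 for Eliezer
actual_share = total * 0.20 / (0.20 + 0.15)  # ~$11.43 under the proportional split
print(fair_share, round(actual_share - fair_share, 2))  # 10.5 0.93 owed back to Nick
```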

Hm... yes, I can't say your formula is obviously right, but mine is obviously inconsistent. I guess I owe Nick ninety-three cents.

I got the same answer as Marcello by assuming that each of you should get the same expected utility out of the split.

Say that Nick keeps x and you keep y. Then the expected utility for Nick is

0.85 x − 0.15 ($20 − x),

while the expected utility for you is

0.8 y − 0.2 ($20 − y).

Setting these equal to each other, and using x + y = $20, yields that Nick should keep x = $9.50, leaving y = $10.50 for you.
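
A minimal Python sketch of this calculation, with each party's stated probability that the bill was his:

```python
total = 20.0
p_nick, p_eliezer = 0.15, 0.20   # each party's probability that the bill was his

# If Nick keeps x and Eliezer keeps y = total - x, the expected gains are
#   Nick:    (1 - p_nick) * x - p_nick * (total - x)       = x - p_nick * total
#   Eliezer: (1 - p_eliezer) * y - p_eliezer * (total - y) = y - p_eliezer * total
# Setting them equal under the constraint x + y = total gives:
y = (total + (p_eliezer - p_nick) * total) / 2
x = total - y
print(x, y)   # 9.5 10.5
```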

I disagree with both of these methods. If EY were 100% sure, and NB were 50% sure, then I think the entire $20 should go to EY, and neither of the two methods has this property. I am very interested in trying to figure out what the best formula for this situation is, but I do not yet know. Here is a proposal:

Take the least amount of evidence so that you can shift both predictions by this amount of evidence, to make them the same, and split according to this probability.

Is this algorithm good?

My take on it:

You judge an odds ratio of 15:85 for the money having been yours versus it having been Nick's, which presumably decomposes into a maximum entropy prior (1:1) multiplied by whatever evidence you have for believing it's not yours (15:85). Similarly, Nick has a 80:20 odds ratio that decomposes into the same 1:1 prior plus 80:20 evidence.

In that case, the combined estimate would be the combination of both odds ratios applied to the shared prior, yielding a 1:1 * 15:85 * 80:20 = 12:17 ratio for the money being yours versus it being Nick's. Thus, you deserve 12/29 of it, and Nick deserves the remaining 17/29.
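
A minimal Python sketch of this odds-ratio rule, using the numbers as stated here (note they assign 15% to Eliezer and 20% to Nick, the reverse of the post's figures):

```python
def combine_odds(prior_odds, *evidence_odds):
    """Apply each party's evidence (as an odds ratio) to a shared prior odds ratio."""
    combined = prior_odds
    for e in evidence_odds:
        combined *= e
    return combined

# Numbers as given in this comment; with the post's 20%/15% assignment
# the product would instead come out to 17:12 in Eliezer's favor.
odds = combine_odds(1.0, 15 / 85, 80 / 20)   # = 12/17, odds the money is Eliezer's
share = 20.0 * odds / (1 + odds)             # Eliezer's share: 12/29 of $20
print(round(odds, 4), round(share, 2))       # 0.7059 8.28
```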

Yeah, I made a pointlessly longer calculation and got the same answer. (And by varying the prior from 0.5 to other values, you can get any other answer.)

If they're both about equally likely to reason as well, I'd say Eliezer's portion should be p * $20, where ln(p/(1-p))=(1.0*ln(0.2/0.8)+1.0*ln(0.85/0.15))/(1.0+1.0)=0.174 ==> p=0.543. That's $10.87, and he owes NB merely fifty-six cents.

Amusingly, if it's mere coincidence that the actual split was 3:4 and in fact they split according to this scheme, then the implication is that we are trusting Eliezer's estimate 86.4% as much as NB's.
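
A minimal Python sketch of this log-odds pooling rule, reproducing both the $10.87 figure and the implied 86.4% weight:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

p_E = 0.20   # Eliezer: probability the bill is his
p_N = 0.85   # Nick: probability the bill is Eliezer's

# Equal-weight pooling in log-odds space:
p = sigmoid((logit(p_E) + logit(p_N)) / 2)
print(round(p, 3), round(20 * p, 2))   # 0.543 10.87

# Weight on Eliezer's estimate that reproduces the actual 3:4 split (p = 4/7):
target = logit(4 / 7)
w = (logit(p_N) - target) / (target - logit(p_E))
print(round(w, 3))   # 0.864
```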

Re: "Gell-Mann is a strong proponent of many-worlds" - see also:

"Gell-Mann does not like the "many worlds" interpretation."

He seems to prefer "consistent histories":

http://en.wikipedia.org/wiki/Consistent_histories

The Taxi anecdote is ultra-geeky - I like that! ;-)

Also, once again I accidentally commented on Eliezer's last entry, silly me!

Ooo I love this game! How many inconsistencies can you get in a taxi...

2008: "Fifteen percent," said Nick. "I would have said twenty percent," I said.
2004: He named the probability he thought it was his (20%), I named the probability I thought it was mine (15%)

Any more? :)

We're all flawed and should bear that in mind in disagreements, even when the mind says it's sure.

RI, I've never claimed my memory for past events is reliable. In fact, it often seems to me that my mind has no grip on arbitrary facts - facts that could just as easily have been "the other way around".

In 2004 the event in question was only a year ago, so the 2004 memory is much more likely to be correct.

I tend to just avoid identity fetishes, symmetry fetishes, and structural fetishes. Structural fetishes bite me from time to time but only when I'm feeling extra geeky and I'm trying to reduce everything down to Jack's magic bean.

Probability and provability are not one and the same.

The dependency of both collapse and many worlds on the design of the experiment makes me very fidgety. Also the fact that QM keeps dodging the question of what goes on at the filter/splitter/polarizer bugs the bejeezus out of me. You would think one of the multilapse twin theories could create the probabilities from subspaces existing at the decision boundary. That would impress me.

I still haven't heard why your counterpart has to be you. Why not one of the rats of NIMH?

And since there are multiple entanglements wouldn't it be necessary to make every part, the polarizer, the detector, and the photon superpose?

I always try to attack a problem from where it tickles. Can a macromechanical system be built such that without any modification of the photon substitute or the polarizer substitute the results would match the micro scale results?

If so then just shine a light on that sucker while it does its business.

I've come to the conclusion that:

#1 the use of amplitudes and complex numbers is really an unholy marriage between logic and arithmetic. The structure of multiplying by imaginary numbers and adding resembles logical arguments rather than a mysterious quantum recipe.

#2 the use of complex numbers implies the encoding of two interdependent variables into one construct like the way you can replace i with t in RC circuit equations. Soon as you do that you get a time dependent value for the voltage.

i by definition when squared transforms one variable (or member of a vector/quaternion/octonion) into its neighboring term without the usual fuss of inverting equations. just square and subtract or reduce to restore the form you are seeking.

I don't think Scott Aaronson ever did an Enron commercial.

Here's one simple rule which I hope you're following, and to which I suspect you should give more weight: If you are already aware that many apparently intelligent people disagree about an issue, hearing another person's conclusions about that issue should have little effect on your beliefs about that issue. If you hear an intelligent person disagree with a belief for which you weren't aware of intelligent disagreement, give more weight to that person's belief than your instincts suggest. I'd give more weight to your opinions if it were clear that you relied heavily on rules such as this.

I object to your advice that "if any of these reasonable-sounding principles turn out not to work, don't use them." The evidence about what works for investing is sufficiently noisy that it's dangerous for investors to think they can get anything resembling conclusive evidence about what works. I'd suggest instead advising them to "assume these rules are necessary but not sufficient for success."