Hypothesis: whenever you make a choice, the consequences of it are almost as likely to be bad as good, because the scale of the intended consequences is radically smaller than the scale of the chaotic unintended effects. (The expected outcome is still as positive as you think, it’s just a small positive value plus a very high variance random value at each step.)
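The hypothesis can be checked numerically. Here is a minimal sketch (the edge and noise magnitudes are assumed values, chosen only to make the scales "radically" different): each choice yields a small positive intended benefit plus a much larger zero-mean chaotic term.

```python
import random

random.seed(0)

EDGE = 0.1      # small intended benefit per choice (assumed value)
NOISE = 10.0    # much larger chaotic unintended effect (assumed value)
TRIALS = 100_000

# Each simulated choice: intended edge plus high-variance noise
# from unintended consequences.
outcomes = [EDGE + random.gauss(0, NOISE) for _ in range(TRIALS)]

mean = sum(outcomes) / TRIALS
frac_bad = sum(o < 0 for o in outcomes) / TRIALS

print(f"mean outcome:      {mean:+.3f}")   # close to EDGE
print(f"fraction negative: {frac_bad:.3f}")  # close to 0.5
```

The mean stays positive (the expected outcome is as good as you think), yet nearly half of individual choices turn out badly, which is exactly the "almost as likely to be bad as good" picture.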

This seems different from how things are usually conceived, but does it change anything that we don’t already know about?

Could this be false?


There's a literature on this issue, I think; it's called the problem of cluelessness. See e.g. Hilary Greaves: https://philpapers.org/rec/GREC-38

IIRC, my take was:

- Yeah, this seems probably true.
- It probably shouldn't undermine our usual prioritization decisions, but it definitely feels like it might, and deserves more thought.
- I'd be interested to hear whether it still holds in a multiverse + superrationality context. I expect it still does.

This is likely true, but misleading. "Almost as likely" is WAY different from "just as likely". A very small edge compounded over many decisions makes for a big difference.
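The compounding point can be sketched with a toy simulation (the edge size and decision count are assumed values): each decision is a noisy ±1 coin flip, with the probability of a good outcome shifted only slightly in your favor.

```python
import random

random.seed(1)

EDGE = 0.01          # tiny per-decision edge, 50.5% vs 49.5% (assumed)
DECISIONS = 10_000
RUNS = 1_000

def final_score(edge):
    # +1 for a good outcome, -1 for a bad one; the edge shifts
    # the per-decision probability only slightly.
    return sum(1 if random.random() < 0.5 + edge / 2 else -1
               for _ in range(DECISIONS))

scores = [final_score(EDGE) for _ in range(RUNS)]
mean = sum(scores) / RUNS
ahead = sum(s > 0 for s in scores) / RUNS

print(f"mean net outcome after {DECISIONS} decisions: {mean:.0f}")
print(f"fraction of runs that end ahead: {ahead:.2f}")
```

Any single decision looks like a coin flip, but over ten thousand decisions the expected net outcome is solidly positive and most runs end ahead, which is the "small edge, big difference" claim.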

Another implication is that it's worth putting some effort into figuring out which decisions have a bigger impact and a higher-than-random likelihood of you picking the best option. Ideally, build habits that make it easy to do the probably-right thing on small, frequent decisions, and spend actual thought on the decisions that have more impact and about which you have non-generic information.

Michael Huemer also tries to affect society without a precise and detailed understanding?

Have you seen "Convergence of expected utilities with algorithmic probability distributions", by Peter de Blanc? Under certain conditions, he proves that all expected utility calculations diverge.

This would be true in a chaotic classical world. But we live in a quantum world. 

The weather is chaotic. Consider a decision of whether or not to sneeze. And suppose you are standing next to a bank of huge fans, each being flipped on and off by a quantum randomness source. 

Whether or not you sneeze, the fans ensure you get a quantum superposition over a huge number of possible future weathers. The particular futures would be different, but with so many samples, most futures where you do sneeze would have a corresponding almost identical one where you didn't.

Of course, whether this picture or the chaotic picture is more correct depends on the scale of your decisions relative to the scale of the quantum randomness. It also depends on how much you consider things to be in superposition. If you don't know the name of the vice president of Bolivia, do they have a superposition over all names until you look it up?

In this view, addressing a letter to Vice President Alfronzo or to Vice President Bernardo has similar consequences: a superposition of the letter arriving or not (but mostly not).

I think Michael Huemer had an interesting take on a variant of this question -- In Praise of Passivity:

Voters, activists, and political leaders of the present day are in the position of medieval doctors. They hold simple, prescientific theories about the workings of society and the causes of social problems, from which they derive a variety of remedies–almost all of which prove either ineffectual or harmful. Society is a complex mechanism whose repair, if possible at all, would require a precise and detailed understanding of a kind that no one today possesses. Unsatisfying as it may seem, the wisest course for political agents is often simply to stop trying to solve society’s problems.

It makes it more important to make a greater quantity of total choices.

(I don't understand it either)

I think the idea is that Huemer's quote seems to itself be an effort to repair society without fully understanding it.

I don't think this is a facile objection, either*—I think it's very possible that "Voters, activists, and political leaders" are actually an essential part of the complex mechanism of society and if they all stopped trying to remedy problems things would get even worse.

On the other hand, you can recurse this reasoning and say that maybe bold counterintuitive philosophical prescriptions like Huemer's are also part of the complex mechanism.

 

*To the quote as a standalone argument, anyway—haven't read the essay.

I don't exactly get that either. Hypocrisy could apply if someone is advocating not to advocate.

If you have effective control, then "I would never do that" is a good reason to stop worrying about that state. With big unaccounted-for forces, you can inadvertently get into states that you did not want to come about. This means you benefit from having plans for every possible state, even if you don't have a theory of how such a state might come about.

Hmmm... small isn't exactly the right concept. Maybe... shattered? Or dissociated? Continuing to model causality as ripples spreading from events, we can remember that there are countless events happening all the time at various scales. Spacetime, after all, is famously big. As those ripples propagate outward, they meet other ripples, and the properties of the causal (let's borrow a term:) field at that point become informed by all of the incident ripples. Eventually, the movements of the field are no longer distinguishable as ripples (at a certain scale) and appear as fully random background noise, on which are superimposed more identifiably ripple- or wave-like patterns at larger scales from newer perturbations that are still more coherent. The random-looking movements show up on the smallest scales possible (yes, the Planck scale), while the still-more-coherent events show up on scales more like the ones we can directly observe, e.g. meters in seconds or light-years in... well, years.

Note: I'll own that my understanding of the various QP models isn't super strong, but even the Wikipedia article on the delayed choice quantum eraser experiment notes that it is predicated on a non-standard view of QP.

In any case, erasure of events over time is subtly different from what I meant to describe, and is inconsistent with conservation of information. The tiny ripples we make in causality with most of our actions will, indeed, leave lasting impressions at infinitesimal scales. But the further out you go from the origin, the more other ripples any one event will come into dialog with. The end result is that, given enough time, any one ripple becomes such a small factor in future outcomes as to be (for all practical purposes) indistinguishable from the rest of the background noise.

Allow me a metaphor. Suppose you throw a rock at an ocean wave. The rock creates a ripple on the surface of the wave, but you would have to be a keen observer indeed to detect it in the rush of water. The wave may be slightly differently configured as it crushes houses near the shore, but the contractor still charges the same amount for repairs. The rock tumbles across the bottom of the ocean, pushed by the wave along with thousands of other rocks and shells and particles of sand and biomatter. It changes the movement of those bits, but the lot of it is bulldozed off the beach to make way for fresh sand a few weeks later just the same. Nearly all possible future events are, after a few seconds at most, indistinguishable from those that follow if you throw the rock elsewhere. Throwing the rock ends up having no practical effect on future events. Even absent the wave, the ripple from the rock will be essentially undetectable a mile off.

On the longest timescales, everything goes to a uniform boring gray with no variation (heat death). This cannot be true simultaneously with the proposition that tiny changes will all (or even likely will) come to individually have meaningful effects if considered over enough time. Heat death implies something like a viscosity in causality that gives older events less weight than newer ones.

I suspect that while the potential reach of the causal ripples from any given decision is theoretically infinite, in practice most such ripples are quickly mitigated by the rest of causal reality. In other words, most of our actions are almost immediately overwhelmed by everything else that's going on.

ETA: Seems like most of the information systems I encounter day-to-day are flexible enough that tiny changes in the environment are largely absorbed or ignored. This probably contributes to what I see as an exponential decay factor on individual causal events.
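That absorption can be sketched with a toy contracting system (the damping factor and inputs are assumed values): two runs receive identical external inputs and differ only in an initial "ripple", and the difference between them decays geometrically.

```python
# Difference between two trajectories of a contracting system decays
# geometrically, even though both keep receiving the same inputs.
DAMPING = 0.5  # assumed contraction factor per step

def step(x, external_input):
    return DAMPING * x + external_input

inputs = [0.3, -0.1, 0.7, 0.2, -0.4] * 4  # same environment for both runs

a, b = 0.0, 1.0  # the two runs differ only in the initial perturbation
for u in inputs:
    a, b = step(a, u), step(b, u)

diff = abs(a - b)
print(f"remaining difference after {len(inputs)} steps: {diff:.2e}")
```

After 20 steps the initial difference of 1.0 has shrunk to 0.5^20 (under a millionth), illustrating the exponential decay factor: a system with any damping at all forgets individual perturbations quickly, in contrast to a chaotic system, which amplifies them.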