Dagon

Just this guy, you know?

Dagon

I think all the same arguments for why it doesn't change decisions also apply to why it doesn't change virtue evaluations.  It still all adds up to normality.  It's still unimaginably big.  Our actions, as well as our beliefs and evaluations, are irrelevant at most scales of measurement.

Dagon

I think this is the right way to think of most anti-inductive (planner-adversarial or competitive exploitation) situations.  Where there are multiple dimensions of asymmetric capabilities, any change is likely to shift the equilibrium, but not necessarily by as much as the shift in that component.

That said, tipping points are real, and sometimes a component shift can have a BIGGER effect, because it shifts the search to a new local minimum.  In most cases, the shift is not actually entirely due to that component change; rather, the change triggers the discovery and reconfiguration.  The rise of mass shootings in the US is an example - there are a lot of causes, but the shift happened quite quickly.

The offense-defense example is further confused because there are at least two different equilibria involved.  When you say

The offense-defense balance is a concept that compares how easy it is to protect vs conquer or destroy resources.

Conquering control vs retaining control is a different thing than destroying vs preserving.  Frank Herbert claimed (via fiction) that "The people who can destroy a thing, they control it," but it's actually true in very few cases.  The equilibrium of who gets what share of the value from something can shift very separately from the equilibrium of how much total value that thing provides.

Dagon

Hmm. I think there are two dimensions to the advice (what is a reasonable distribution of timelines to have, vs what you should actually do).  It's perfectly fine to have some humility about one while still giving opinions on the other.  "If you believe Y, then it's reasonable to do X" can be a useful piece of advice.  I'd normally mention that I don't believe Y, but in a lot of conversations we've already covered that, and it's not helpful to repeat it.


Dagon

note: this was 7 years ago and I've refined my understanding of CDT and the Newcomb problem since.

My current understanding of CDT is that it does effectively assign a confidence of 1 to the decision not being causally upstream of Omega's action, and that is the whole of the problem.  It's "solved" by just moving Omega's action downstream (by cheating and doing a rapid switch).  It's ... illustrated? ... by the transparent version, where a CDT agent just sees the second box as empty before it even realizes it's decided.  It's also "solved" by acausal decision theories, because they move the decision earlier in time to get the jump on Omega.

For non-rigorous DTs (like human intuition, and what I personally would want to do), there's a lot of evidence in the setup that Omega is going to turn out to be correct, and one-boxing is an easy call.  If the setup is somewhat different (say, neither Omega nor anyone else makes any claims about predictions, just says "sometimes both boxes have money, sometimes only one"), then it's a pretty straightforward EV calculation based on fairly informal probability assignments.

But it does require not using strict CDT, which rejects the idea that the choice has backward-causality.
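For concreteness, here's a minimal sketch of that EV calculation (Python; the probability p that Omega's prediction turns out correct is an informal assignment you have to make yourself, not part of the problem statement):

```python
# Expected-value comparison for Newcomb's problem, given an informal
# probability p that Omega's prediction is correct.
BIG = 1_000_000   # contents of the opaque box, if full
SMALL = 1_000     # contents of the transparent box

def one_box_ev(p):
    # The opaque box is full iff Omega predicted one-boxing.
    return p * BIG

def two_box_ev(p):
    # You always get the small box; the opaque box is full only if
    # Omega (incorrectly) predicted one-boxing.
    return SMALL + (1 - p) * BIG

# One-boxing wins whenever p * BIG > SMALL + (1 - p) * BIG,
# i.e. whenever p > (SMALL + BIG) / (2 * BIG) ~= 0.5005 here.
for p in (0.5, 0.9, 0.99):
    print(p, one_box_ev(p), two_box_ev(p))
```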

Dagon

Thanks for this - it's important to keep in mind that a LOT of systems are easier to sustain or expand than to start.  Perhaps most systems face this problem.

In a lot of domains, this is known as the "bootstrap" problem, based on the phrase "pull yourself up by your bootstraps", which doesn't actually work well as a metaphor.  See Bootstrapping - Wikipedia

In CS, for instance, compilers are pieces of software that turn source code into machine code.  Since they're software, they need a compiler to build them.  GCC (and some other from-scratch compilers, though many compilers just depend on GCC) includes a "bootstrap C compiler" (originally hand-coded executable code, though nowadays it too is compiled), which can compile a minimal "stage 2" compiler, which then compiles the main compiler, and then the main compiler is used to build itself again, with all optimizations available.
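A toy model of that staged build (Python; the names are purely illustrative, this is not GCC's actual build system):

```python
# Toy model of a three-stage compiler bootstrap. It assumes a correct
# compiler produces the same binary no matter which compiler built it,
# which is exactly the property the real stage2/stage3 comparison checks.
def compile_with(compiler: str, source: str) -> str:
    # In this toy model, the output depends only on the source, so the
    # `compiler` argument is deliberately unused.
    return f"binary-of({source})"

seed = "hand-built (or pre-existing) bootstrap compiler"
stage1 = compile_with(seed, "gcc-source")    # minimal compiler
stage2 = compile_with(stage1, "gcc-source")  # full compiler, built by stage1
stage3 = compile_with(stage2, "gcc-source")  # full compiler, built by itself
print(stage2 == stage3)  # real builds verify these two are identical
```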

In fact, you've probably heard the term "booting up" or "rebooting" your computer.  This is a shortening of "bootstrap", and refers to powering on without any software, loading a small amount of code from ROM or Flash (or another mostly-static store), and using that code to load further stages of the operating system.

Dagon

Allocation of blame/causality is difficult, but I think you have it wrong.

ex. 1 ... He would also waste Tim's $100 which counterfactually could have been used to buy something else for Bob. So Bob is stuck with using the $100 headphone and spending the $300 somewhere else instead.

No.  TIM wasted $100 on a headset that Bob did not want (because he planned to buy a better one).  Bob can choose whether to hide this waste (at the cost of a worse listening experience, though he keeps the $300, and with the "benefit" of misleading Tim about his misplaced altruism), or to discard the gift and buy the headphones as he'd already planned (at the cost of being $300 poorer and of making Tim feel bad, with the benefit of better sound and of Tim perhaps learning to ask before wasting money).

ex. 2 The world is now stuck with Chris' poor translation on book X with Andy and Bob never touching it again because they have other books to work on.

Umm, here I just disagree.  The world is no worse off for having a bad translation than having no translation.  If the bad translation is good enough that the incremental value of a good translation doesn't justify the effort, then that is your answer.  If it isn't (that is, a good translation is still worth doing at the margin), then Andy or Bob should re-translate it.  Either way, Chris has improved the value of the available books, or has had no effect except wasting his own time.

Dagon

You need to be careful to define "us" in these discussions.  The people for whom it worked in the past are not the people making behavioral choices now.  They are the ancestors of today's people.  You also have to be more specific about what "worked" means - they were able to reproduce and create the current people.  That is very different from what most people mean by "it works" when evaluating how to behave today.

It's also impossible to distinguish which parts of historical behavior "worked" in this way.  Perhaps it was conformity per se, perhaps it was the specific conformist behaviors that previous eras preferred, perhaps it was other parts of the environment, no longer present, that made it work.

Dagon

It gets very complicated when you add in incentives and recognize that science and scientists are also businesses.  There's a LOT of the world that scientists haven't (or haven't in the last century or so) really tried to prove, replicate, and come to consensus on.

Dagon

Yes for the first half, no for the second.  I would reply 1/2, but not JUST because of conventional probability theory.  It's also because the unstated parts of "what will resolve the prediction", in my estimation and modeling, match the setup of conventional probability theory.  It's generally assumed there's no double-counting or other experience-affecting tomfoolery.

Dagon

I'm very much not sure discouraging HFT is a bad thing.

It's not just the "bad" HFT.  It's any very-low-margin activity.

But normal taxes have the same effect, don't they?

Nope, normal taxes scale with profit, not with transaction size.  
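A toy comparison (Python; all numbers are made up for illustration):

```python
# Transaction tax vs profit tax on a thin-margin trade.
# All figures are made-up assumptions, chosen only to show the asymmetry.
notional = 100_000.00  # size of the trade
profit = 50.00         # thin, HFT-style margin

transaction_tax = 0.001 * notional  # 0.1% of notional = $100, more than the entire profit
profit_tax = 0.20 * profit          # 20% of profit = $10, trade stays profitable

print(transaction_tax, profit_tax)  # 100.0 vs 10.0
```

The transaction tax scales with trade size, so it can wipe out (or exceed) the profit of any sufficiently low-margin activity, while a profit tax by construction cannot.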
