Dagon

Just this guy, you know?

Comments

Dagon125

Wish I could upvote and disagree.  Evolution is a mechanism without a target.  It's the result of selection processes, not the cause of those choices.

Dagon31

There have been a number of debates (which I can't easily search on, which is sad) about whether speech is an action (intended to bring about a consequence) or a truth-communication or truth-seeking (both imperfect, of course) mechanism.  It's both, at different times to different degrees, and often not explicit about what the goals are.

The practical outcome seems spot-on.  With some people you can have the meta-conversation about what they want from an interaction, with most you can't, and you have to make your best guess, which you can refine or change based on their reactions.

Out of curiosity, when chatting with an LLM, do you wonder what its purpose is in the responses it gives?  I'm pretty sure it's "predict a plausible next token", but I don't know how I'll know to change my belief.

Dagon20

Gah!  I missed my chance to give one of my favorite Carl Sagan quotes, a recipe for Apple Pie, which demonstrates the universality and depth of this problem:

If you wish to make an apple pie from scratch you must first invent the universe.

Dagon20

Note that the argument about whether MWI changes anything is very different from the argument about what matters and why.  I think it doesn't change anything, independent of which in-universe things matter and how much.

Separately, I tend to think "mattering is local".  I don't argue as strongly for this, because it's (recursively) a more personal intuition, less supported by type-2 thinking.  

Dagon42

I think all the same arguments that it doesn't change decisions also apply to why it doesn't change virtue evaluations.  It still all adds up to normality.  It's still unimaginably big.  Our actions as well as our beliefs and evaluations are irrelevant at most scales of measurement.

Dagon50

I think this is the right way to think of most anti-inductive (planner-adversarial or competitive exploitation) situations.  Where there are multiple dimensions of asymmetric capabilities, any change is likely to shift the equilibrium, but not necessarily by as much as the shift in that component.

That said, tipping points are real, and sometimes a component shift can have a BIGGER effect, because it shifts the search to a new local minimum.  In most cases, this is not actually entirely due to that component change, but the discovery and reconfiguration is triggered by it.  The rise of mass shootings in the US is an example - there are a lot of causes, but the shift happened quite quickly.

Offense-defense is further confused as an example, because there are at least two different equilibria involved.  When you say

The offense-defense balance is a concept that compares how easy it is to protect vs conquer or destroy resources.

Conquering control vs retaining control is a different thing than destroying vs preserving.  Frank Herbert claimed (via fiction) that "The people who can destroy a thing, they control it," but it's actually true in very few cases.  The equilibrium of who gets what share of the value from something can shift very separately from the equilibrium of how much total value that thing provides.

Dagon31

Hmm. I think there are two dimensions to the advice (what is a reasonable distribution of timelines to have, vs what should I actually do).  It's perfectly fine to have some humility about one while still giving opinions on the other.  "If you believe Y, then it's reasonable to do X" can be a useful piece of advice.  I'd normally mention that I don't believe Y, but for a lot of conversations, we've already had that conversation, and it's not helpful to repeat it.

Dagon20

note: this was 7 years ago and I've refined my understanding of CDT and the Newcomb problem since.

My current understanding of CDT is that it does effectively assign a confidence of 1 to the decision not being causally upstream of Omega's action, and that is the whole of the problem.  It's "solved" by just moving Omega's action downstream (by cheating and doing a rapid switch).  It's ... illustrated? ... by the transparent version, where a CDT agent just sees the second box as empty before it even realizes it's decided.  It's also "solved" by acausal decision theories, because they move the decision earlier in time to get the jump on Omega.

For non-rigorous DTs (like human intuition, and what I personally would want to do), there's a lot of evidence in the setup that Omega is going to turn out to be correct, and one-boxing is an easy call.  If the setup is somewhat different (say, neither Omega nor anyone else makes any claims about predictions, just says "sometimes both boxes have money, sometimes only one"), then it's a pretty straightforward EV calculation based on kind-of-informal probability assignments.

But it does require not using strict CDT, which rejects the idea that the choice has backward-causality.
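To make that informal EV calculation concrete, here's a rough sketch.  The payoffs are the standard $1,000,000 / $1,000 ones, and the credence p that the prediction ends up matching your choice is a free parameter I'm assuming, not anything specified in the setup:

```python
# Expected value of one-boxing vs two-boxing, given a credence p that the
# prediction ends up matching your actual choice.

def ev_one_box(p: float) -> float:
    # The opaque box holds $1,000,000 only if you were predicted to one-box.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # You always get the visible $1,000; the opaque box is full only if the
    # prediction missed (you were predicted to one-box but took both).
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box={ev_one_box(p):,.0f}  two-box={ev_two_box(p):,.0f}")
```

One-boxing comes out ahead once p exceeds roughly 0.5005, so any substantial evidence that the predictor is accurate makes it an easy call.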

Dagon2612

Thanks for this - it's important to keep in mind that a LOT of systems are easier to sustain or expand than to begin.  Perhaps most systems face this.

In a lot of domains, this is known as the "bootstrap" problem, based on the concept of "lift yourself up by your bootstraps", which doesn't actually work well as a metaphor.  See Bootstrapping - Wikipedia

In CS, for instance, compilers are pieces of software that turn source code into machine code.  Since they're software, they need a compiler to build them.  GCC (and some other from-scratch compilers; many compilers just depend on GCC) includes a "bootstrap C compiler": a minimal executable (originally hand-coded, though nowadays it's compiled as well) that can compile a minimal "stage 2" compiler, which then compiles the main compiler, and then the main compiler is used to build itself again, with all optimizations available.
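As a toy illustration of that staged build (this is a deliberately simplified model I'm making up, not GCC's actual build machinery):

```python
# Toy model of a staged compiler bootstrap.  The CompilerBinary class and its
# flags are invented purely for illustration; real builds track far more state.
from dataclasses import dataclass

@dataclass(frozen=True)
class CompilerBinary:
    optimized_binary: bool   # was this binary itself built with optimizations?
    can_optimize: bool       # does it apply optimizations when compiling?

    def compile_compiler(self) -> "CompilerBinary":
        # Compiling the full compiler source always yields a compiler that can
        # optimize; whether the resulting *binary* is optimized depends on the
        # compiler doing the building.
        return CompilerBinary(optimized_binary=self.can_optimize, can_optimize=True)

# The seed/bootstrap compiler: produces working but unoptimized code.
seed = CompilerBinary(optimized_binary=False, can_optimize=False)

stage2 = seed.compile_compiler()     # full compiler, built without optimization
final = stage2.compile_compiler()    # full compiler, built with all optimizations
rebuilt = final.compile_compiler()   # rebuilding again changes nothing

print(final == rebuilt)  # True: the chain has reached a fixed point
```

Once the compiler can rebuild itself and get an identical result, it no longer depends on whatever was used to start the chain.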

In fact, you've probably heard the terms "booting up" and "rebooting" your computer.  This is a shortening of the word "bootstrap", and refers to powering on without any software loaded, loading a small amount of code from ROM or Flash (or other mostly-static store), and using that code to load further stages of the operating system.

Dagon134

Allocation of blame/causality is difficult, but I think you have it wrong.

ex. 1 ... He would also waste Tim's $100 which counterfactually could have been used to buy something else for Bob. So Bob is stuck with using the $100 headphone and spending the $300 somewhere else instead.

No.  TIM wasted $100 on a headset that Bob did not want (because he planned to buy a better one).  Bob can choose whether to hide this waste (keeping his $300 but accepting a worse listening experience, with the dubious "benefit" of leaving Tim misled about his misplaced altruism), or to discard the gift and buy the headphones as he'd already planned (being $300 poorer but enjoying better sound, at the cost of making Tim feel bad, though perhaps Tim learns to ask before spending money on someone's behalf).
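To make the accounting explicit, a toy comparison (the dollar amounts are from the example; the subjective "value" numbers are ones I'm inventing purely to show the structure):

```python
# Toy accounting of Bob's two options.  The subjective values are made-up
# illustration numbers, not anything from the original example.

GIFT_HEADSET_VALUE = 150       # assumed value Bob gets from the $100 gift headset
BETTER_HEADPHONES_VALUE = 400  # assumed value Bob gets from the $300 headphones

# Option A: hide the waste -- use the gift, keep the $300 for something else.
option_a = GIFT_HEADSET_VALUE + 300

# Option B: buy the $300 headphones as planned; the gift goes unused.
option_b = BETTER_HEADPHONES_VALUE

print("use gift:", option_a, "| buy anyway:", option_b)
```

Note what's absent from both branches: Tim's $100.  It was spent before Bob chose anything, so whichever option is better for Bob, that $100 was Tim's decision and (if wasted) Tim's waste, not Bob's.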

ex. 2 The world is now stuck with Chris' poor translation on book X with Andy and Bob never touching it again because they have other books to work on.

Umm, here I just disagree.  The world is no worse off for having a bad translation than for having no translation.  If the bad translation is good enough that the incremental value of a good translation doesn't justify doing one, then that is your answer.  If it's not good enough to change the marginal decision to translate, then Andy or Bob should re-translate it.  Either way, Chris has either improved the value of the available books or has had no effect except wasting his own time.
