
Most people in the rationality community are more likely to generate correct conclusions than I am, and are in general better at making decisions. Why is that the case?

Because they have more training data, and are in general more competent than I am. They actually understand the substrate on which they make decisions, and what is likely to happen, and therefore have reason to trust themselves based on their past track record, while I do not. Is the solution therefore just "git gud"?

This sounds unsatisfactory: it compresses competence into a single nebulous attribute rather than recommending concrete steps. It is possible that there are in fact generalizable decision-making algorithms or heuristics that I am unaware of, which I could use to actually generate decisions with good outcomes.

Could it be, then, that I simply haven't had enough training? When was the last time I actually made a decision unguided by the epistemic-modesty-shaped thing I otherwise use, given that relying on my own thought processes is known to have bad outcomes and mess things up?

In which case the solution might be to train in a low-stakes environment, where I can mess up without consequence and learn from that. Problem: such environments are hard to generate in a way that carries over across domains. If I trust my decision process about which tech to buy in Endless Legend, that says nothing about my decision process about what to do when I graduate.

Endless Legend is simple, and the world is complicated. I can therefore fully understand: "this is the best tech to research: I need to convert a lot of villages, so I need influence to spend on that, and this tech generates a lot of it". Figuring out what path to take such that the world benefits the most, by contrast, requires understanding what the world needs (an unsolved problem in itself) and the various effects each path is likely to have. Or, even at the small scale: where to put a particular object in the REACH when it doesn't seem to have an obvious location is a question I would take to another host.

Both of these are problems that do not have a simple, understandable solution where everything falls into place; they do not take place in a universe that I can trust my reasoning about, unlike, say, a Python script. And yet people seem to be relatively good at making decisions in hard-to-understand environments with many uncertainties (such as the actual world).

So the low-stakes environment to train in still has to be hard to understand, and the decisions must be things that affect this environment in non-abstractable ways... or do they? It's possible that what training data gives a person is a good way of abstracting things away into understandable chunks, which one can trust to work on their own. I suppose this is what we call "concepts" or "patterns"; took me long enough to arrive at something so obvious.

So does this mean that I need a better toolbox of these, so that I can understand the world better and therefore make better decisions? This seems a daunting task: the world is complicated, and how can I abstract things away as non-lossily as possible, especially when these are patterns I cannot receive training data on, such as "what AGI (or similar) will look like"? So then the question is: how do I acquire enough of a toolbox of these such that the world appears simpler and more understandable? (The answer is likely "slowly, and with a lot of revisions to perceived patterns as you get evidence against them or clarify what they look like.")

In a conversation with a friend about decision theory, they gave the following rebuttal to FDT:

Suppose there is a gene with two alleles, A and B. A great majority of agents with allele A use FDT. A great majority of agents with allele B use EDT. Omega presents the following variant of Newcomb's problem: instead of examining the structure of your mind, Omega simply sequences your genome and determines which allele you have. The opaque box contains a million dollars if you have allele B, and is empty if you have allele A.

Here, FDT agents would two-box, and more often than not end up with $1,000, as nothing depends on any prediction of their behavior, only on an external factor their actions have no retrocausal effect on: no matter what they do, they are not "proving Omega wrong" in any sense; their genome does not subjunctively depend on their actions. An EDT agent, however, would one-box, as with the regular Newcomb's problem: they do not care about subjunctive dependence.

"If you're so smart," a proponent of EDT might ask of a proponent of FDT, "why ain't you rich?"
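As a minimal sketch of the payoff structure the FDT agent is responding to (the dollar amounts come from the scenario above; the code itself is just my illustration of the dominance reasoning, not anything from the original conversation):

```python
# In the genetic variant, the opaque box's contents depend only on the allele,
# never on the agent's choice.
OPAQUE = {"A": 0, "B": 1_000_000}  # allele -> contents of the opaque box
TRANSPARENT = 1_000                # the transparent box always holds $1,000

def payoff(allele: str, two_box: bool) -> int:
    """Total winnings given the agent's allele and whether they take both boxes."""
    return OPAQUE[allele] + (TRANSPARENT if two_box else 0)

for allele in ("A", "B"):
    gain = payoff(allele, two_box=True) - payoff(allele, two_box=False)
    print(f"allele {allele}: two-boxing gains {gain} over one-boxing")
# The gain is +1000 either way: with no subjunctive dependence between the
# agent's decision and the box's contents, two-boxing dominates.
```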

One easy out is to simply get one's genome sequenced prior to the experiment itself. In that case, both EDT and FDT agents know what is in the boxes beforehand, and will two-box accordingly. The EDT agent's choice can no longer give them any evidence of having one allele or the other.


What if that's impossible, though? Suppose we don't know which allele we have, and are by some contrived means kept from finding out, leaving only knowledge of the gene's existence?

One point in favor of FDT: unlike in the regular Newcomb's problem, a precommitment to one-box prior to the experiment does nothing; even if one makes it, it has no effect on Omega's decision.

Comes the response from a proponent of EDT: No causal effect, perhaps, but such a precommitment is evidence that one carries allele B. Thus, those who make that precommitment are more often than not rewarded!


Consider the multiverse, and the circumstances of an agent's birth in this scenario. 50% of worlds give them allele A, and 50% give them allele B. Regardless of what they do, these probabilities are fixed, as an agent's alleles are determined at random. Thus, if the agent has a greater propensity for two-boxing in this modified Newcomb's problem, regardless of their alleles, they will overall come out ahead in expected value.

Contrast the regular Newcomb's problem: the agent has retrocausal control over the probabilities of their being a one-boxer or two-boxer. By one-boxing, they cause the multiverse to have one more one-boxer in it, even if they are one of the small minority that Omega mis-predicts. Thus, from the multiversal perspective, it is the agents further down the line that determine the probability of an agent one-boxing, rather than a random process.
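To make the contrast between the two setups concrete, here is a rough expected-value sketch. The even allele split is the 50/50 assumption from two paragraphs up; the 0.99 predictor accuracy for the regular problem is an illustrative figure I am assuming, not something specified above:

```python
OPAQUE_FULL = 1_000_000  # contents of the opaque box when it is full
TRANSPARENT = 1_000      # the transparent box, always collected by a two-boxer

# Genetic variant: P(opaque box is full) is pinned at 0.5 by the allele lottery,
# whatever the agent's disposition is, so two-boxing is always +$1,000 in expectation.
p_full = 0.5
ev_one_box_genetic = p_full * OPAQUE_FULL                        # 500,000
ev_two_box_genetic = p_full * OPAQUE_FULL + TRANSPARENT          # 501,000

# Regular Newcomb: the predictor ties P(opaque box is full) to the agent's
# actual choice, so the probability moves with the decision itself.
accuracy = 0.99  # assumed illustrative accuracy
ev_one_box_newcomb = accuracy * OPAQUE_FULL                      # 990,000
ev_two_box_newcomb = (1 - accuracy) * OPAQUE_FULL + TRANSPARENT  # 11,000

print(ev_one_box_genetic, ev_two_box_genetic)
print(ev_one_box_newcomb, ev_two_box_newcomb)
```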

TAPs (trigger-action plans) are a nice tool if you can actually notice the trigger when it happens. If you are of necessity in the middle of doing something (or more precisely, thinking of something) when the trigger happens, you may forget to implement the action, because you're too busy thinking about other things. If something happens on a predictable, regular basis, then an alarm is sufficient to get you to notice it, but this is not always the case.


So, how do you actually notice things when you're in the middle of something and focusing on that?


Option 1: Alter the "something" you are in the middle of. This may be done via forced, otherwise unnecessary repetition of the action and the events leading up to it, or via the creation of a deviation in the flow of the thing. The latter is how post-it notes work. This option is useful for external actions, when logistically feasible. The deviation here serves to increase your lucidity (the quality required to actually notice "Oh wait, it's that time again"), leading you to perform the action accordingly. This overlaps somewhat with option 2, below.


Option 2: Constant Vigilance. Notice all the things, all the time. This is a massive boost to the completability of all TAPs if achieved, but probably takes a good deal of mental energy. It also precludes being "in the middle of" things at all, as the flow must be interrupted by "Has a trigger happened?". It might be possible to combine this with option 1 by weaving Constant Vigilance into a specific procedure, as starting something affords more lucidity than being in the middle of it. I haven't tried this yet; results incoming. In fact, I have set an alarm to remind me to post the results of an attempt at this here. One can accomplish some of this during meditation, when attempting to notice all of one's sensations and/or experiences: one is relatively unlikely to miss the trigger then.

Results: it is hard; harder than meditation. I got lost in thought easily, though I snapped back often enough to successfully execute the action. I note that any change in the thing I was in the middle of prompted a snapback: I was no longer lost in thought, due to suddenly needing to think about external reality. I expect I will get better at this with time, especially since in this case the action needs to be executed daily.