Diabetics can't have the tablets with sugar
This is false.
If this is true, then the most likely world I see in which it gets accepted is one where T3 tests become more like glucose tests: reasonably cheap, and available for patients to self-administer at home.
"We live in a simulation" and "we live in not-a-simulation" are not mutually exclusive.
I appreciate collections of rationality techniques, and I admire the spirit with which this was made. However, after Duncan raised the possibility of uncanny-valley problems, I cross-checked this with what I remember as a CFAR alum and a few issues jumped out at me.
Hamming circles: This needs a warning. If you organize a group into Hamming Circles and they don't know what they're doing, aren't in the right mindspace, or don't have enough shared context and trust, it can backfire pretty severely. People's Hamming problems are often things that are aversive to think about, and attempting to discuss them but having it go poorly can make the problem worse.
Comfort zone expansion: This is not what CFAR means by the phrase at all. The first link describes a mindful walkthrough, which is something one might do prior to comfort zone expansion. The second link is by someone not associated with CFAR, and it says some things that diametrically oppose things I recall CFAR instructors saying and which I think are objectionable.
Focused Grit: This description covers only the first step of a 3-step process. Step two: if, after trying for five minutes, you haven't solved the problem, set another 5-minute timer and spend it brainstorming 5-minute exercises for solving the problem. Step three: do some of those exercises.
[Epistemic status: speculative. Definitely don't try to make a decision based on this without speaking to an endocrinologist first.]
So, let me see if I understand what you wrote, adding in a few things I read on Wikipedia and the interpretations that seem obvious to me.
T3 controls metabolic rate, by upregulating metabolic processes throughout the body. TSH controls the concentration of T3 by setting the rate at which T4 is converted to T3. TSH is tested for, T3 and T4 are usually not. The Wikipedia page for TSH lists diagnoses for the cross-product of T3 and TSH, with primary hyper- and hypothyroidism corresponding to the cases where they are mismatched: high TSH and low T3, or low TSH and high T3. Cases where T3 and TSH are both low indicate iodine deficiency, because iodine is also a necessary part of the conversion from T4 to T3. TSH is linked to the circadian rhythm.
Adding a bit of interpretation of my own: TSH represents the difference between what the body's overall metabolic rate is and what some mechanism thinks it should be. Under this model, symptoms of metabolic-rate-too-low would appear if:
(All diabetics with imperfect blood sugar control would fall into the "unaccounted energy sink" category. I have T1DM. Fibromyalgics probably would too; the characteristic symptom of fibromyalgia is chronic pain of undiagnosed origin, and chronic pain very likely has a corresponding ongoing energy expenditure.)
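The error-signal model above can be sketched as a toy control loop. This is purely illustrative: the `simulate` function, the linear dynamics, and all parameter values are made up for the sketch, not physiology.

```python
# Toy sketch of the model above: TSH as an error signal between a
# metabolic-rate setpoint and the actual rate. All numbers are
# arbitrary illustration values, not physiological parameters.

def simulate(energy_sink, setpoint=1.0, gain=0.5, decay=0.5,
             steps=2000, dt=0.01):
    """Run the loop to steady state; return (tsh, metabolic_rate)."""
    t3 = 0.0
    for _ in range(steps):
        rate = t3 - energy_sink               # sink drains the effective rate
        tsh = setpoint - rate                 # error signal
        t3 += (gain * tsh - decay * t3) * dt  # TSH drives T3 upward
    rate = t3 - energy_sink
    return setpoint - rate, rate

healthy = simulate(energy_sink=0.0)
sick = simulate(energy_sink=0.4)  # e.g. an unaccounted energy sink

# With an energy sink, steady-state TSH is elevated while the effective
# metabolic rate is depressed, matching the mismatch story above.
assert sick[0] > healthy[0] and sick[1] < healthy[1]
```

Even in this crude version, the qualitative prediction matches the comment: an unaccounted energy sink shows up as elevated TSH alongside a depressed effective metabolic rate, without the thyroid itself being broken.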
At this point the selection of possible causes has fanned out enough that it seems implausible for everyone with CFS symptoms to have the same root cause. But it's also the case that, under this model, T3 supplementation is likely to help with a broader range of causes than TSH is.
However, there are two good reasons to hesitate before trying a T3 supplement such as pig thyroid. First: this is bypassing several feedback/regulatory steps in the body, so there's a much higher risk of accidentally overshooting and getting a dangerous overdose. And, second: increasing overall availability of energy in the body can make infections and cancers worse.
The problem goes away if you add finiteness in any of several places: restrict agents to output only decisions of bounded length, or to follow only strategies of bounded length, or constrain expected utilities to finitely many distinct levels. (Making utility a bounded real number doesn't work, but only because there are still infinitely many distinct levels close to the bound.)
The problem also goes away if you allow agents to output a countable sequence of successively better decisions, and define an optimal sequence as one such that for any possible decision, a decision at least that good appears somewhere in the sequence. This seems like the most promising approach.
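A toy instance (my own illustration, not from the original discussion): take decisions to be positive integers with utility u(n) = 1 − 1/n. Utility is bounded, but there are infinitely many distinct levels near the bound, so no single decision is optimal; the sequence 1, 2, 3, … is optimal in the proposed sense.

```python
# Decisions are positive integers; u is bounded above by 1, yet every
# decision is beaten by the next one, so no single optimal decision exists.
def u(n):
    return 1 - 1 / n

assert all(u(n + 1) > u(n) for n in range(1, 1000))

# The sequence d_k = k is optimal in the proposed sense: for any
# decision m, some element of the sequence is at least as good
# (m itself appears at position m).
def beaten_in_sequence(m, horizon=100_000):
    return any(u(k) >= u(m) for k in range(1, horizon + 1))

assert all(beaten_in_sequence(m) for m in (1, 7, 42, 99_999))
```

This also shows why bounding utility alone fails: the bound of 1 is never attained, so "pick the best decision" has no answer, while "emit a sequence that eventually dominates any given decision" does.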
The Center For Applied Rationality (CFAR) checklist is a heuristic for assessing the admissibility of one's own testimony.
Did something get jumbled here? This isn't right at all.
We need not event the wheel, for legal theorists have researched this issue for years, while practitioners and courts have identified heuristics useful to lay people interested in this field.
Grammar aside, the standard legal process and courts are really bad at reaching true conclusions. Taking their practices as wisdom seems likely to be quite bad.
The main thing holding me back from posting on Less Wrong, and I really doubt that I'm alone in this, is that it feels sort of mutually exclusive with posting on my own blog. That blog has to exist for the things I want to post that would be too far off-topic for LW, but keeping it from going dead consumes my entire posting volume.
Not looking at the world in a probabilistic way
Philosophy has long had the hope that eventually, somehow, it would find a set of elegant axioms from which the rest would regrow, like what happened in math. Several branches of philosophy think they did collapse it to a set of elegant axioms (though upon inspection, they actually let the complexity leak back in elsewhere). I think there's a fear, not entirely unjustified, that if you let probabilistic reasoning into too many places then this closes off the possibility of reaching an axiomatization or of ever reaching firm conclusions about interesting questions. Today, it's been long enough to know that the quest for axiomatization was doomed from the start, or at least, the quest for an axiomatization that wasn't itself a probabilistic thing. So allowing probabilistic reasoning shouldn't seem like a big scary concession anymore, but on the other hand, it's still difficult and most philosophers aren't dual-classed into maths.
Using personal preference or personal intuitions as priors instead of some objective measure along the lines of Solomonoff Induction
Unfortunately, Solomonoff Induction falls off the table as soon as the questions get interesting. As a next-best-thing, intuition is not all that bad. I'd criticize a lot of philosophy, not for grounding ideas in intuition, but for treating intuition as a black box rather than as something which can be studied and debugged and improved. Most LW-style philosophy does bottom out at intuition somewhere, it just does a better-than-usual job of patching over intuition's weaknesses.
Moral realism
When you're getting started on learning game theory, there is a point where it looks like it might be building towards an elegant theory of morality, something that would reproduce our moral intuitions, be a great Schelling point, and ground morality really well. Then it runs into roadblocks and doesn't get there, so we're stuck with a hodgepodge metaethics where morality depends on an aggregation of many people's preferences, but there are different ways to aggregate one person's preferences and different ways to aggregate groups' preferences and some preferences don't count and it's all very unsatisfying. But if you haven't hit that wall yet, or you're very optimistic, or you're limiting yourself to sufficiently simple trolley problems, then moral realism seems like a thing.
Mathematical Platonism
This is a trap door into silly arguments about subtleties of the word "exist" which are cleanly and completely separated from all predictions. But if you want to engage with ideas like a mathematical multiverse, you do end up needing to think about subtleties of the word "exist", and math ends up looking more fundamental than physics.
Libertarian free will (I'm looking for arguments other than those from religion)
I'm not sure how libertarian free will relates to the rest of the ideas about free will, but I find thinking about free will gets a lot easier if you first acknowledge that our intuitions are guided by the idea of ordinary freedom (i.e., whether there's a human around with a whip), and then go a step further and just think about ordinary freedom instead.
The view that there actually exist abstract "tables" and "chairs" and not just particles arranged into those forms
These ideas come back in slightly different forms when you start considering mathematical multiverses and low-fidelity simulations of the universe. For example, if you accept the simulation argument, and further suppose that the simulation would not be full-fidelity but would be designed to make this fact hard to notice, then you get the conclusion that certain abstract objects exist and their constituent particles don't.
The existence of non-physical minds (I'm looking for arguments other than the argument from the Hard Problem of Consciousness)
The idea of minds as cognitive algorithms leads to something sort-of like this; in that framing, minds are physical objects with a dual existence in platonic math-realm that diverges if physics causes a deviation from the algorithm.
You have noticed things happening that don't match your model of how you think the world (and nutrition in particular) should work. Rather than defy the data, maybe you could come up with a different model that better explains the observations?