Comment author: johnlawrenceaspden 14 April 2016 04:12:42PM *  1 point [-]

Absolutely, hormone levels would have to be higher than they were, in order to make any difference.

Whether that means 'out of the normal range' for any particular hormone, I don't know. Nobody seems to have the faintest idea what 'normal range' means for these things. But if there are serious cases of resistance, the level of the thing resisted would have to be well above normal in order to make any difference.

And it should probably cause TSH suppression. In my own case, I had a TSH of 2.51 when I first started complaining of symptoms, and it was 4.06 when the symptoms came back. With a tiny bit of thyroid, which was nevertheless enough to get rid of all manifestations of CFS and cause some hyper symptoms, it was 2.31. So it looks like the treatment pushed TSH down, but still not even as far as some people say is optimal.

And all of that could be noise in the test. The circadian cycle means TSH results can vary by a factor of two depending on when the blood is drawn. Go figure!

What would be ideal would be to figure out the cause of the 'hormone resistance', and fix it, rather than trying to overwhelm it.

It might even be a bad idea to fix it, if it's performing some vital immune defence function.

Comment author: PeterDonis 14 April 2016 06:26:33PM *  0 points [-]

Whether that means 'out of the normal range' for any particular hormone, I don't know. Nobody seems to have the faintest idea what 'normal range' means for these things.

Yes, a better way to put it would be that the bloodstream levels of T3/T4 should be significantly higher when the person is treated and feels better than they were before treatment.

it should probably cause TSH suppression

That would be expected, yes.

What would be ideal would be to figure out the cause of the 'hormone resistance', and fix it, rather than trying to overwhelm it.

Yes, agreed. See below.

It might even be a bad idea to fix it, if it's performing some vital immune defence function.

If this is true, it might also be a bad idea to overwhelm it. The "immune defense" hypothesis says that, if a person is feeling symptoms of CFS/etc., it's because some pathogen is trying to attack their cells, so the immune defense kicks in, but as a side effect it also depresses normal cell metabolism. If thyroid therapy increases normal cell metabolism by overwhelming the immune defense, it might also increase the ability of the pathogen to infect cells. The only real fix in this case would be to find the pathogen and eliminate it.

Comment author: johnlawrenceaspden 14 April 2016 01:04:31PM *  1 point [-]

Hi Peter, I don't want to claim credit for this, it's mostly the work of John Lowe/Broda Barnes (and now Gordon Skinner). I've just put their ideas into what I think is a fairly compelling order, and connected them to some ideas of Greg Cochran and Sarah Myhill. I'm seeing myself more as a speaker for the dead.

I was definitely starting off thinking about 'something wrong with those tests, a few cases missed, maybe', and I absolutely agree that if we take the anecdotal evidence that T3/T4/NDT help in CFS at face value, then this is definitely evidence for that. Two different hypotheses can make a similar prediction.

I've ended up thinking about the 'hormone resistance' idea because it seems like the sort of thing that might well be true (once you've realised that it works that way in diabetes), and it's the simplest explanation for what's going on. Sometimes you'll see people with symptoms and a TSH of 2; sometimes you'll see people with a TSH of 30 (in whom something is obviously going wrong) but no symptoms at all yet.

As for 'hormone resistance' as an idea, you're right that if the action of the hormones was completely blocked, adding extra stuff to the bloodstream wouldn't make any difference. And also, those people should be very ill indeed with really obvious hypothyroidism. (Severe cases are easy to recognise).

But there's no reason why resistance should be an on/off thing. There are all sorts of chemical reactions taking place between hormones in the blood and their effect on the mitochondria. All it would need is for something to mysteriously slow one of the reactions down.

John Lowe was forced into inventing the idea of 'peripheral resistance to thyroid hormone' by noticing that a lot (about 25%) of his patients didn't get better under (or in fact even notice) his attempts to fix them with T4/T3. They should have been made quite ill by this if hormone deficiency wasn't the problem. So he tried higher and higher doses of T3, and found that that worked. He never seems to have connected it to diabetes or to have wondered if it was present in the other cases. I think he thought that 'central hypothyroidism' was the principal problem (another thing that's missed by TSH).

It would make sense as an immune response. Something nasty (virus most likely) might be trying to get into the cells, and in order to avoid being eaten alive, the body somehow tries to wall off the cells so it's harder for things to get in and out. It's a very scorched-earth defense, but those sorts of things happen. Often in bacterial diseases, your body takes most of the iron out of your bloodstream. That's bad for you, but worse for the bacteria. Fever's the same. It does you a lot of harm but it does the enemy more harm.

We have very little idea how the immune system works, or how pathogens try to get round it, but it's a very strange and cruel world down there, and there's group-selection on both sides, so I think it's best viewed as a billion year war, with strategies, tricks, camouflage, and even cleverness involved.

Alternative medicine seems to be much more into the idea of 'toxins', by which they mean chemicals that weren't around when we were evolving. That might well work too, but then you'd expect that there'd be a specific chemical which could reliably induce the resistance, and thus the symptoms of hypothyroidism. I don't know of one.

Conventional medicine seems to mostly revolve around the idea of making up new chemicals and seeing what they do. And they seem to refuse to consider any evidence that doesn't come from very careful formal trials (which are very expensive). I don't think that's a terribly good approach, myself, but it seems to be the best we've got.

All credit to them for wanting to avoid fooling themselves, but I think they've swapped 'bad data' for 'no data', when what they should have done is 'been careful'.

I wish they'd spend more time thinking about cause and effect and what all these systems are 'for', and accepting that millions screaming in pain is not just a big placebo effect or a 'psychological problem'.

Comment author: PeterDonis 14 April 2016 02:43:19PM 2 points [-]

Ok, so the "hormone resistance" hypothesis is really something more like: the rate of some key reaction involving T3/T4 is being slowed down by some unknown factor; since we don't know what the factor is, we can't fix it directly, but we can increase the reaction rate by increasing the concentration of T3/T4 in the bloodstream to above normal levels, to compensate for the damping effect of the unknown factor.
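
To make the compensation idea concrete, here is a purely illustrative toy (not a physiological model; the numbers and the first-order form are invented for the sketch): if the hormone's effect were roughly proportional to a rate constant times the bloodstream concentration, then an unknown factor that halves the rate constant could be offset by doubling the concentration.

```python
# Toy first-order model: effect ~ rate_constant * concentration.
# The numbers are invented for illustration only.
def effect(rate_constant, concentration):
    return rate_constant * concentration

normal = effect(1.0, 1.0)        # healthy baseline effect
resistant = effect(0.5, 1.0)     # unknown factor halves the rate constant
compensated = effect(0.5, 2.0)   # doubling the dose restores the effect
```

Note this sketch also shows why complete blockage is different in kind: if the rate constant were zero, no concentration would compensate.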

This hypothesis makes an obvious testable prediction: that when people with CFS/etc. who are treated with thyroid extract feel better, the T3/T4 levels in their bloodstream should be above normal. Or, conversely, if their bloodstream T3/T4 levels are within the normal range, they should not feel better, even though they are being treated with thyroid extract. I don't know if any existing data has this information.

Comment author: PeterDonis 14 April 2016 03:21:30AM 2 points [-]

I have a couple of questions about your hypothesis.

First, as I understand it, you are hypothesizing that there are people who have symptoms of CFS/etc. but normal blood levels of T3, T4, and TSH, who can nevertheless be helped by taking thyroid extract. And your hypothesized explanation for why these people are having symptoms of CFS/etc. is that, even though there are normal levels of T3 and T4 in their bloodstream, those hormones are not getting into their cells where they are actually needed. But if that is the case, how will putting more T3 and T4 into their bloodstream help? It still won't be getting into the cells. It seems to me that, if your hypothesized cause were correct, the indicated treatment would be to somehow inject T3/T4 directly into the cells--or else to figure out what is blocking the hormones from getting into the cells, and fix that. But just putting more T3/T4 into the bloodstream, ISTM, should not work if your hypothesized cause were correct.

Second, as I understand it, you are taking the fact that treating these people with thyroid extract appears to help them, as evidence that your hypothesis is correct. But it seems to me that this fact is actually evidence for a different hypothesis: the hypothesis that the definition of "normal" levels of T3, T4, and TSH is incorrect. More specifically, that "normal" levels should be defined, not in a "one size fits all" fashion, but specifically for each person based on some set of factors that can vary from person to person. (Obvious candidates would be body weight/BMI and genetic factors.)

Comment author: endoself 22 July 2014 05:01:59AM *  6 points [-]

MMEU isn't stable upon reflection. Suppose that in addition to the mysterious [0.4, 0.6] coin, you had a fair coin, and I tell you that I'll offer bet 1 ("pay 50¢ to be paid $1.10 if the coin came up heads") if the fair coin comes up heads and bet 2 if the fair coin comes up tails, but you have to choose whether to accept or reject before flipping the fair coin to decide which bet will be chosen. In this case, the Knightian uncertainty cancels out, and your expected winnings are +5¢ no matter which value in [0.4, 0.6] is taken to be the true probability of the mysterious coin, so you would take this bet on MMEU.

Upon seeing how the fair coin turns out, however, MMEU would tell you to reject whichever of bets 1 and 2 is offered. Thus, if I offer to let you see the result of the fair coin before deciding whether to accept the bet, you will actually prefer not to see the coin, for an expected outcome of +5¢, rather than see the coin, reject the bet, and win nothing with certainty. Alternatively, if given the chance, you would prefer to self-modify so as to not exhibit ambiguity aversion in this scenario.

In general, any agent using a decision rule that is not generalized Bayesian performs strictly worse than some generalized Bayes decision rule. Note, though, that this does not mean that such an agent is forced to accept at least one of bets 1 and 2, since rejecting whichever of them is offered is a Bayes rule; for example, a Bayesian agent who believes that the bookie knows something that they don't will behave in this way. It does mean, though, that there are many situations where MMEU cannot work, such as in my example above, since in such scenarios it is not equivalent to any Bayes rule.
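
The expected-value claims above can be checked numerically. A quick sketch (bet 2 is taken to be the mirror of bet 1, paying out on tails):

```python
# Bet 1: pay 50 cents, win $1.10 if the mysterious coin lands heads.
def ev_bet1(p):
    return 1.10 * p - 0.50

# Bet 2: the mirror bet, paying out on tails.
def ev_bet2(p):
    return 1.10 * (1 - p) - 0.50

# Combined bet: a fair coin decides which bet you get,
# with acceptance chosen before the fair coin is flipped.
def ev_combined(p):
    return 0.5 * ev_bet1(p) + 0.5 * ev_bet2(p)

# MMEU scores an action by its worst-case expectation over the
# Knightian interval; the expectations are linear in p, so checking
# the endpoints of [0.4, 0.6] suffices.
endpoints = [0.4, 0.6]
mmeu_bet1 = min(ev_bet1(p) for p in endpoints)          # -0.06: reject
mmeu_bet2 = min(ev_bet2(p) for p in endpoints)          # -0.06: reject
mmeu_combined = min(ev_combined(p) for p in endpoints)  # +0.05: accept
```

The p-dependence cancels in the combined bet, so MMEU accepts it; yet after the fair coin is revealed, MMEU rejects whichever single bet remains, which is the instability described above.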

Comment author: PeterDonis 23 July 2014 12:23:43AM 2 points [-]

In this case, the Knightian uncertainty cancels out

Does it? You still know that you will only be able to take one of the two bets; you just don't know which one. The Knightian uncertainty only cancels out if you know you can take both bets.

Comment author: [deleted] 19 June 2011 12:19:28PM 2 points [-]

2: Machine does not allow interaction with other real people. (Less-trivially fixable, but still very fixable. Networked MBLSes would do the trick, and/or ones with input devices to let outsiders communicate with folks who were in them.)

How could you tell the difference? Let's say I claim to have built an MBLS that doesn't contain any sentients whatsoever and invite you to test it for an hour. (I guarantee you it won't rewire any preferences or memories; no cheating here.) Do you expect to not be happy? I have taken great care that emotions like loneliness or guilt won't arise and that you will have plenty of fun. What would be missing?

Like in my response to Yasuo, I find it really weird to distinguish states that have no different experiences, that feel exactly the same.

Let's consider another case: suppose my neurochemistry were altered so I just had a really high happiness set point [...] but had comparable emotional range to what I have now [...] so I could dip low when unpleasant things happened [...]

Why would you want that? To me, that sounds like deliberately crippling a good solution. What good does it do to be in a low mood when something bad happens? I'd assume that this isn't an easy question to answer and I'm not calling you out on it, but "I want to be able to feel something bad" sounds positively deranged.

(I can see uses with regards to honest signaling, but then a constant high set-point and a better ability to lie would be preferable.)

It does not seem like a transmuted orgasmium version of "me" would remember much [...]. Remembering things is not universally enjoyable, and anyway it's rarely the most enjoyable thing I could be doing; this faculty would be replaced.

Yes, I would imagine orgasmium to essentially have no memory or only insofar as it's necessary for survival and normal operations. Why does that matter? You already have a very unreliable and sparse memory. You wouldn't lose anything great in orgasmium; it would always be present. I can only think of the intuition "the only way to access some of the good things that happened to me, right now, is through my memory, so if I lost it, those good things would be gone". Orgasmium is always amazing.

But then, that can't be exactly right, as you say you'd be more at ease to have memory you simply never use. I can't understand this. If you don't use it, how can it possibly affect your well-being, at any point? How can you value something that doesn't have a causal connection to you?

I think in general this boils down to: I don't want to lose capacities that I currently have.

How do you know that? I'm not trying to play the postmodernism card "How do we know anything?", I'm genuinely curious how you arrived at this conclusion. If I try to answer the question "Do I care about losing capacities?", I go through thought experiments and try to imagine scenarios that are only distinguished by the amount of capacities I have and then see what emotional reaction comes up. But then I'm still answering the question based on my (anticipated and real) rewards, so I'm really deciding what state I would enjoy more and pick the more enjoyable one (or less painful one). Wireheading, however, is always maximally enjoyable, so it seems I should always choose it.

(For completeness, I would normally agree with you that losing capacities is bad, but only because losing optimization power makes it harder to arrive at my goals. If I saw no need for more power, e.g. because I'm already maximally happy and there's a system to ensure sustainability, I'd happily give up everything.)

(Finally, I really appreciate your detailed and charitable answer.)

In response to comment by [deleted] on Why No Wireheading?
Comment author: PeterDonis 20 July 2014 06:23:50AM *  1 point [-]

Apologies for coming to the discussion very, very late, but I just ran across this.

If I saw no need for more power, e.g. because I'm already maximally happy and there's a system to ensure sustainability, I'd happily give up everything.

How could you possibly get into this epistemic state? That is, how could you possibly be so sure of the sustainability of your maximally happy state, without any intervention from you, that you would be willing to give up all your optimization power?

(This isn't the only reason why I personally would not choose wireheading, but other reasons have already been well discussed in this thread and I haven't seen anyone else zero in on this particular point.)

Comment author: Viliam_Bur 04 March 2014 04:47:22PM 1 point [-]

That sample question reminds me of a "lie score", which is a hidden part of some personality tests. Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie. Most people will lie on one or two of ten such questions, but the rule of thumb is that if they lie on five or more, you just throw the questionnaire away and declare them a cheater. -- However, if they didn't lie on any of these questions, you do a background check on whether they have studied psychology. And you keep in mind that the test score may be manipulated.
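
That screening rule of thumb is simple enough to sketch as a tiny function (the thresholds are the ones given above; the function name and return strings are my own invention):

```python
def screen(lie_answers, discard_threshold=5):
    """lie_answers: list of booleans, one per lie-scale item,
    True meaning the respondent gave the implausibly 'nice' answer."""
    lies = sum(lie_answers)
    if lies >= discard_threshold:
        return "discard: likely faking good"
    if lies == 0:
        return "suspicious: check for psychology background"
    return "accept"
```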

Okay, I admit that this problem would be much worse for rationality tests, because if you want a person with a given personality, they most likely didn't study psychology. But if CFAR or similar organizations become very popular, then many candidates for highly rational people will be "tainted" by the explicit study of rationality, simply because studying rationality explicitly is probably a rational thing to do (this is just an assumption), but it's also what an irrational person self-identifying as a rationalist would do. Also, practicing for IQ tests is obvious cheating, but practicing for getting better at rational tasks is the rational thing to do, and a wannabe rationalist would do it, too.

Well, it seems like the rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits... maybe even reaction times or lie detectors.

Comment author: PeterDonis 06 March 2014 11:43:06PM *  0 points [-]

Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie.

On the Crowne-Marlowe scale, it looks to me (having found a copy online and taken it) like most of the questions are of this form. When I answered all of the questions honestly, I scored 6, which, according to the test, indicates that I am "more willing than most people to respond to tests truthfully"; but what it indicates to me is that, for all but 6 out of 33 questions, the "nice" answer was a lie, at least for me.

The 6 questions were the ones where the answer I gave was, according to the test, the "nice" one, but just happened to be the truth in my case: for example, one of the 6 was "T F I like to gossip at times"; I answered "F", which is the "nice" answer according to the test--presumably on the assumption that most people do like to gossip but don't want to admit it--but I genuinely don't like to gossip at all, and can't stand talking to people who do. Of course, now you have the problem of deciding whether that statement is true or not. :-)

Could a rationality test be gamed by lying? I think that possibility is inevitable for a test where all you can do is ask the subject questions; you always have the issue of how to know they are answering honestly.

Comment author: ChrisHallquist 02 March 2014 10:35:33PM 1 point [-]

Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.

Comment author: PeterDonis 03 March 2014 04:46:13PM *  0 points [-]

Is that because you think it's necessary to Wei_Dai's argument, or just because you would like people to be up front about what they think?

Comment author: Wei_Dai 01 March 2014 09:21:52AM *  39 points [-]

So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

In other words, when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.

Comment author: PeterDonis 02 March 2014 02:48:04AM 0 points [-]

(although of course I don't trust your rationality either)

I'm not sure this qualifier is necessary. Your argument is sufficient to establish your point (which I agree with) even if you do trust the other's rationality.

Comment author: Eliezer_Yudkowsky 13 June 2013 10:30:07PM 4 points [-]

Demand for extremely safe assets increased (people wanted to hold more money), the same reason Treasury bonds briefly went to negative returns; demand for loans decreased and this caused destruction of money via the logic of fractional reserve banking; the shadow banking sector contracted so financial entities had to use money instead of collateral; etc.

Comment author: PeterDonis 02 March 2014 02:21:01AM *  0 points [-]

Sorry for the late comment but I'm just running across this thread.

demand for loans decreased and this caused destruction of money via the logic of fractional reserve banking

This is an interesting comment which I haven't seen talked about much on econblogs (or other sources of information about economics, for that matter). I understand the logic: fractional reserve banking is basically using loans as a money multiplier, so fewer loans means less multiplication, hence effectively less money supply. But it makes me wonder: what happens when the loan demand goes up again? Do you then have to reverse quantitative easing and effectively retire money to keep things in balance? Do any mainstream economists talk about that?
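
The multiplier logic can be sketched with a toy geometric-series model (illustrative numbers only, not real data): each deposit is partly re-lent and re-deposited, and a "lending fraction" below 1 stands in for slack loan demand.

```python
def broad_money(base, reserve_ratio, lending_fraction):
    """Each deposit keeps reserve_ratio in reserve; lending_fraction
    of the allowable remainder is lent out and re-deposited, so broad
    money is the geometric series base / (1 - redeposit_rate)."""
    redeposit = lending_fraction * (1 - reserve_ratio)
    return base / (1 - redeposit)

full = broad_money(100.0, 0.10, 1.0)   # maximal multiplication: 10x base
slump = broad_money(100.0, 0.10, 0.5)  # loan demand halved: money contracts
```

In this toy, a recovery in loan demand (lending_fraction rising back toward 1) expands broad money again even with the monetary base unchanged, which is why the question of whether the base would then need to be shrunk arises.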

Comment author: CronoDAS 13 June 2013 11:55:34PM 4 points [-]

EDIT: Also- how DID the economists figure it out anyway? I would have thought that, although circumstances can increase or reduce it, inflationary effects would be inevitable if you increased the money supply that much.

When interest rates are virtually zero, cash and short-term debt become interchangeable. There's no incentive to lend your cash on a short-term basis, so people (and corporations) start holding cash as a store of value instead of lending it. (After all, you can spend cash - or, more accurately, checking account balances - directly, but you can't spend a short-term bond.) Prices don't go up if all the new money just ends up under Apple Computer's proverbial mattress instead of in the hands of someone who is going to spend it.

See also.

Comment author: PeterDonis 02 March 2014 02:18:30AM *  0 points [-]

Sorry for the late comment but I'm just running across this thread.

Prices don't go up if all the new money just ends up under Apple Computer's proverbial mattress instead of in the hands of someone who is going to spend it.

But as far as I know, mainstream economists (like those at the Fed) did not predict that this would happen; they thought quantitative easing would get banks (and others with large cash balances) lending again. If banks had started lending again, by your analysis (which I agree with), we would have seen significant inflation because of the growth in the money supply.

So it looks to me like the only reason the Fed got the inflation prediction right was that they got the lending prediction wrong. I don't think that counts as an instance of "we predicted critical event W".
