thoughtfulape comments on Controlling your inner control circuits - Less Wrong
An observation: pjeby, if you really have a self-help product that does what it says on the tin for anyone who gives it a fair try, I would argue that the most efficient way of establishing credibility with the Less Wrong community would be to convince a highly regarded poster of that fact. To that end, I would suggest that offering your product to Eliezer Yudkowsky for free, or even paying him to try it in the form of a donation to his Singularity Institute, would be more effective than the back and forth I see here. It should be possible to establish a mutually satisfactory set of criteria for what constitutes 'really trying it' beforehand, to avoid subsequent accusations of bad faith.
What makes you think that that's my goal?
pjeby: If your goal isn't to convince the Less Wrong community of the effectiveness of your methodology, then I am truly puzzled as to why you post here. If convincing others is not your goal, then what is?
Helping others.
Do you expect anyone to benefit from your expertise if you can't convince them you have it?
pjeby will be more likely to notice this proposition if you post it as a reply to one of his comments, not one of mine.
Nope. The fact that you, personally, experience winning a lottery, doesn't support a theory that playing a lottery is a profitable enterprise.
What? If the odds of the lottery are uncertain, and your sample size is actually one, then surely it should shift your estimate of profitability.
Obviously a larger sample is better, and the degree to which it shifts your estimate will depend on your prior, but to suggest the evidence would be worthless in this instance seems odd.
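The point about priors and a sample of one can be sketched with a toy Beta-Bernoulli update. All numbers here are illustrative assumptions, not anything from this thread: a weak prior moves a lot on a single observed win, while a strongly sceptical prior barely moves at all.

```python
from fractions import Fraction

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution over the win probability."""
    return Fraction(a, a + b)

# Weak prior: we genuinely don't know the odds of the gamble.
prior_mean = beta_mean(1, 1)          # 1/2
posterior_mean = beta_mean(1 + 1, 1)  # after one observed win: 2/3

# Strongly sceptical prior: we already believe wins are very rare.
sceptic_prior = beta_mean(1, 999)     # ~0.001
sceptic_post = beta_mean(1 + 1, 999)  # after the same win: ~0.002

print(prior_mean, "->", posterior_mean)
print(float(sceptic_prior), "->", float(sceptic_post))
```

So a single data point does shift the estimate, as the comment says; how much it shifts depends entirely on how strong the prior was.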
It's impossible for playing a lottery to be profitable, both before you ever played it, and after you won a million dollars. The tenth decimal place doesn't really matter.
I wonder what your definition of 'profit' is.
True story: when I was a child, I "invested" about 20 rubles in a slot machine. I won about 50 rubles that day and never played slot machines (or any lottery at all) again since then. So:
Assuming that we're using a dictionary definition of the word 'profit', the entire 'series of transactions' with the slot machine was de facto profitable for me.
It's obvious that, to interpret my words correctly (as not being obviously wrong), you need to consider only large (cumulative) profit. And again, even if you did win a million dollars, that still doesn't count; it only counts if you can show that you were likely to win a million dollars (even if you didn't).
The only way I can make sense of your comment is to assume that you're defining the word lottery to mean a gamble with negative expected value. In that case, your claim is tautologically correct, but as far as I can tell, largely irrelevant to a situation such as this, where the point is that we don't know the expected value of the gamble and are trying to discover it by looking at evidence of its returns.
That the expected value is negative is a statement about our state of knowledge. We need careful studies to show whether a technique/medicine/etc. is effective precisely because, without such a study, our state of knowledge implies that the expected value of the technique is negative. At the same time, we expect the new state of knowledge after the study to show either that the technique is useful, or that it's not.
That's one of the traps of woo: you often can't efficiently demonstrate that it's effective, and, through an intuition probably related to conservation of expected evidence, you insist that if you don't have a better method of showing its effectiveness, the best available method should be enough, since it seems ridiculous to hold the claim to a higher standard of proof on one side than on the other. But you have to: the prior belief plays its part, and the threshold for changing a decision may be too far away to cross with simple arguments. The intuitive thrust of the principle doesn't carry over to expected utility because of that threshold. It may well be that there is a potential test that could demonstrate a technique is effective, but the test is unavailable, and without performing the test the expected value of the technique remains negative.
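The threshold point can be made concrete with a small decision sketch. All the numbers below are assumptions invented for illustration: trying a technique has a cost, so it is only worth trying once the probability that it works clears a break-even threshold, and a weak argument that nudges the probability upward need not change the decision at all.

```python
# Assumed, illustrative numbers: cost of trying a speculative
# technique and the payoff if it actually works.
COST = 10.0
BENEFIT = 50.0

def expected_utility(p_works):
    """Expected profit of trying the technique at probability p_works."""
    return p_works * BENEFIT - COST

# Break-even probability: below this, trying it has negative expected value.
threshold = COST / BENEFIT  # 0.2

p = 0.05                 # sceptical prior probability that it works
p_after_argument = 0.12  # a persuasive anecdote nudges it upward

print(expected_utility(p))               # negative: don't try it
print(expected_utility(p_after_argument))  # still negative: decision unchanged
```

The anecdote genuinely moved the probability, but the decision stays the same until the estimate crosses the threshold, which is the sense in which "the best available evidence" can still be not enough.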
I'm afraid I'm struggling to connect this to your original objections. Would you mind clarifying?
ETA: By way of attempting to clarify my issue with your objection, I think the lottery example differs from this situation in two important ways. AFAICT, the uselessness of evidence that a single person has won the lottery is a result of:
the fact that we usually know the odds of winning the lottery are very low, so evidence has little ability to shift our priors; and
that in addition to the evidence of the single winner, we also have evidence of incredibly many losers, so the sum of evidence does not favour a conclusion of profitability.
Neither of these seems to be applicable here.
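The second point, about the losers, can be shown with the same toy Bayesian setup. The ticket price, prize, and loser count below are assumptions for illustration: one winner viewed in isolation makes the gamble look profitable, but pooling that win with the known mass of losers reverses the conclusion.

```python
from fractions import Fraction

def posterior_win_prob(wins, losses, a=1, b=1):
    """Posterior mean of the win probability under a Beta(a, b) prior."""
    return Fraction(a + wins, a + b + wins + losses)

# Assumed, illustrative stakes.
TICKET, PRIZE = 1, 1_000_000

def expected_value(p):
    """Expected profit of one ticket at win probability p."""
    return p * PRIZE - TICKET

# Looking only at the single winner, the gamble seems wildly profitable...
ev_winner_only = expected_value(posterior_win_prob(1, 0))
# ...but adding ten million observed losers reverses the conclusion.
ev_full = expected_value(posterior_win_prob(1, 10_000_000))

print(float(ev_winner_only), float(ev_full))
```

This is exactly why the single-winner evidence is worthless for the lottery but not, by itself, for a case where neither the odds nor the loser count is known.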
The analogy is this: using speculative self-help techniques corresponds to playing a lottery. In both cases you expect a negative outcome, and in both cases making one more observation, even if it's an observation of success, and even if you experience it personally, means very little for the estimate of the expected outcome. There is no analogy in the lottery for studies that support the efficacy of self-help techniques (or of some medicine).
I don't think the principle of charity generally extends so far as to oblige people to reinterpret you when you haven't gone to the trouble of phrasing your comments so they don't sound obviously wrong.
If you see a claim that has one interpretation making it obviously wrong and another that is sensible, and you expect a sensible claim, it's a simple matter of robust communication to assume the sensible one and ignore the obviously wrong one. It's much more likely that the intended message behind an inapt textual transcription wasn't the obviously wrong one; the content of the communication is that unvoiced thought, not the text used to communicate it.
But if the obvious interpretation of what you said was obviously wrong, then it's your fault, not the reader's, if you're misunderstood.
All a reader can go by is the text used to communicate the thought. What we have on this site is text which responds to other text. I could just assume you said "Why yes, thoughtfulape, that's a marvelous idea! You should do that nine times. Purple monkey dishwasher." if I were expected to respond to things you didn't say.
My point is that the prior under which you interpret the text is shaped by the expectations about the source of the text. If the text, taken alone, is seen as likely meaning something that you didn't expect to be said, then the knowledge about what you expect to be said takes precedence over the knowledge of what a given piece of text could mean if taken out of context. Certainly, you can't read minds without data, but the data is about minds, and that's a significant factor in its interpretation.