Will_Newsome comments on Feed the spinoff heuristic! - Less Wrong

49 Post author: CarlShulman 09 February 2012 07:41AM


Comment author: Will_Newsome 11 February 2012 11:02:21AM 2 points [-]

Good point, but to some extent that might defeat the purpose. Since my model is that psi is evasive, I expect that the more people I clue in to the results or even the existence of the experiments, the less likely it is I'll get significant or sensible results. And with the retrocausal effects demonstrated by PEAR and so on, if I ever intend to publicize the results in the future then that itself is enough to cause psi to get evasive. Kennedy actually recommends keeping self-experimentation to oneself and precommitting to telling no one about the results for these reasons. So basically even if you get incredibly strong results, you're left with a bunch of incommunicable evidence. Meh.

I have various responses ready for our other conversation by the way, which I'd like to get back to soon. I was finally able to get a solid twenty-two hours of sleep. My fluid intelligence basically stops existing when sleep-deprived.

Comment author: Vaniver 11 February 2012 04:08:55PM 10 points [-]

And with the retrocausal effects demonstrated by PEAR and so on, if I ever intend to publicize the results in the future then that itself is enough to cause psi to get evasive.

This reminds me of the story of the poker player who concluded it was unlucky to track his winnings and losses because whenever he did it, he lost way more than he expected to.

Comment author: gwern 11 February 2012 04:11:12PM 3 points [-]
Comment author: Vaniver 11 February 2012 07:55:45PM 0 points [-]

Thanks for the link! (I think I saw it first in Rational Decisions, since I hadn't upvoted that quote before.)

Comment author: Will_Newsome 11 February 2012 04:27:02PM -1 points [-]

Seems plausible that his observations were correct if he had a small sample size, even if his judgment about what to do given those observations was not. (I say this only because the default reaction of "what an impossibly idiotic person" might deserve a slight buffer when, as casual readers, we don't know many actual details of the case in question; who knows what evidence is filtered or fictional, and what not.)

Comment author: FeepingCreature 12 February 2012 03:27:05PM 5 points [-]

Sorry for butting in, but don't you find it strangely convenient that your psi effect is defined just so as to move it outside the domain of scientific inquiry? Do you anticipate ever finding a way to reliably distinguish it from random chance, or do you anticipate forming another excuse, ahem, reason why you should have expected from the start that the way you just tried would not reliably show it? I'd claim you're chasing invisible dragons, but I find it hard to believe that you haven't thought of the comparison yourself, which leaves me confused. What does an effect look like that is real but cannot be distinguished from random chance by any reliable method? How would you extract utility from such an effect? And is it worth it to break your tools of inquiry, which otherwise work very well, just so you can end up believing in an effect that is true but useless? Food for thought.

Comment author: Will_Newsome 13 February 2012 01:45:35PM 4 points [-]

I am aware of this. I would have to be incredibly stupid not to be aware of it.

Do you anticipate ever finding a way to reliably distinguish it from random chance

I can reliably distinguish it from random chance, but by hypothesis I just can't tell you about it. I can get evidence, just not communicable evidence.

I think maybe every time I post about evasive psi I should include a standard disclaimer along the lines of "Yes, I realize how incredibly dodgy this sounds and I also find it rather frustrating, but bringing it up and harping on it never leads anywhere."

Comment author: Eugine_Nier 12 February 2012 09:56:19PM *  1 point [-]

How about trying to leave a line of retreat and imagine what the world would be like if the theory Will is proposing were correct?

Comment author: FeepingCreature 12 February 2012 10:08:37PM *  2 points [-]

That's my point: I don't expect to be able to make consistently differing observations! If his theory is correct, we still wouldn't be able to reliably exploit that feature.

I'm not saying it's wrong, I'm saying even if it's right it's useless to believe.

I mean, if there is some form of reliable psi, I'll have a party, because that'd be awesome.

Comment author: Will_Newsome 13 February 2012 01:48:12PM 4 points [-]

I think you should look more closely at the arguments I made above: my hypothesis makes testable predictions, but if verified the evidence isn't reliably communicable to other people. By my hypothesis psi is perhaps "exploitable" but I cringe at the thought of trying to "exploit" a little-understood agentic process in the case that it actually exists.

Comment author: Desrtopa 13 February 2012 01:50:25PM 0 points [-]

but I cringe at the thought of trying to "exploit" a little-understood agentic process in the case that it actually exists.

Why?

Comment author: Will_Newsome 13 February 2012 02:05:28PM 5 points [-]

A safety heuristic. Just say no to demons, for the same reason you should say no to drugs until you figure out what they are, what they do, and the intentions of the agent offering them to you.

Comment author: Will_Newsome 13 February 2012 01:58:38PM *  5 points [-]

(E.g., imagine a transhumanly intelligent agent who only hangs out with you when it knows that no one will believe that it hung out with you. This means that when it hangs out with you it can do arbitrarily magical things, but you'll never be able to tell anyone about it, because the agent went out of its way to keep that from happening, and it's freakin' transhumanly intelligent, so you know that any apparent chance of convincing others of its visit is probably not actually a chance. Is this theory improbable? Absolutely. But supposing that the agent actually does hang out with you and does arbitrarily magical stuff, you don't have any way of convincing others that the theory is a posteriori probable, and you'll probably just end up making a fool out of yourself if you try, as the agent predicted.

I think a problem might be when people think of psi they think 'ability to shoot fireballs' rather than 'convincing superintelligences to act on your behalf' (note that that's just one possible mechanism of many and we shouldn't privilege any hypotheses yet). If people thought they were dealing with intelligent agents then they'd use the parts of their brain designed for dealing with agents, and those parts are pretty good at what they do. Note we only want to use those parts because, at least in my opinion, psi as a relatively passive phenomenon seems to be a falsified hypothesis, or at the very least it doesn't explain a ton of things that seem just as real as passive psi phenomena.)

Comment author: thomblake 13 February 2012 04:35:26PM 3 points [-]

Oh, you mean Bill Murray.

Comment author: wallowinmaya 13 February 2012 05:50:30PM 1 point [-]

Kennedy actually recommends keeping self-experimentation to oneself and precommitting to telling no one about the results for these reasons.

Does Kennedy recommend a specific type of self-experimentation? What's the best way to test one's psi-abilities in your opinion?

Comment author: Will_Newsome 13 February 2012 06:02:12PM 3 points [-]

I don't remember if he has any specific recommendations. I don't know what the best way to test one's abilities would be, but the REG (random event generator) paradigm seems highly conducive to rigorous and thorough experimentation. Alas, I forget what the literature says about pseudo-random generators. I can't in good faith recommend psi experiments; on the one hand, if psi is for real then we're probably doing it all the time without realizing it (which is, I think, the typical Eastern perspective), and on the other hand it seems like a generally bad idea to go out of one's way to play around with a little-understood, perhaps-agentic process. Playing with Thor seems significantly dumber than playing with fire.
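[Editor's note: for readers unfamiliar with how REG-style results are scored, here is a minimal illustrative sketch, not anything from the thread or from Kennedy. It assumes each trial is a 50/50 event, the experimenter records "hits", and the question is whether the hit count deviates from chance; the function name and trial counts are made up for illustration.]

```python
# Sketch: exact two-sided binomial test for an REG-style self-experiment.
# Each trial is assumed to be a fair 50/50 event; we ask how surprising
# the observed hit count is under pure chance.
from math import comb

def binomial_two_sided_p(hits, trials, p=0.5):
    """Exact two-sided p-value: total probability of all outcomes at
    least as improbable as the observed one under chance rate `p`."""
    observed = comb(trials, hits) * p**hits * (1 - p)**(trials - hits)
    total = 0.0
    for k in range(trials + 1):
        prob = comb(trials, k) * p**k * (1 - p)**(trials - k)
        if prob <= observed + 1e-12:  # small tolerance for float rounding
            total += prob
    return min(total, 1.0)

# 520 hits in 1000 trials sounds impressive but is well within chance;
# 600 hits in 1000 trials would be a genuinely extreme deviation.
print(binomial_two_sided_p(520, 1000))
print(binomial_two_sided_p(600, 1000))
```

The point of the sketch is only that the scoring side of such experiments is statistically unremarkable; the thread's dispute is about whether any such score could ever be communicated as evidence, not about how to compute it.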