Eliezer_Yudkowsky comments on Post ridiculous munchkin ideas! - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
As someone with a tulpa, I figure I should probably share my experiences. Vigil has been around since I was 11 or 12, so I can't effectively compare my abilities before and after he showed up.
He has dedicated himself to improving our rationality, and has been a substantial help in pointing out fallacies in my thinking. However, we're skeptical that this is anything a more traditional inner monologue wouldn't figure out. The biggest apparent benefit is that being a tulpa allows him a greater degree of mental flexibility than I have, making it easier for him to point out and avoid motivated thinking. Unfortunately, we haven't found a way to test this.
I'm afraid he doesn't know any "tricks" like accessing subconscious thoughts or super math skills.
While Vigil has been around for over a decade, I only found out about the tulpa community very recently, so I know very little about it. I also don't know anything about creating them intentionally, he just showed up one day.
If you have any questions for me or him, we're happy to answer.
...just to be clear on this, you have a persistent hallucination who follows you around and offers you rationality advice and points out fallacies in your thinking?
If I ever go insane, I hope it's like this.
Would what's considered a normal sense of self count as a persistent hallucination?
See "free will".
This is strikingly similar to Epictetus' version of Stoic meditation whereby you imagine a sage to be following you around throughout the day and critiquing your thought patterns and motives while encouraging you towards greater virtue.
Related:
— Edsger W. Dijkstra
That sounds similar. Though I'm afraid I've had difficulty finding anything about this while researching Epictetus.
The hallucination doesn't have auditory or visual components, but does have a sense of presence component that varies in strength.
Indeed, this style of insanity might beat sanity.
Tulpas, especially as construed in this subthread, remind me of daimones in Walter Jon Williams' Aristoi. I've always thought that having / being able to create such mental entities would be super-cool; but I do worry about detrimental effects on mental health of following the methods described in the tulpa community.
You are obligated by law to phrase those insights in the form "If X is Y, I don't want to be not-Y."
From the sound of it, it seems you could make that happen deliberately, and without needing to go insane. No need for hope.
We also have internet self-reports from people who tried it that they are not insane.
One rarely reads self-reports of insanity.
Yes, their attorney usually reports this on their behalf.
If you're interested in experimenting...
Well, wait. Is there some way of flagging "potentially damaging information that people who do not understand risk-analysis should NOT have access to" on this site? Because I'd rather not start posting ways to hack your wetware without validating whether my audience can recover from the mental equivalent of a SEGFAULT.
In my position, I should experiment with very few things that might be unsafe over the course of my total lifetime. This will probably not be one of them, unless I see very impressive results from elsewhere.
*nod* That's probably the most sensible response.
To help others understand the potential risks, the creation of a 'tulpa' appears to involve hacking the way your sense-of-self (what current neuroscience identifies as a function of the right inferior parietal cortex) interacts with your ability to empathize and emulate other people (the so-called mirror neuron / "put yourself in others' shoes" modules). Failure modes involve symptoms that mimic dissociative identity disorder, social anxiety disorder, and schizophrenia.
I am absolutely fascinated, although given the lack of effect that any sort of meditation, guided visualisation, or community ritual has ever had on me, I doubt I would get anywhere. On the other hand, not being engaged in saving the world and its future, I don't have quite as much at risk as Eliezer.
A MEMETIC HAZARD warning at the top might be appropriate, as is requested for basilisk discussion.