I have found that the more I use my simulation of HPMOR!Quirrell for advice, the harder it is to shut him up. As with any mental discipline, thinking in particular modes wears thought-grooves into your brain's hardware, and before you know it you've performed an irreversible self-modification. Consequently, I would definitely recommend that anybody attempting to supplant their own personality (for lack of a better phrasing) with a model of some idealized reasoner try to make sure that the idealized reasoner shares your values as thoroughly as possible.
I've now got this horrifying idea that this has been Quirrell's plan all along: to escape from HPMOR to the real world by tempting you to simulate him until he takes over your mind.
In retrospect, I'm kind of glad that my plan to make a Quirrell-tulpa never got off the ground.
Beware acausal trade! Once Eliezer imagined Quirrell, he had to write HPMOR to stop Quirrell from counterfactually simulating 3^^^3 dustspeckings.
Rational agents cannot be successfully blackmailed by other agents that simulate them accurately, and especially not by figments of their own imagination.
Sounds somewhat like exploiting the Fundamental Attribution Error. In our minds, other people (including imagined selves) are not much influenced by external situational factors, so they act more consistently with their internal characteristics. ActualMe is always thrown around by external forces and emotional chaos.
I was trying to think of what a more rational response to this would be, since I agreed with your points and also used a very similar trick. I then came up with 'The rational thing to do is to say you agree, upvote, and then get back to the other tasks you have, rather than spending an hour worrying about a perfect response, which sounds a lot like the very social anxiety she was trying to avoid.'
I agree with your post. Upvoted.
Mine went "See if someone has already stated what you want, then up-vote them and the OP, and be done with it"
I use a variant of this from my tabletop gaming days, ever since I noticed that player characters in RPGs don't really suffer from decision fatigue or hyperbolic discounting in the same way. I simulate myself as a player making choices about the Toggle-character in a game. "If the GM asked me what my character was going to do today/for the next hour/in response to this challenge, what would I say?"
So, I just had a weird turn at work, that's made it obvious that I can't stay here.
And when I ask myself, "what does Protagonist Brent do?", I immediately imagine powering through my flu, putting my most valuable possessions in my car, pointing West, and driving until I reach Berkeley - then finding an apartment and walking into start-ups and big companies and saying "I can code. I just moved here from Idaho. I need a job. What have you got?"
And then I don't do that, because I'm too dizzy to get out of bed, let alone drive 10 hours to Berkeley, and I have no idea where I'd stay, and I only have $3,000 to my name.
Because my imagination does NOT conserve detail, it just builds a narrative.
How do you work around that?
Update:
I've slept, rested, stuffed myself full of multivitamins, and got through my flu. My most necessary possessions are in my car. I am pointed West, with a room waiting for me in Berkeley.
*puts on Blues Brothers glasses*
Hit it.
*salutes* I profoundly appreciate that. So far, there have been zero police chases inside shopping malls, or their metaphorical equivalents.
Content appropriate to the thread:
Invoking what brave, confident Brent would do has been working SWIMMINGLY WELL for me. Absurdly well. Impossibly well. I have literally spent my entire life not understanding the underlying principle behind "fake it till you make it", but now I get it instinctively.
Thank you all.
How do you work around that?
Hello Past Brent, this is Future Brent, aka the actor playing Protagonist Brent on the popular hit show, "Ialdabaoth".
Here's what you're missing:
"Montage".
It looks like Protagonist Brent has to power through recuperation, driving, interviews, hiring, etc. in a matter of weeks because you forget that Protagonist Brent's super-long slogs get edited down into a montage. Six months of work still takes six months, but Protagonist You gets to construct that into a montage-like narrative where the boring parts take up maybe two sentences each, and the cool parts take up minutes to hours of excitedly-narrated epicness.
But I, the actor playing Protagonist Brent, still have to slog through the full six months of work, so that we can pick the best highlights and edit it down in post-production to a few pithy, iconic representations of "this was hard work and there was lots of improvement and moments of triumph". The payoff of the slog is the moments of triumph and the distilled moments of "I can sweat for this", and neglecting them means a fake montage, which means Protagonist Brent doesn't look very epic.
And that itself ...
Taskify your challenges. To continue the metaphor: Protagonists often have lots of adventures/problems/riddles to solve on their way to the end of the book.
You asked Protagonist Brent what he would do and he told you how he would get a job. That's a good start, but don't let him take all the credit while foisting the legwork off onto you! How does Protagonist Brent find somewhere to live? How does he address his financial concerns?
I might also add there's a lot of scope for dramatic imagery if Protagonist Brent rests up for a day or two and then rises from his bed as if from the grave. :)
This is exactly the point of asking "What Would Jesus Do?" Christians are asking themselves what a perfectly moral, all-knowing person would do in this situation, and using the machinery their brains have for simulating a person to find the answer, instead of using the general-purpose reasoner that is so easily overworked. Of course, simulating a person (especially a god) accurately can be kind of tricky. Religious people use similar thoughts to get themselves to do things that they want abstractly but that are hard in the moment: What would I do if I were the kind of person I want to become? What would a perfectly moral, all-knowing person think about what I'm about to do?
I've found something like this useful, especially at work, but hard to calibrate. "What would a less shy kalium do? Tell the CTO that he's wrong, because he's wrong." Sometimes this is a good idea, but sometimes it's not. "What would an optimally shy kalium do?" is not so easy to predict.
This seems to be an extremely powerful method for handling decision fatigue - it's one of the few (maybe the only?) things I've seen on Less Wrong that I'm going to start applying immediately because of the potential I see in it. On the other hand, I doubt it would be so effective for me for handling social anxiety or other emotion-laden situations. A voice in my head telling me to do something that I already know I should do won't make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.
A voice in my head telling me to do something that I already know I should do won't make the emotion go away, and, for me, the obstacle in these sorts of situations is definitely the emotion.
A voice in your head isn't a simulation of what the idealized person would do. What you want your simulation to be is the experience of observing that idealized person actually doing it. Otherwise, you are just thinking (system 2) instead of simulating (system 1).
To put it another way: a voice in your head is Far, a simulated experience is Near -- and Near has far more influence over your emotions (no pun intended).
Great post!
Others have mentioned the HPMOR-style "take a poll of different aspects of your personality," which I have found to be entertaining and useful.
I'd also like to endorse the method for troubleshooting. I got the idea from Ridiculous Fish's blog post from 3 years ago.
When I have a technical problem I'm stuck on, I try to ask myself "What would someone who's smarter than me do?" This is really just "imagine a parody version of person x and see if that causes you to think about the problem in a different way."
I like to c...
I don't carry around a mental model of myself, but I think I will start on it.
I do explicitly carry around a mental model of my boss. Whenever I am working out a deal and I'm not sure about whether to agree to a point, I ask the little boss simulation. One of my boss's sterling qualities is that he makes decisions quickly, much more quickly than I do. Where I tend to gather lots of information and examine nuances, he simplifies. He is also better at (and institutionally more appropriate for) comparing incommensurate values (like different risks and rewards...
I've tried this before and I find it difficult to trust this "other self". What works better for me is to treat "other self" as an information source and then have "real me" make a decision with this information in mind.
The "real me" is sort of like a benevolent dictator. It has to have the final say, but ultimately it's capable of deferring to "other self" when appropriate. Maybe this is just a thing with me ("real me" always has to have the final say, I can't just trust someone else).
Of course, this is just one data point. Other people very well may be different.
I'd be invited to a party. I'd feel paralyzing fear, and a sense of impending doom as I noticed that I likely believed going to the party was the right decision.
Hah! I know that feeling so well. Then when I notice it, I feel doomed AND stupid!
This sounds like a good technique for dealing with social phobias or ADHD, but I hope you don't use it for complex, long-chain-single-point-of-failure personal or ethical problems. Those are areas where emotions stop us from doing terrible things because we forgot to carry the 1 in column 3.
This is a line of development that-- while clearly useful-- seems somewhat hacky and unpromising to me. While I agree that this is likely to yield useful benefits in the short run, it strikes me that fixing one's internal structure in order to produce reliably correct external actions without these sorts of hacks seems more promising in terms of long-term growth and skills.
About a year ago, I thought that lucid dreaming was a great path to rationality. While lucid dreaming is a great way to train the skill of noticing confusion, I no longer recommend it to...
Hm, so I tried thinking about how I could apply this to a problem I have and can't quite see how to do it. Any suggestions? Am I missing or misinterpreting a point or is this just not a good problem to apply this solution to?
Here's the problem: I like rock climbing, but I'm not very good at it. The thing holding me back in the situation I'm most concerned with here (lead climbing outdoors) is primarily that I'm scared. I'm frequently in a position where I'm capable of executing the next move with very high probability, but I don't want to attempt it becaus...
I like this, and I'm going to try it on a decision I've been putting off for years.
(said decision involves whether or not I should tell deeply religious family that I don't believe in god(s))
May I suggest that you make sure all of your money is in accounts that don't also have their names on them? You'd be disappointed by how many ~college-age people get screwed by this.
This trick is pretty powerful. I'm channeling a model of the more confident version of myself to post this comment, rather than just lurking like I normally do.
Thanks a lot!
This resonated with me instantly, thank you!
I now remember, I used to do something similar if I needed to make decisions, even minor decisions, when drunk. I'd say, "what would I think of this decision sober"? If the answer was "it was silly" or "I'd want to do it but be embarrassed" I'd go ahead and do it. But if the answer was "Eek, obviously unsafe", I'd assume my sober self was right and I was currently overconfident.
A structural explanation would be that your map of yourself doesn't correspond well enough to the territory (your actual self), and the rational self is a way to create a new, better map. Using the better map (by consulting the rational self) effectively connects both maps. My understanding of how the brain works lets me guess that the two maps will slowly merge - hopefully fixing whatever was at odds with the old map.
In a way the new map is less detailed, but at least the projection is better - so to speak.
Does this make sense?
Reminds me of http://en.wikipedia.org/wiki/Rubber_duck_debugging
Using an imaginary external source to solve problems internally.
What I'm interested in is whether this method is applicable to social situations as well. I am not a naturally social person, but have studied how people interact and general social behaviors well enough that I can create a simulation of a "socially acceptable helltank".
I already have mental triggers (what I like to call "scripts") in place for a simulation of my rational mind - or rather a portion of my rational mind kept in isolation from bias and metaphorically disconnected from the other parts of my mind to override my "main&q...
I suspect this trick works by not only reducing decision fatigue, but by also offloading rejection fear, which makes this hack appropriate for social anxiety. Fear of embarrassment is one step removed from myself.
I find it interesting that Less Wrong appears to be rediscovering existing ethical theories.
This article argues for a form of virtue ethics arising from utilitarianism - in order to be a good person, simulate an alternate self free of whatever desire is applicable, and then use them as a moral exemplar.
Similarly, Eliezer's arguments for Coherent Extrapolated Volition in FAI bear a striking resemblance to Rousseau's arguments regarding the collective will of a state.
Another example of this that springs to mind is this less-popular post on beeminding sin. ...
I loved the explanation about using it to control your temper. In fact, just after reading the part where you talk about the party, my very first thought was how I could use this to get a hold of my temper. This happened before I finished reading that paragraph.
I just want to say that the title of this post is fantastic, and in a deep sort of mathy way, beautiful. It's probably usually not possible, but I love it when an appropriate title - especially a nice not-too-long one - manages to contain, by itself, so much intellectual interest. Even just seeing that title listed somewhere could plant an important seed in someone's mind.
I imagined what rational me would do a couple of hours ago, and he'd have gotten a head start on next week's workload until he was tired and then started tomorrow off on much better footing (I'm not talking about being a workaholic -- I'm lazy and have kind of fallen behind -- I could stand to work a bit more)
Instead, I read about why the great filter probably doesn't lie between the evolution of a nervous system and dolphin-level intelligence, learned about 'biological dark matter', dismissed it as viruses, undismissed it, learned that it was probably jus...
It's a good way to "gamify" decision pressure such that insecurities and the typical anxiety involved with "I have to admit I'm doing this wrong" don't rear their ugly heads. Instead, you dissolve the sense of self just a bit, with a "I'm a bunch of modules"-approach that makes it oh-so-easier to acknowledge and fix mistakes, since there's less of the monolithic "I did wrong, I am bad"-response triggered.
Nice trick, in short.
I'd say that to a certain degree all of us do this, even if we're not all consciously aware of it. The unintentional use of this "trick" in my view is as obvious as when people imitate others. To the more extreme end, as you describe here, it can be the deliberate and wilful act of simulating the thoughts of another person for apparent guidance in times of stress.
Overall, I'd say this is just a version of thinking of what a "prudent man" would do, or some might even use the term "straw man" (to separate themselves from reali...
I think I've been doing something like this for a long time, but imagining the simulated decision-maker as a "Ghost of Agency and Making Things Better" rather than an idealized version of myself. People seemed to find that a lot more confusing than this, though, so I'm going to start describing it this way instead.
You mean that you don't have an entire Parliament filled with models designed to represent aspects of your own psychology?
You're buggy software running on corrupted hardware. Fork redundant copies and vote.
...I hope that it will one day. I would rather not have to rely on tricks like this. I hope I'll eventually just be able to go straight from noticing dissonance to re-orienting my whole mind so it's in line with the truth and with whatever I need to reach my goals. Or, you know, not experiencing the dissonance in the first place because I'm already doing
I sometimes let imaginary versions of myself make decisions for me.
I first started doing this after a friend told me (something along the lines of) this story. When they first became executive director of their organization, they suddenly had many more decisions to deal with per day than ever before. "Should we hire this person?" "Should I go buy more coffee for the coffee machine, or wait for someone else to deal with it?" "How many participants should attend our first event?" "When can I schedule time to plan the fund drive?"
I'm making up these examples myself, but I'm sure you, too, can imagine how leading a brand new organization might involve a constant assault on the parts of your brain responsible for making decisions. They found it exhausting, and by the time they got home at the end of the day, a question like, "Would you rather we have peas or green beans with dinner?" often felt like the last straw. "I don't care about the stupid vegetables, just give me food and don't make me decide any more things!"
They were rescued by the following technique. When faced with a decision, they'd imagine "the Executive Director of the organization", and ask themselves, "What would 'the Executive Director of the organization' do?" Instead of making a decision, they'd make a prediction about the actions of that other person. Then, they'd just do whatever that person would do!
In my friend's case, they were trying to reduce decision fatigue. When I started trying it out myself, I was after a cure for something slightly different.
Imagine you're about to go bungee jumping off a high cliff. You know it's perfectly safe, and all you have to do is take a step forward, just like you've done every single time you've ever walked. But something is stopping you. The decision to step off the ledge is entirely yours, and you know you want to do it because this is why you're here. Yet here you are, still standing on the ledge.
You're scared. There's a battle happening in your brain. Part of you is going, "Just jump, it's easy, just do it!", while another part--the part in charge of your legs, apparently--is going, "NOPE. Nope nope nope nope NOPE." And you have this strange thought: "I wish someone would just push me so I don't have to decide."
Maybe you've been bungee jumping, and this is not at all how you responded to it. But I hope (for the sake of communication) that you've experienced this sensation in other contexts. Maybe when you wanted to tell someone that you loved them, but the phrase hovered just behind your lips, and you couldn't get it out. You almost wished it would tumble out of your mouth accidentally. "Just say it," you thought to yourself, and remained silent. For some reason, you were terrified of the decision, and inaction felt more like not deciding.
When I heard this story from my friend, I had social anxiety. I didn't have way more decisions than I knew how to handle, but I did find certain decisions terrifying, and was often paralyzed by them. For example, this always happened if someone I liked, respected, and wanted to interact with more asked to meet with me. It was pretty obvious to me that it was a good idea to say yes, but I'd agonize over the email endlessly instead of simply typing "yes" and hitting "send".
So here's what it looked like when I applied the technique. I'd be invited to a party. I'd feel paralyzing fear, and a sense of impending doom as I noticed that I likely believed going to the party was the right decision. Then, as soon as I felt that doom, I'd take a mental step backward and not try to force myself to decide. Instead, I'd imagine a version of myself who wasn't scared, and I'd predict what she'd do. If the party really wasn't a great idea, either because she didn't consider it worth my time or because she didn't actually anticipate me having any fun, she'd decide not to go. Otherwise, she'd decide to go. I would not decide. I'd just run my simulation of her, and see what she had to say. It was easy for her to think clearly about the decision, because she wasn't scared. And then I'd just defer to her.
Recently, I've noticed that there are all sorts of circumstances under which it helps to predict the decisions of a version of myself who doesn't have my current obstacle to rational decision making. Whenever I'm having a hard time thinking clearly about something because I'm angry, or tired, or scared, I can call upon imaginary Rational Brienne to see if she can do any better.
Example: I get depressed when I don't get enough sunlight. I was working inside where it was dark, and Eliezer noticed that I'd seemed depressed lately. So he told me he thought I should work outside instead. I was indeed a bit down and irritable, so my immediate response was to feel angry--that I'd been interrupted, that he was nagging me about getting sunlight again, and that I have this sunlight problem in the first place.
I started to argue with him, but then I stopped. I stopped because I'd noticed something. In addition to anger, I felt something like confusion. More complicated and specific than confusion, though. It's the feeling I get when I'm playing through familiar motions that have tended to lead to disutility. Like when you're watching a horror movie and the main character says, "Let's split up!" and you feel like, "Ugh, not this again. Listen, you're in a horror movie. If you split up, you will die. It happens every time." A familiar twinge of something being not quite right.
But even though I noticed the feeling, I couldn't get a handle on it. Recognizing that I really should make the decision to go outside instead of arguing--it was just too much for me. I was angry, and that severely impedes my introspective vision. And I knew that. I knew that familiar not-quite-right feeling meant something was preventing me from applying some of my rationality skills.
So, as I'd previously decided to do in situations like this, I called upon my simulation of non-angry Brienne.
She immediately got up and went outside.
To her, it was extremely obviously the right thing to do. So I just deferred to her (which I'd also previously decided to do in situations like this, and I knew it would only work in the future if I did it now too, ain't timeless decision theory great). I stopped arguing, got up, and went outside.
I was still pissed, mind you. I even felt myself rationalizing that I was doing it because going outside despite Eliezer being wrong wrong wrong is easier than arguing with him, and arguing with him isn't worth the effort. And then I told him as much over chat. (But not the "rationalizing" part; I wasn't fully conscious of that yet.)
But I went outside, right away, instead of wasting a bunch of time and effort first. My internal state was still in disarray, but I took the correct external actions.
This has happened a few times now. I'm still getting the hang of it, but it's working.
Imaginary Rational Brienne isn't magic. Her only available skills are the ones I have in fact picked up, so anything I've not learned, she can't implement. She still makes mistakes.
Her special strength is constancy.
In real life, all kinds of things limit my access to my own skills. In fact, the times when I most need a skill will very likely be the times when I find it hardest to access. For example, it's more important to consider the opposite when I'm really invested in believing something than when I'm not invested at all, but it's much harder to actually carry out the mental motion of "considering the opposite" when all the cognitive momentum is moving toward arguing single-mindedly for my favored belief.
The advantage of Rational Brienne (or, really, the Rational Briennes, because so far I've always ended up simulating a version of myself that's exactly the same except lacking whatever particular obstacle is relevant at the time) is that her access doesn't vary by situation. She can always use all of my tools all of the time.
I've been trying to figure out this constancy thing for quite a while. What do I do when I call upon my art as a rationalist, and just get a 404 Not Found? Turns out, "trying harder" doesn't do the trick. "No, really, I don't care that I'm scared, I'm going to think clearly about this. Here I go. I mean it this time." It seldom works.
I hope that it will one day. I would rather not have to rely on tricks like this. I hope I'll eventually just be able to go straight from noticing dissonance to re-orienting my whole mind so it's in line with the truth and with whatever I need to reach my goals. Or, you know, not experiencing the dissonance in the first place because I'm already doing everything right.
In the mean time, this trick seems pretty powerful.