
Abandoning Cached Selves to Re-Write My Source Code: Partially, I've Become Unstable

6 diegocaleiro 10 October 2012 05:47PM

For a long time I've cared a great deal about the preferences of my past selves.

Rules I established in childhood became sacred, much like laws (I can't find the Sequences post in which Yudkowsky is amazed that some things are considered good just because they are old), and that caused interesting, unusual life choices, such as not wearing formal shoes and suits.

I was spending more and more time doing what my previous selves thought I should; in a sense, I was composed mostly of something akin to what Anna Salamon and Steve Rayhawk called Cached Selves.

That meant more dedication to long-term issues (longevity, cryonics, immortality) and more dedication to spatially vast issues (the Singularity, x-risk, transhumanism).

It also meant less dedication to the parts of one's self that have a shorter life-span, such as the instantaneous gratification of the philosophical traditions of the East (Buddhism, Hinduism) and some hedonistic traditions of the West (psychedelia, selfish instantaneous hedonism, sex and masturbation-ism, drugs-isms, thrill-isms).

And less dedication to time spans such as three months: personal projects that are visible, completable, and doable at that scale.

This process of letting my past decisions trump my current decisions, feelings, emotions, and intuitions was very fruitful for me, and for a long time I thought (and still think) it made my life better than the lives of most people around me (schoolmates, university peers, theater friends, etc.; not necessarily the people I chose to hang out with, since, after all, I selected those!).

At some point more recently (and I'm afraid this might happen to the Effective Altruist community and the immortalist community of Less Wrong), I started feeling overwhelmed, a slave to "past me", even though many of "past me's" orders were along the lines of "maximize other people's utility; help everyone the most, regardless of what those around you are doing".

Then the whole edifice crumbled, and I took two days off from everything to go to a hotel in the woods and think and write alone, to figure out what my current values are.

I wrote several pages and thought about a lot of things. More importantly, I quantified the importance I give to different time-spans of my self (say, 30 points to life goals, 16 points to instantaneous gratification, 23 points to three-month goals, etc.). I also quantified differently sized circles of altruism/empathy (X points for immediate family, Y points for extended family, Z points for near friends, T points for smart people around the globe, U points for the bottom billion, K points for aliens, A points for animals, etc.).
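For illustration, point allocations like the ones above can be turned into fractions of an attention budget by simple normalization. This is a minimal sketch: the point values are the ones quoted in the text, while the function and category names are hypothetical.

```python
# Normalize point allocations over time-spans into fractions of an
# attention budget. Point values are the ones quoted above; the helper
# itself (allocate) is a hypothetical illustration.
def allocate(points):
    total = sum(points.values())
    return {name: value / total for name, value in points.items()}

timespans = {"life_goals": 30, "instant_gratification": 16, "three_month_goals": 23}
shares = allocate(timespans)
# life goals get 30/69 of the budget, instantaneous gratification 16/69, etc.
```

The same normalization would apply to the circles of altruism/empathy once the X, Y, Z, ... placeholders were given actual values.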

Knowing my past commitment to past selves, I expected these new quantificational regulatory forces I had just created to take over and cause me to spend my time in proportion to their now-known quantities. In other words, I allowed myself a major change, a rewriting that dug deeper into my source code than previous rewritings. And I expected the consequences to be of the same kind as those of previous rewritings.

It seems I was wrong. I've become unstable. Trying to give an outside description of the algorithm as it feels from the inside: the natural order of attention allocation which I had, like a blacksmith, annealed over the years has crumbled. Instead, I find myself prone to an evolutionary fight between several distinct desires of internal selves; a mix of George Ainslie's picoeconomics and plain neural Darwinism/multiple drafts.

Such instability, if for no other reason than hormonal ones, is bound not to last long. But thus far it has carried me into Existentialism audiobooks, into considering a vagabonding lifestyle as an alternative to a utilitarian lifestyle, and into considering a personality dissolution into whatever is left of one's personality when we "allow it" (emotionally) to dissolve and reforge itself.

The instability doesn't cause anxiety, sadness, fear, or any other negative emotion (though I'm at the extreme tail of the happiness set-point, the equivalent in happiness of having an IQ of 145, three standard deviations above the mean). Contrariwise: it is refreshing and gives a sense of freedom and choice.

This post can be taken to be several distinct things for different readers. 

1) A warning for people committed to a utilitarian lifestyle that allowing deep changes causes an instability you may not want to let your future self fall into.

2) A tale of a self free of enslavement to its past (if only for a short period of time), feeling well, relieved, and open to new experiences. That is, a kind of unusual suggestion for unusual people who are at an unusual time of their lives.

(Note: because of the unusual set-point thing, positive psychology advice should be discarded as a basis for arguments; I've already reached ~0 marginal returns after 2000 pages of it.)

3) This is the original intention of writing: I wanted to know the arguments in favor of a selfish vagabonding lifestyle versus the arguments in favor of the utilitarian lifestyle, because this is a particularly open-minded moment in my life and I feel less biased than at most other times. For next semester, assume money is not an issue (both the vagabond and the utilitarian lifestyles are cheap, as opposed to "you have a million dollars"). So, what arguments would you use to decide that yourself?

Fireplace Delusions [LINK]

32 mas 03 February 2012 04:13AM

 

Sam Harris, in his recent article The Fireplace Delusion, tries to make you feel what it's like to react to a cached belief being irreparably destroyed. Just in case you forgot what your apostasy (if you had one, of course) was like in its early stages.

 

What are some of the Fireplace Delusions you've come across in your days?

 

EDIT: WOODSMOKE HEALTH EFFECTS

Convincing my dad to sign up for Alcor: Advice?

5 EphemeralNight 25 September 2011 10:01PM

Since it came to my attention that signing up for cryonics is not as pointless as I'd once thought, I've been pondering how to sell my dad on the idea.

This is somewhat urgent for a couple of reasons. First, he's already pushing sixty and would meet increased resistance in acquiring another life insurance policy, at a much steeper rate than I would. Second, even given his age, he could easily afford to sign us both up, and after some consideration, convincing him seems like a more efficient path to being signed up myself than trying to arrange it for myself alone. And, well, he's my dad, and while he has his flaws, he's kind of awesome.

The sticking point, I can easily predict, will be getting past his Cached Skepticism towards the concept of cryonics. He is proud of being Skeptical; it's important to his self-concept, so I need to hit him with an opening he can't easily dismiss. I predict that if I simply linked him to Alcor's webpage, absolutely nothing would happen. I need something that will motivate him to investigate.

The frustrating part is that I'm quite sure that if he'd never heard of cryonics before, he'd have no resistance to the idea beyond the basic pain of dispelling the emotional numbness towards a Certain Doom by suggesting it might not be certain after all. Unless my model of him is very inaccurate, his Cached Skepticism of cryonics is the only notable obstacle. I would appreciate any recommendations on which articles, if any, would be best for initially getting through it.

"I know I'm biased, but..."

22 [deleted] 10 May 2011 08:03PM

Inspired by: The 5-Second Level, Knowing About Biases Can Hurt People

"I know I'm biased, but..." and its equivalents seem to be relatively common in casual conversation--I've encountered the phrase in classroom discussions, on Internet message boards, and in political arguments. In most cases, "I know I'm biased, but..." is used as a way of feigning humility and deflecting criticism by preemptively responding to accusations of bias. That is, the speaker acknowledges that their argument may be flawed in order to deny their opponent the opportunity to point out particular biases. It's a way of signaling to the audience, "Yes, there are errors in this line of reasoning, but I already know that, so you can't accuse me of being biased."

But as we all know by now, it's not enough to just acknowledge biases--you have to actually correct the error before you can move on. Admitting that your argument is based on bias does not absolve you of your error, and it doesn't make your argument any truer.

Therefore, "I know I'm biased, but..." is a cached thought that we would be better off without. But how can we get rid of it? Tabooing the phrase "I know I'm biased, but..." is not enough, since your brain will probably substitute something similar, such as "I may be wrong, but...", instead of making the appropriate correction. Instead, it is necessary to force your brain to consciously think about the bias instead of instinctively rationalizing the biased argument. This is a skill that operates on the 5-second level: you have to stop your train of thought mid-sentence and think about the situation more clearly. The following should serve as a countermeasure for when you notice yourself thinking, "I know I'm biased, but...":

1) Stop. I'm not ready to proceed. If there's a bias in my argument, bulldozing over it is never the correct solution. I need to just cut myself off in mid-sentence and think about this.

2) Identify the bias. What is this bias that my brain is trying to cover up? Does it have a name? Where have I read about it before? What heuristic am I using that is causing the problem? Do I have any emotional attachment to this argument that might cloud my judgment? How would I feel if this argument was wrong? Where is my information coming from? Did I do a thorough job researching this argument?

3) Think about potential solutions. What heuristic should I be using instead of the one I am using? Can I substitute a quantitative analysis or Bayesian update instead of jumping to a particular conclusion? Do I need to do more research to determine if this argument is true? What other sources of evidence can I consult?

4) Re-analyze using a different method. What happens when I use the heuristics I just thought about instead of the ones I originally used? What pieces of evidence really support my argument? What facts would need to be different for it to be false? Can I compare multiple perspectives on this argument?

5) Re-evaluate the argument. Does the argument still look correct? Does approaching the problem with a different method yield the same results? Have I completely explained away the bias?
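The five steps above can be sketched as a simple checklist routine. This is a hypothetical illustration--the prompts paraphrase the list in the text, and the helper names are my own:

```python
# A minimal checklist runner for the five steps above. STEPS paraphrases
# the numbered list; run_checklist is a hypothetical helper.
STEPS = [
    "Stop: cut yourself off mid-sentence; don't bulldoze over the bias.",
    "Identify the bias: name it, trace its source, check for emotional attachment.",
    "Think about solutions: a better heuristic, more research, a Bayesian update.",
    "Re-analyze with the new method: which evidence actually supports the claim?",
    "Re-evaluate: does the argument survive, and is the bias explained away?",
]

def run_checklist(answer):
    """Walk the steps in order; `answer` is any callable that takes a
    prompt and returns your response (e.g. `input` for interactive use)."""
    return [(step, answer(step)) for step in STEPS]
```

The point of forcing the steps into sequence is the same as in the prose version: step 1 must happen before any of the others, so the rationalization never gets finished.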

An abstract explanation isn't always enough, so here is an example:


"...and that's why," Albert concluded, "the iPhone is absolutely terrible!"

"I know I'm biased," Barry replied, "but the iPhone is the best smartphone on the market!"

Uh-oh, thought Barry. I said that phrase again. Something's not right here. "Hang on a moment..."

Why would I think that the iPhone is the best smartphone on the market? How would I feel if it wasn't the best phone? Well, I'd be kind of annoyed that I spent all that money to buy one. I'd feel disappointed because the advertisement made it look really awesome, and I've always told everyone that it was worth the price. Am I rationalizing this? Hmm, maybe I am rationalizing and I just don't want to believe that I made a bad purchase.

Ok, so what if it is rationalization? What am I supposed to do now? Didn't I read something on LessWrong about this? This feels like "politics is the mind-killer" territory--I should probably be re-thinking my arguments and checking for bias.

But how should I be evaluating the quality of my iPhone? I guess I should ask myself what features I care about--let's pick three. Well, the most important thing to me is service--I make a lot of calls for work and I don't want any of them to be dropped. I want my phone to be durable, too--I'm pretty clumsy and I drop it from time to time. And the phone bill is important too.

Alright, let's add all of this up: The iPhone is pretty fragile, I've already cracked the screen slightly. And it does drop calls sometimes--there might be a network with better coverage, I'm not sure. And the phone bill--my old phone was definitely a lot cheaper, but it also wasn't a smartphone. I'd have to research other networks' coverage and pricing to be sure.

Wow, I might've been wrong about this. That means I wasted a lot of money. And it also means that the iPhone probably isn't "the best" phone out there. Wait, that's not right--it could be the best, but I don't have the evidence to prove it, so my argument isn't right. I have to gather more evidence.

"Are you still there?" Albert frowned in puzzlement. "You kinda fuzzed out there for a second."

"Never mind," said Barry. "What I should have said was, the iPhone doesn't really do all of the things I want it to do. Say, where's the electronics store?"


Next time you catch yourself thinking, "I know I'm biased, but...", don't let your brain finish the sentence--stop that train of thought and analyze it!

Edit: Many commenters have suggested that "I know I'm biased, but..." is sometimes used to signal being open to counterarguments. As a result, it is best to double-check what you (or your discussion partners) are really signaling so that you can respond appropriately.