All of LauraABJ's Comments + Replies

Thank you for saying this outright. I was appalled by Scott's lack of epistemic rigor and by how irresponsibly he used his widely read platform and his trust as a physician to fool people into thinking that cutting out a major organ carries very little risk. Maybe he really did just fool himself, but I don't think that is an excuse when your whole deal is being the guy with good epistemics who looks at medical research. A comment he made later about guilting 40,000 randomly selected Americans into donating indicates clearly that he has an Agenda.... (read more)

It seems to me that one needs to place a large amount of trust in one's future self to implement such a strategy. It also requires that you be able to predict your future self's utility function. If you have a difficult time predicting what you will want and how you will feel, it becomes difficult to calculate the utility of any given precommitment. For example, I would be unconvinced that deciding to eat a donut now means that I will eat a donut every day, or that not eating a donut now means I will not eat a donut every day. Knowing that I want a don... (read more)

1David_Allen
I worry about making precommitments for many of the reasons you bring up; our natural tendency toward hyperbolic discounting makes sense when we need to reason in the face of uncertain risks. But I have found that when I focus on long-term uncertainty, I tend to lock myself into my current behavior even when it works against my current goals. To avoid the uncertainty inherent in making a commitment, my approach is to make a choice for right now -- based on my current goals. By choosing to not eat a donut right now, I am not deciding anything about my behavior tomorrow. Tomorrow I may have to repeat the same process of reasoning; if my state tomorrow is similar to my state today I will probably make the same choice, but if it isn't similar I may make a different choice. No guilt, no fuss. I am using my assumption -- that I will always make the same choice in similar circumstances -- to help scope and quantify the consequences of my alternatives. In the case of my example, it allows me to scale the consequences to a level I can more easily compare to my goals. In a year I want to weigh 10 lbs less than I do now, so eating a year's worth of donuts -- roughly 13 lbs' worth of calories -- appears to work against that goal. This approach allows me to make an immediate decision which supports my long-term goals, while only experiencing the actual risk of this specific choice, and not the combined risk of all similar future choices. If I discover unexpected negative consequences from this choice, then my state will have changed, which I will take into account the next time I face similar circumstances. For example, if I discover that not eating a 180-calorie donut in the morning leads to me eating 300 additional lunch and dinner calories, then clearly I will start choosing to eat donuts in the morning. But in fact I discovered that the opposite was generally true; when I ate a donut in the morning I tended to eat 200-300 more calories during the rest of the day. By lowering my perceived exposure to risk I fel...
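A quick back-of-the-envelope sketch makes the 13 lb figure concrete (the donut-days-per-year count and the 3,500 kcal-per-pound rule of thumb below are illustrative assumptions, not numbers stated in the comment):

```python
# Back-of-the-envelope check on the "13 lbs of calories as donuts" figure.
donut_calories = 180          # the 180-calorie donut from the example
donut_days_per_year = 250     # assumption: a donut on workdays only
kcal_per_pound = 3500         # common rule of thumb for a pound of body fat

yearly_calories = donut_calories * donut_days_per_year  # 45,000 kcal
pounds_equivalent = yearly_calories / kcal_per_pound    # ~12.9 lb

print(f"{yearly_calories} kcal/year is roughly {pounds_equivalent:.1f} lb")
```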
6[anonymous]
This is exactly right, which is why I suggest documenting how you respond to different behaviors. I think that it's only partly deciding to be predictable; it's also noticing in which ways you already ARE predictable. In a lot of aspects of life there are patterns in your behavior; you just haven't noticed them yet. I pretty much know how little I can eat before it becomes unsustainable/distracting. This is the advantage of actually keeping a record. (I might be able to push below that threshold but at the moment it doesn't seem worthwhile.) I also have noticed that I eat better when constrained by rules than when trying to follow "good judgment." EAT DONUT, in particular, is bad for me. I've also made observations about how I feel on different amounts of sleep, and how many hours of work I can maintain before going crazy. (It's much easier for me to "push" on my work capacity than to "push" on food past a certain point.) In other words: it's worth it to try to know yourself better so that you know what "EAT DONUT" will do to you.
1Swimmer963 (Miranda Dixon-Luinenburg)
I suspect that some people behave more predictably and/or can predict their own behaviour better than others (I don't think those two things are the same, or necessarily correlated). Which would make it easier to be a TDT agent. Mood stability might be a factor.

I know that feeling, but I don't know how conscious it is. Basically, when the outcome matters in a real, immediate way and is heavily dependent on my actions, I get calm and go into 'I must do what needs to be done' mode. When my car lost traction in the rain and spun on the highway, I probably saved my life by reasoning how best to get control of it, pumping the brake, and getting it into a clearing away from other vehicles/trees, all within a time frame that was under a minute. Immediately afterwards the thoughts running through my head were not, 'Oh f... (read more)

LauraABJ140

Ok- folding a fitted sheet is really fucking hard! I don't think that deserves to be on that list, since it really makes no difference whatsoever in life whether you properly fold a fitted sheet or just kinda bundle it up and stuff it away. Not being able to deposit a check, mail a letter, or read a bus schedule, on the other hand, can get you in trouble when you actually need to do those things. Here's to not caring about linen care!

2michaelkeenan
Here is a YouTube video (2:26) demonstrating how to fold a fitted sheet.
1David_Gerard
I want to know how to put a cover on a duvet (doona, quilt) without feeling like I'm going to pop a vertebra.

That's kind of my point -- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend not to involve 'perfect' predictors, but rather other flawed agents. The decision to cooperate or not cooperate is thus dependent on the calculated utility of doing so.

0ata
Right, I was mainly responding to the implication that TDT would be to blame for that wrong answer.

"I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand."

Agreed. It would be like opening the first box, finding the million dollars, and then having someone explain Newcomb's problem to you as you consider whether or not to open the second. My thought would be, "Ha! Omega was WRONG!!!!" as I laughed and dove into the second box.

edit: Because there was no contract made between TDT agents before the first box was opened, there seems to be no reason to honor that contract, which was drawn afterwards.

Ok, so as I understand timeless decision theory, one wants to honor the precommitments that one would have made if the outcome actually depended on the answer, regardless of whether the outcome actually depends on the answer. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless decision theoretic agents (including your future selves), and therefore big wins can be had all around for all, especially when trying to predict your own future behavior.

So, if you buy the idea that... (read more)

3ata
You can't change the form of the problem like that and expect the same answer to apply! If, when you two-box, Omega has a 25% chance of misidentifying you as a one-boxer, and vice versa, then you can use that in a normal expected utility calculation. If you one-box, you have a 75% chance of getting $1 million, 25% nothing; if you two-box, 75% $.5 million, 25% $1.5 million. With linear utility over money, one-boxing and two-boxing are equivalent (expected value: $750,000), and given even a slightly risk-averse dollars->utils mapping, two-boxing is the better deal. (I don't think TDT disagrees with that reasoning...)
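A minimal sketch of that expected-value arithmetic, assuming linear utility over dollars as in the comment:

```python
# Expected values for the noisy-predictor variant: Omega misclassifies
# the agent with probability 0.25.
p_correct = 0.75

# One-box: $1M if correctly tagged a one-boxer, else nothing.
ev_one_box = p_correct * 1_000_000 + (1 - p_correct) * 0

# Two-box: $0.5M if correctly tagged a two-boxer,
# $1.5M if mistaken for a one-boxer.
ev_two_box = p_correct * 500_000 + (1 - p_correct) * 1_500_000

print(ev_one_box, ev_two_box)  # 750000.0 750000.0 -- a tie under linear utility
```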

This is a truly excellent post. You bring the problem we are dealing with within a graspable inferential distance, and you set up a mental model that essentially asks us to think like an AI -- and it succeeds. I haven't read anything that has made me feel the urgency of the problem as much as this has in a really long time...

0jacob_cannell
Thank you! Your encouragement is contagious.

This is true. We were (and are) in the same social group, so I didn't need to go out of my way for repeated interaction. Had I met him once and he failed to pick up my sigs, then NO, we would NOT be together now... This reminds me of a conversation I had with Silas, in which he asked me, "How many dates until....?" And I stared at him for a moment and said, "What makes you think there would be a second if the first didn't go so well?"

4wedrifid
By the ellipsis do you mean 'sex', and indicate that lack of it on the first date constitutes a failure? (Good for you if you know what you want!)
LauraABJ120

Self help usually fails because people are terrible at identifying what their actual problems are. Even when they are told! (Ahh, sweet, sweet denial.) As a regular member of the (increasingly successful) OB-NYC meetup, I have witnessed a great deal of 'rationalist therapy,' and frequently we end up talking about something completely different from what the person originally asked for therapy for (myself included). The outside view of other people (preferably rationalists) is required to move forward on the vast majority of problems. We should also no... (read more)

2curiousepic
Is this because you performed some sort of "root cause analysis", or simply where the conversation strayed?

You are very unusual. I love nerds too, and am currently in an amazing relationship with one, but even I have my limits. He needed to pursue me or I wouldn't have bothered. I was quite explicitly testing, and once he realized the game was on, he exceeded expectations. But yeah, there were a couple of months there when I thought, 'To hell with this! If he's not going to make a move at this point, he can't know what he's doing, and he certainly won't be any good at the business...'

[anonymous]250

You are very unusual. I love nerds too, and am currently in an amazing relationship with one, but even I have my limits. He needed to pursue me or I wouldn't have bothered.

If I hadn't already had good evidence that he was crazy about me, I might have gone for more of that sort of testing, I don't know.

At the time I had this idea that I was going to be San Francisco's real-life superheroine. I would get a cape and a mask and call myself Mistra. I went as far as enrolling in a first-responder course and a Wing Chun class. I told Sam (now my husband, but ... (read more)

1wedrifid
A couple of months. Even that is a little unusual. :)

Are you intending to do this online or meet in person? If you are actually meeting, what city is this taking place in? Thanks.

0Morendil
Excellent question, thanks. I can only offer to help with the online version; I live in France, where only a few LessWrongers reside. And there's nothing to prevent the online group from having a F2F continuation. I'll ask people to say where they are.
LauraABJ190

I agree that these virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend on LessWrong in which popular modes of thinking are first shunned as being irrational and not based on truth, only to be readopted later as being more functional for achieving one's stated goals. I think this process is important, because it allows one to rationally evaluate which 'irrational' models lead to the best outcome.

2gwern
As one of the rationalist quote threads said,
5fburnaby
This also fits my (non-LW) experience very well. There's that catchy saying: "evolution is smarter than you are". I think it probably also extends somewhat to cultural evolution. Given that our behaviour is strongly influenced by these, I think we should expect to 'rediscover' much of our own biases and intuitions as useful heuristics for increasing instrumental rationality under some fairly familiar-looking utility function.

It seems that one way society tries to avoid the issue of 'preemptive imprisonment' is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.

LauraABJ360

Dear Tech Support, Might I suggest that the entire Silas-Alicorn debate be moved to some meta-section. It has taken over the comments section of an instrumentally useful post, and may be preventing topical discussion.

6Violet
The whole affair smells quite a lot like harassment and someone not being content when asked to stop.
Maelin270

Can somebody nonpartisan give us the Cliff's Notes of the whole mess? I tried reading it. Then I tried skimming it. It seems to rely on some whole pre-existing unpleasant dynamic between several commenters of which I am currently blissfully unaware, and it also looks quite seriously dull.

It also looks pretty damn childish, despite having lots of fun mature-sounding rationalist words. A silly playground argument is still a silly playground argument.

Are we really going to do this kind of thing on LessWrong now? Nothing is going to turn away non-committed me... (read more)

-3[anonymous]
Who found the post useful? Alicorn didn't.
4cousin_it
Seconded.

I have always been curious about the effects of mass death on human genetics. Is large-scale death from plague, war, or natural disaster likely to have much effect on the genetics of cognitive architecture, or are outcomes generally too random? Is there evidence for what traits are selected for by these events?

0[anonymous]
I'm also interested in Nanani's question below, with a specific emphasis on human-caused mass death selecting for specific characteristics. For example, the Cambodian purges of intellectuals or the Communist purges of successful businesspeople. Are these too tenuous a proxy for genes to cause long-term change in alleles, or did the Cambodians and Communists do long-term harm to their genetic legacy?
7gcochran
Too random to have much effect, I should think. And at the same time, not awful enough to reduce the population to the point where drift would become important. Unless we're talking asteroid impacts. One can imagine exceptions. For example, if alleles that gave resistance to some deadly plague had negative side effects on intelligence, then you'd see an effect. Note that negative side effects are much more likely than positive side effects. I know of some neat anecdotal exceptions. Von Neumann got out of Germany in 1930, while the getting was good. When a friend said that Germany was oh-so-cultured and that there was nothing to worry about, Von Neumann didn't believe it. He started quoting the Melian dialogue - pointed out that the Athenians had been pretty cultured. High intelligence helped save his life.
0Nanani
Seconded, but with a request for contrast, if possible, with human-caused mass death such as invasion by conquering hordes. What effect do such phenomena have at the genetic level wrt cognition, as opposed to cultural or linguistic transmission?

Most people commenting seem to be involved in science and technology (myself included), with a few in business. Are there any artists or people doing something entirely different out there?

To answer the main question, I am an MD/PhD student in neurobiology.

3Alicorn
I draw and write, but only as a hobby. Does that count?

Aww, this made my night! Welcome to all!

Sure, one can always look at the positive aspects of reality, and many materialists have even tried to put a positive spin on the inevitability of death without an afterlife. But it should not be surprising that what is real is not always what is most beautiful. There is a panoply of reasons not to believe things that are not true, but greater aesthetic value does not seem to be one of them. There is an aesthetic value in the idea of 'The Truth,' but I would not say that this outweighs all of the ways in which fantasy can be appealing for most people.... (read more)

Good post, but I think what people are often seeking in the non-material is not so much an explanation of what they are, but a further connection with other people, deities, spirits, etc. In a crude sense, the Judeo-Christian God gives people an ever-present friend who understands everything about them and always loves them. Materialism would tell them, 'There is no God. You have found that talking to yourself makes you feel that you are unconditionally loved, but it's all in your head.'

On a non-religious note, two lovers may feel that they have bonded... (read more)

2bogus
It depends. To those wise enough to take joy in the merely real, the materialistic explanation could be a challenge to actually become more empathetic and communicative towards their lovers. An alief of communion and transcendence can also enhance trustworthiness and cooperation, which are generally sought in any love relationship. By contrast, if the 'spiritual' explanation were real, it would probably lose its charm and even be resented by some as a loss in autonomy, just as fire-breathing dragons and lightning spells might become boring and unexciting in a world where magic actually worked.

While not everyone experiences the 'god-shaped hole,' it would be dense of us not to acknowledge the ubiquity of spirituality across cultures just because we feel no need for it ourselves (feel free to replace 'us' and 'we' with 'many of the readers of this blog'). Spirituality seems to be an aesthetic imperative for much of humanity, and it will probably take a lot of teasing apart to determine which aspects of it are essential to human happiness and which parts are culturally inculcated.

3NancyLebovitz
I think Core Transformation offers a plausible theory. People are capable of feeling oneness, being loved (without a material source), and various other strong positive emotions, but are apt to lose track of how to access them. Dysfunctional behavior frequently is the result of people jumping to the conclusion that if only some external condition can be met, they'll feel one of those strong positive emotions. Since the external condition (money, respect, obeying rules) isn't actually a pre-condition for the emotion, and the belief about the purpose of the dysfunctional behavior isn't conscious, the person keeps seeking joy or peace or whatever in the wrong place. Core Transformation is based on the premise that it's possible to track the motives for dysfunctional behavior back to the desired emotion and give people access to the emotion -- the dysfunctional behavior evaporates, and the person may find other parts of their life getting better. I've done a little with this system -- enough to think there's at least something to it.
4mattnewport
Well, coming back to the original comment I was responding to: I don't feel that way, despite being a thoroughgoing materialist for as long as I can remember being aware of the concept. I also don't really see how believing in the 'spiritual' or non-material could change how I feel about these concepts. It does seem to be somewhat common for people to feel that only spirituality can 'save' us from feeling this way but I don't really get why. I acknowledge that some people do see 'spirituality' (a word that I admittedly have a tenuous grasp on the supposed meaning of) as important to these things which is why I'm postulating that there is some difference in the way of thinking or perhaps personality type of people who don't see a dilemma here and those for whom it is a source of tremendous existential angst.

Ok, so I am not a student of literature or religion, but I believe there are fundamental human aesthetic principles that non-materialist religious and holistic ideas satisfy in our psychology. They try to explain things in large concepts that humans have evolved to easily grasp, rather than in the minutiae and logical puzzles of reality. If materialists want these memes to be given up, they will need to create equally compelling human metaphor, which is a tall order if we want everything to convey reality correctly. Compelling metaphor is frequently incor... (read more)

Jack120

Why produce new metaphors when we can subvert ones we already know are compelling?

For it is written: The Word of God is not a voice from on High but the whispers of our hopes and desires. God's existence is but His soul, which does not have material substance but resides in our hearts and the Human spirit. Yet this is not God's eternal condition. We are commanded: for the God without a home, make the universe His home. For the God without a body, make Him a body with your own hands. For the God without a mind, make Him a mind like your mind, but worthy of... (read more)

3mattnewport
I'm wondering whether your statement is true only when you substitute 'some people's' for 'our' in 'our psychology'. I don't feel a god-shaped emotional hole in my psyche. I'm inclined to believe byrnema's self-report that she does. I've talked about this with my lapsed-Catholic mother and she feels similarly, but I just don't experience the 'loss' she appears to. Whether this is because I never really experienced much of a religious upbringing (I was reading The Selfish Gene at 8; I've still never read the Bible), or whether it is something about our personality types or our knowledge of science, I don't know, but there appears to be an experience of 'something missing' in a materialist world view amongst some people that others just don't seem to have.

" The negative consequences if I turn out to be wrong seem insignificant - oh no, I tried to deceive myself about my ability to feel differently than I do!"

Repression, anyone? I think directly telling yourself, "I don't feel that way, I feel this way!" can be extremely harmful, since you are ignoring important information in the original feeling. You are likely to express your original feelings in some less direct, more destructive, and of course less rational way if you do this. A stereotypical example is that of a man deciding that... (read more)

0Jonathan_Graehl
I'm also not convinced that repression is real. I can't imagine deceiving myself about how I feel in a moment (lying to others, yes, but what's more obvious to me than how I feel?). I do believe people can self-deceive about the reason that they're angry/sad/etc. and attribute that lingering emotion to innocent bystanders instead. Maybe that's what you mean.
0Jonathan_Graehl
That's plausible: "feeling is because of dumb reason X" -> feeling retreats -> "I must have been right." I just don't trust it entirely.

We discussed a similar idea in reference to Godzilla, namely what kind of evidence we would need to believe that 'magical' elements existed in the world. The point you made then was that even something as far outside our scientific understanding as Godzilla would be insufficient evidence to change our basic scientific world view, and that such evidence might not exist even in theory. I think this post could be easily improved by an introduction explaining this point, which you currently leave as an open question at the end.

Monroe, NY (though he is not a Hassid!)

It's not that they have a strict prohibition on pets; it's more a general disapproval rooted in appeals to cleanliness. I don't know how the super-orthodox interpret the Torah on this matter.

2JoshuaZ
This isn't an issue stemming from anything in the Torah. Rather, a dislike of dogs likely stems from anti-Semites in Eastern Europe having their dogs attack Jews, and later from the Nazis' use of dogs to keep concentration camp inmates in line. However, there is some connection to cleanliness issues as well. Some people claim that the Jewish home should mirror the historical Temple in Jerusalem and thus should not have any non-kosher animals in it at all. See this essay which discusses this in more detail.
LauraABJ580

I would find this argument much more convincing if it were supported by people who actually have children. My mother goes berserk over a smiling infant in a way I cannot begin to comprehend (I am usually afraid I will accidentally hurt them). My husband, likewise, has an instant affinity for babies and always tries to communicate and play with them. He was raised Jewish with the idea that it is unclean to have animals in the home and does not find animals particularly adorable. In our culture we are inundated with anthropomorphised images of animals in ... (read more)

0Nisan
Where is he from, if you don't mind my asking? The Jewish cultures in the United States that I'm familiar with are okay with pets.
4inklesspen
Other hominids have been known to keep pets. I would not be surprised if cetaceans were capable of this as well, though it would obviously be more difficult to demonstrate.
LauraABJ260

Something like this is useful for the types of data points patients would have no reason to self-deceive over; however, I worry that the general tendency for people to make their 'data' fit the stories they've written about themselves in their minds will promote superstitions. For example, a friend of mine is convinced that the aspartame in diet soda caused her rosacea/lupus. She's sent me links to chat rooms that have blamed aspartame for everything from diabetes to Alzheimer's, and it's disturbing to see the kind of positive feedback loops that are cre... (read more)

PhilGoetz110

In spite of chat rooms dedicated to blaming diet soda for every conceivable health problem and the fall of American values, no scientific study to date has shown ANY negative side effect of aspartame even at the upper bounds of current human consumption.

And in spite of those studies, I get a terrible splitting headache within minutes of drinking a diet soda containing aspartame.

I'm in the middle of preparing a proposal that explains one way in which all previous aspartame studies are flawed. Sorry, not going to explain it now. Aspartame studies are ac... (read more)

I think the key is that most people don't care whether or not AGW is occurring unless they can expect it to affect them. Since changing policy will negatively affect them immediately via increased taxes, decreased manufacturing, etc., it's easier to just say they don't believe in AGW, period. If the key counter-AGW measure on the table were funding for carbon-capture research, I think many fewer people would claim that they didn't believe in AGW.

My take on global warming is that no policy that has significant impact on the problem will be implemented until ... (read more)

1wedrifid
And by 'them', they don't necessarily even mean 'future them'. They mean 'the status of them in the relatively near future'.
-1CronoDAS
I agree.

There were some fantastic links here. Thank you!

Does anyone here know what the breakdown is among cryonics advocates between believing that A) in the future cryopreserved patients will be physically rejuvenated in their bodies and B) in the future cryopreserved patients will be brain-scanned and uploaded?

I think there is a reasonable probability of effective cryopreservation and rejuvenation of a mammal (at least a mouse) in the next 25 years, but I think our ability to 'rejuvenate' will be largely dependent on the specific cryonics technologies develop... (read more)

Yes - but your two-boxing didn't cause i=0; rather, the million was there because i=0. I'm saying that if (D or E) = true and you get a million dollars, and you two-box, then you haven't caused E=0. E=0 before you two-boxed, or if it did not, then Omega was wrong and thought D = one-box, when in fact you are a two-boxer.

2Gary_Drescher
Everything you just said is true.* Everything you just said is also consistent with everything I said in my original post. *Except for one typo: you wrote (D or E) instead of (D xor E).

No, I still don't get why adding in the ith-digit-of-pi clause changes Newcomb's problem at all. If Omega says you'll one-box and you two-box, then Omega was wrong, plain and simple. The ith digit of pi is an independent clause. I don't see how one's desire to make i=0 by two-boxing after already getting the million is any different from one's wanting to make Omega wrong by two-boxing after getting the million. If you are the type of person who, after getting the million, thinks, "Gee, I want i=0! I'll two-box!" then Omega wouldn't have given y... (read more)

1Gary_Drescher
If D=false and E=true and there's $1M in the box and I two-box, then (in the particular Newcomb's variant described above) the predictor is not wrong. The predictor correctly computed that (D xor E) is true, and set up the box accordingly, as the rules of this particular variant prescribe.

I'm not clear at all what the problem is, but it seems to be semantic. It's disturbing that this post can get 17 upvotes with almost no (2?) comments actually referring to what you're saying - indicating that no one else here really gets the point either.

It seems you have an issue with the word 'dependent' and the definition that Eliezer provided. Under that definition, E (the ith digit of pi) would be dependent on C (our decision to one- or two-box) if we two-boxed and got a million dollars, because then we would know that E = 0, and we would not have kno... (read more)

5Gary_Drescher
Sorry, the above post omits some background information. If E "depends on" C in the particular sense defined, then the TDT algorithm mandates that when you "surgically alter" the output of C in the factored causal graph, you must correspondingly surgically alter the output of E in the graph. So it's not at all a matter of any intuitive connotation of "depends on". Rather, "depends on", in this context, is purely a technical term that designates a particular test that the TDT algorithm performs. And the algorithm's prescribed use of that test culminates in the algorithm making the wrong decision in the case described above (namely, it tells me to two-box when I should one-box).
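For concreteness, here is a small enumeration of the variant's four cases, assuming (per this thread) that the box is filled iff (D xor E), where D is the prediction that the agent one-boxes and E is the ith-digit-of-pi condition:

```python
from itertools import product

# D: the predictor's forecast that the agent one-boxes.
# E: the ith-digit-of-pi condition.
# Filling rule for this variant: the box holds $1M iff (D xor E).
for D, E in product([True, False], repeat=2):
    million_present = D != E  # xor
    print(f"D={D!s:<5} E={E!s:<5} -> $1M present: {million_present}")

# The highlighted case: D=False, E=True still yields a full box, so seeing
# $1M and then two-boxing leaves the predictor correct -- it computed that
# (D xor E) was true, exactly as the rules prescribe.
```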

Would kids these days even recognize the old 8-bit graphics?

7Emile
I think so. Pixelly graphics are a universal symbol for video games, the same way that the steam locomotive is the standard symbol for drawing a train in countries that haven't used them for a long time. On a lot of game-related websites you'll find old-style pixel art used for decoration, especially those Space Invaders aliens.
9Zack_M_Davis
Does it matter? What was once a constraint of existing technology is now a vibrant style of its own.

The model you present seems to explain a lot of human behavior, though I admit it might just be broad enough to explain anything (which is why I was interested to see it applied and tested). There have been comments referencing the idea that many people don't reason or think but just do, and the world appears magical to them. Your model does seem to explain how these people can get by in the world without much need for thinking - just green-go, red-stop. If you really just meant to model yourself, that is fine, but not as interesting to me as the more general idea.

0MrHen
This model works extremely well for predicting other people's actions. Your point about it being broad is true. People probably shortcut decisions into behavior patterns and habits after a while. I doubt a large number of them do it consciously. I think the model is applicable to more than me. The underlying point was that some people (such as myself) use this as their belief system. I don't know how often people do that or if it is common. In other words, this model can explain and predict people's actions well but I don't know how often it ends up absorbing the role of those people's belief system.
1AdeleneDawner
I agree. This seems to give much more accurate predictions of most peoples' actual actions than modeling them as consequentialists or deontologists. (The latter is close to this, but fails to account for how people fail to generalize rules across contexts.)

I think an important point missing from your post is that this is how many (most?) people model the world. 'Causality' doesn't necessarily enter into most people's computation of true and false. It would be nice to see this idea expanded with examples of how other people are using this model, why it gives them the opinions (output) that it does, and how we can begin to approach reasoning with people who model the world in this way.

1MrHen
Why do you think this? I am not disagreeing, I am just wondering if you had any information I don't. :)

Having a functional model of what will be approved of by other people is very useful. I would hardly say that it "has nothing to do with reality." I think much of the trauma of my own childhood would have been completely avoided if I had been able to pull that off. Alas! Pity my 9-year-old self, trying to convince the other children they were wrong.

2MrHen
Sure, the functional model of predicting other people's approval is great. The problem with what I did is organizing all of my beliefs by situation. These things aren't tied to Reality. They are tied to perceptions. It would be the equivalent of claiming your belief system should be a Map of other people's Maps of the Territory. When none of the people around you are terribly concerned about mapping the territory, your map won't be either. Building a worldview based on other people's approval results in a worldview with all of the problems of those people. It makes a child's life easier because a child doesn't need to understand reality. At least, not the way a non-child does.
LauraABJ150

Pascal's mugging...

Anyway, if you are sure you are going to hit the reset button every time, then there's no reason to worry, since the torture will end as soon as the real copy of you hits reset. If you don't, then the whole world is absolutely screwed (including you), so you're a stupid bastard anyway.

5byrnema
Yes, the copies are depending upon you to hit reset, and so is the world.

Ah, so moral justifications are better justifications because they feel good to think about. Ah, happy children playing... Ah, lovers reuniting... Ah, the Magababga's chief warrior being roasted as dinner by our chief warrior who slew him nobly in combat...

I really don't see why we should expect 'morality' to extrapolate to the same mathematical axioms if we applied CEV to different subsets of the population. Sure, you can just define the word morality to include the sum total of all human brains/minds/wills/opinions, but that wouldn't change the fact ... (read more)

Your examples of getting tired after sex or satisfied after eating are based on current human physiology and neurochemistry, which I think most people here are assuming will no longer confine our drives after AI/uploading. How can you be sure what you would do if you didn't get tired?

I also disagree with the idea that 'pleasure' is what is central to 'wireheading.' (I acknowledge that I may need a new term.) I take the broader view that wireheading is getting stuck in a positive feedback loop that excludes all other activity, and for this to occur, anyth... (read more)

2RobinZ
I got bored with playing Gran Turismo all the time in less than a week - the timescale might change, but eventually blessed boredom would rescue me from such a loop. Edit: From most known loops of this type - I agree with your concern about loops in general.
5Kaj_Sotala
The relevant part of those examples was the fact that it is possible to disentangle pleasure from the desire to keep doing the pleasurable thing. Yes, we could upgrade ourselves to a posthuman state where we don't get tired after eating or sex, and want to keep doing it all the time. But it wouldn't be impossible to upgrade us to a state where pleasure and wanting to do something didn't correlate, either. I believe the commonly used definition for 'wireheading' mainly centers around pleasure, but your question is also important.

I'd be interested in seeing your reasoning written out in a top-level post. 2:1 seems beyond optimistic to me, especially if you give AI before uploading 9:1, but I'm sure you have your reasons. Explaining a few of these 'personally credible stories,' and what classes you place them in such that they sum to 10% total, may be helpful. This goes for why you think FAI has such a high chance of succeeding as well.

Also, I believe I used the phrase 'outside view' incorrectly, since I didn't mean reference classes. I was interested to know if there are people who are not part of your community that help you with number crunching on the tech-side. An 'unbiased' source of probabilities, if you will.

4MichaelVassar
I think of my community as essentially consisting of the people who are willing to do this sort of analysis, so almost axiomatically no. The simplest reason for thinking that FAI is (relatively) likely to succeed is the same reason for thinking that slavery ending or world peace are more likely than one might assume from psychology or from economics, namely that people who think about them are unusually motivated to try to bring them about.
LauraABJ100

I don't see why Darwinian evolution would necessarily create humanoid aliens in other environments -- sure, arguing that they are likely to have structures similar to eyes to take advantage of EM waves makes sense, and even arguing that they'll have a structure similar to a head, where a centralized sensory/decision-making unit like a brain exists, makes sense, but walking on two legs? Even looking at the more intelligent life-forms on our own planet we find a great diversity of structure: from apes to dolphins to elephants to octopi... All I'd say we can really gather from this argument is that aliens will look like creatures and not like flickering light rays or crystals or something incomprehensibly foreign.

4Zachary_Kurtz
No reasonable scientific evidence would suggest so. Your supposition is most likely correct. The OP's scientist's conjecture is anthropocentric drivel.

Your argument is interesting, but I'm not sure if you arrived at your 1% estimate by specific reasoning about uploading/AI, or by simply arguing that paradigmatic 'surprises' occur frequently enough that we should never assign more than a 99% chance to something (theoretically possible) not happening.

I can conceive of many possible worlds (given AGI does not occur) in which the individual technologies needed to achieve uploading are all in place, and yet are never put together for that purpose due to general human revulsion. I can also conceive of global-... (read more)

6MichaelVassar
Paradigmatic surprises vary a lot in how dramatic they are. X-rays and the double slit deserved WAY lower probabilities than 1%. I'm basically going on how convincing I find the arguments for uploading first, and trying to maintain calibrated confidence intervals. I would not bet 99:1 against uploading happening first. I would bet 9:1 without qualm. I would probably bet 49:1. I find it very easy to tell personally credible stories (no outlandish steps) where uploading happens first for good reasons. The probability of any one of those stories happening may be much less than 1%, but they probably constitute exemplars of a large class. Assigning a 1% probability to uploading not happening in a given decade when it could happen, due to politics and/or revulsion, seems much too low. Decade-to-decade correlations could be pretty high but not plausibly near 1, so given civilization's long-term survival, uploading is inevitable once the required tech is in place; but it's silly to assume civilization's long-term survival. I don't really think that outside views are that widely applicable a methodology, and if there isn't an obvious place to look for one there probably isn't one. The buck for judgment and decision-making has to stop somewhere, and stopping with deciding on reference classes seems silly in most situations. That said, I share your concern. I'm sure that there is a bias in the community of interested people, but I think that the community's most careful thinkers can and do largely avoid it. I certainly think bad outcomes are more likely than good ones, but I think that the odds are around 2:1 rather than 100:1.
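The decade-to-decade point can be made concrete with a toy calculation (the per-decade probabilities below are illustrative placeholders, not Vassar's numbers): under independence, even a high per-decade chance of something not happening compounds away, which is why the 'never happens' view needs correlations near 1.

```python
# Chance that uploading never happens across N decades, assuming
# independence between decades (illustrative probabilities only).
def p_never(p_not_in_one_decade: float, decades: int) -> float:
    return p_not_in_one_decade ** decades

for p in (0.99, 0.9, 0.5):
    print(f"per-decade P(not happening)={p}: "
          f"P(never in 10 decades)={p_never(p, 10):.3f}")
# 0.99 -> 0.904; 0.9 -> 0.349; 0.5 -> 0.001
```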

I actually did reflect after posting that my probability estimate was 'overconfident,' but since I don't mind being embarrassed if I'm wrong, I'm placing it where I actually believe it to be. Many posts on this blog have been dedicated to explaining how completely difficult the task of FAI is and how few people are capable of making meaningful contributions to the problem. There seems to be a panoply of ways for things to go horribly wrong in even minute ways. I think 1 in 10,000, or even 1 in a million, is being generous enough with the odds that the... (read more)

5MichaelVassar
I would be very surprised if uploading were easier than AI -- maybe slightly more surprised than I would be by cold fusion being real, but with the sort of broad probabilities I use that's still a bit over 1%. AGI is terribly difficult too. It's not FAI or uploading, but very high-caliber people have failed over and over. The status quo points to AGI before FAI, but the status quo continually changes, both due to trends and due to radical surprises. The world wouldn't have to change more radically than it has numerous times in the past for the sanity waterline to rise far enough that people capable of making significant progress towards AGI reliably understood that they needed to aim for FAI or for uploading instead. Once, Newton could unsurprisingly be a Christian theist and an alchemist. By the mid 20th century the priors against Einstein being a theist were phenomenal, and in fact he wasn't one (his Spinozism is closer to what we call atheism than what most people call atheism is). I don't think that extreme low probabilities are self-defeating to me, though they might be for some people; I just disagree with them.

Interesting. I remember my brother saying, "I want to be frozen when I die, so I can be brought back to life in the future," when he was child (somewhere between ages 9-14, I would guess). Probably got the idea from a cartoon show. I think the idea lost favor with him when he realized how difficult a proposition reanimating a corpse really was (he never thought about the information capture aspect of it.)

Well, I look at it this way:

I place the odds of humans actually being able to resuspend a frozen corpse near zero.

Therefore, in order for cryonics to work, we would need some form of information-capture technology that would scan the intact frozen brain and model the synaptic information in a form that could be 'played.' This is equivalent to the technology needed for uploading.

Given the complicated nature of whole brain simulations, some form of 'easier' quick and dirty AI is vastly more likely to come into being before this could take place.

I place... (read more)

1MichaelVassar
By default AI isn't friendly, but independent of SIAI succeeding does it really make sense to have 99% confidence in humanity as a whole not doing a given thing which is critical for our survival correctly or in FAI being impossibly difficult not merely for humans but for the gradually enhanced transhumans which humanity could technologically self-modify into if we don't wipe ourselves out? If we knew how to cheaply synthetically create 'clicks' of the type discussed in this post we would already have the tech to avoid UFAI indefinitely, enabling massive self-enhancement prior to work on FAI.

A question for Eliezer and anyone else with an opinion: what is your probability estimate of cryonics working? Why? An actual number is important, since otherwise cryonics is an instance of Pascal's mugging. "Well, it's infinitely more than zero and you can multiply it by infinity if it does work" doesn't cut it for me. Since I place the probability of a positive singularity diminishingly small (p<0.0001), I don't see a point in wasting the money I could be enjoying now on lottery tickets, or spending the social capital and energy on something that will make me seem insane.

-3drimshnick
And there is also the downside risk even if it does work - what if you are reanimated to be a slave of some non-FAI warlord! From this example, we can see that the probability of successful cryo actually resulting in a negative outcome is at least as big as the probability of non-FAI winning out.* So the question actually reduces to a classic heaven-and-hell or nothing argument - would you rather the chance of heaven with the possibility of hell, or neither? (*Of course, if the non-FAI sees no need to even have you around and doesn't even bother thawing you out and just kills you, this is a negative outcome also, as you've wasted lots of money.)
9Eliezer Yudkowsky
My estimate of the core technology working would be "it simply looks like it should work", which in terms of calibration should probably go to 90% or 80% or something like that. Estimates of cryonics organizations staying alive are outside the range of my comparative advantage in predictions, but I'll note that I tend to think in terms of them staying around for 30 years, not 300 years. The weakest link in the chain is humankind's overall probability of surviving. This is generally something I've refused to put a number on, with the excuse that I don't know how to estimate the probability of doing the "impossible" - though for those who insist on using silly reference classes, I should note that my success rate on the AI-Box Experiment is 60%. (It's at least possible, though, that once you're frozen, you would have no way of noticing all the Everett branches where you died - there wouldn't be anyone who experienced that death.)
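One way to combine these pieces into a single number is a simple conjunctive estimate (a toy sketch: apart from the midpoint of the "90% or 80%" figure above, every number below is an illustrative placeholder that no one in the thread endorses):

```python
# Toy conjunctive estimate of P(cryonics works). All numbers are
# illustrative placeholders except p_tech_works, the midpoint of the
# "90% or 80%" calibration mentioned above.
p_tech_works = 0.85
p_org_survives = 0.5        # placeholder: the organization lasts long enough
p_humanity_survives = 0.3   # placeholder: no existential catastrophe first

p_revival = p_tech_works * p_org_survives * p_humanity_survives
print(f"P(revival) ~= {p_revival:.3f}")  # ~0.128 with these placeholders
```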

This is obviously true, but I'm not suggesting that all people will become heroin junkies. I'm using heroin addiction as an example of where neurochemistry changes directly alter preferences and therefore the utility function -- i.e., the 'utility function' is not a static entity. Neurochemistry differences among people are vast, and heroin doesn't come close to a true 'wire-head,' and yet some percentage of normal people are susceptible to having it alter their preferences to the point of death. After uploading/AI, interventions far more invasive and complete th... (read more)

1mattnewport
I find the prospect of an AI changing people's preferences to make them easier to satisfy rather disturbing. I'm not really worried about people changing their own preferences or succumbing en masse to wireheading. It seems to me that if people could alter their own preferences, then they would be much more inclined to move their preferences further away from a tendency toward wireheading. I see a lot more books on how to resist short-term temptations (diet books, books on personal finance, etc.) than I do on how to make yourself satisfied with being fat or poor, which suggests that generally people prefer preference changes that work in their longer-term rather than short-term interests.

Combination of being broke, almost dying, mother-interference, naltrexone, and being institutionalized. I think there are many that do not quit though.

2mattnewport
There are people who die from their drug habits but there are also many recovered former addicts. There are also people who sustain a drug habit without the rest of their life collapsing completely, even a heroin habit. It is clearly possible for people to make choices other than just taking another hit.

There's a far worse problem with the concept of 'utility function' as a static entity than that different generations have different preferences: The same person has very different preferences depending on his environment and neurochemistry. A heroin addict really does prefer heroin to a normal life (at least during his addiction). An ex-junkie friend of mine wistfully recalls how amazing heroin felt and how he realized he was failing out of school and slowly wasting away to death, but none of that mattered as long as there was still junk. Now, it's no... (read more)

1Blueberry
What's wrong with wireheading? Seriously. Heroin is harmful for numerous health and societal reasons, but if we solve those problems with wireheading, I don't see the problem with large portions of humanity choosing ultimate pleasure forever. We could also make some workarounds: for instance, timed wireheading, where you wirehead for a year and then set your brain to disable wireheading for another year, or a more sophisticated Fun Theory based version of wireheading that allows for slightly more complex pleasures.
1mattnewport
Why did your ex-junkie friend quit? That may suggest a possible answer to your dilemma.