It seems to me that one needs to place a large amount of trust in one's future self to implement such a strategy. It also requires that you be able to predict your future self's utility function. If you have a difficult time predicting what you will want and how you will feel, it becomes difficult to calculate the utility of any given precommitment. For example, I would be unconvinced that deciding to eat a donut now means that I will eat a donut every day, and that not eating a donut now means I will not eat a donut every day. Knowing that I want a don...
I know that feeling, but I don't know how conscious it is. Basically, when the outcome matters in a real, immediate way and is heavily dependent on my actions, I get calm and go into 'I must do what needs to be done' mode. When my car lost traction in the rain and spun on the highway, I probably saved my life by reasoning how best to get control of it, pumping the brake, and getting it into a clearing away from other vehicles/trees, all within a time frame that was under a minute. Immediately afterwards the thoughts running through my head were not, 'Oh f...
Ok- folding a fitted sheet is really fucking hard! I don't think that deserves to be on that list, since it makes no difference whatsoever in life whether you properly fold a fitted sheet or just kinda bundle it up and stuff it away. Not being able to deposit a check, mail a letter, or read a bus schedule, on the other hand, can get you into trouble when you actually need to do those things. Here's to not caring about linen care!
That's kind of my point-- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend not to involve 'perfect' predictors, but rather other flawed agents. The decision to cooperate or not cooperate is thus dependent on the calculated utility of doing so.
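To make the "calculated utility" point concrete, here is a minimal sketch of deciding whether to cooperate with a flawed (imperfect) predictor. The payoff values and accuracy figures are hypothetical, chosen only for illustration; the point is just that as the other agent's predictive accuracy rises, cooperation can overtake defection in expected utility.

```python
def expected_utility(action, predictor_accuracy, payoffs):
    """Expected payoff of `action` against an agent that correctly
    predicts (and mirrors) your action with probability
    `predictor_accuracy`, and guesses wrong otherwise."""
    other = "defect" if action == "cooperate" else "cooperate"
    return (predictor_accuracy * payoffs[(action, action)]
            + (1 - predictor_accuracy) * payoffs[(action, other)])

# Hypothetical prisoner's-dilemma-style payoffs:
# (my action, the other agent's action) -> my payoff
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "defect"):    1,
    ("defect",    "cooperate"): 5,
}

for acc in (0.5, 0.8, 0.99):
    eu_c = expected_utility("cooperate", acc, payoffs)
    eu_d = expected_utility("defect", acc, payoffs)
    print(f"accuracy={acc}: cooperate={eu_c:.2f}, defect={eu_d:.2f}")
```

With these numbers, a coin-flip predictor (accuracy 0.5) makes defection the better bet, while a fairly reliable one (0.8 or above) flips the calculation in favor of cooperating.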
"I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand."
Agreed. It would be like opening the first box, finding the million dollars, and then having someone explain Newcomb's problem to you as you consider whether or not to open the second. My thought would be, "Ha! Omega was WRONG!!!! " laughing as I dove into the second box.
edit: Because there was no contract made between TDT agents before the first box was opened, there seems to be no reason to honor that contract, which was drawn afterwards.
Ok, so as I understand timeless decision theory, one wants to honor the precommitments that one would have made if the outcome actually depended on the answer, regardless of whether the outcome actually depends on the answer. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless decision theoretical agents (including your future selves), and therefore big wins can be had all around for all, especially when trying to predict your own future behavior.
So, if you buy the idea that...
This is a truly excellent post. You bring the problem that we are dealing with into a completely graspable inferential distance and set up a mental model that essentially asks us to think like an AI and succeeds. I haven't read anything that has made me feel the urgency of the problem as much as this has in a really long time...
This is true. We were (and are) in the same social group, so I didn't need to go out of my way for repeated interaction. Had I met him once and he failed to pick up my sigs, then NO, we would NOT be together now... This reminds me of a conversation I had with Silas, in which he asked me, "How many dates until....?" And I stared at him for a moment and said, "What makes you think there would be a second if the first didn't go so well?"
Self help usually fails because people are terrible at identifying what their actual problems are. Even when they are told! (Ahh, sweet, sweet denial.) As a regular member of the (increasingly successful) OB-NYC meetup, I have witnessed a great deal of 'rationalist therapy,' and frequently we end up talking about something completely different from what the person originally asked for therapy for (myself included). The outside view of other people (preferably rationalists) is required to move forward on the vast majority of problems. We should also no...
You are very unusual. I love nerds too, and am currently in an amazing relationship with one, but even I have my limits. He needed to pursue me or I wouldn't have bothered. I was quite explicitly testing, and once he realized the game was on, he exceeded expectations. But yeah, there were a couple of months there when I thought, 'To hell with this! If he's not going to make a move at this point, he can't know what he's doing, and he certainly won't be any good at the business...'
Are you intending to do this online or meet in person? If you are actually meeting, what city is this taking place in? Thanks.
I agree that these virtue ethics may help some people with their instrumental rationality. In general I have noticed a trend at lesswrong in which popular modes of thinking are first shunned as being irrational and not based on truth, only to be readopted later as being more functional for achieving one's stated goals. I think this process is important, because it allows one to rationally evaluate which 'irrational' models lead to the best outcome.
It seems that one way society tries to avoid the issue of 'preemptive imprisonment' is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.
Dear Tech Support, Might I suggest that the entire Silas-Alicorn debate be moved to some meta-section. It has taken over the comments section of an instrumentally useful post, and may be preventing topical discussion.
I have always been curious about the effects of mass death on human genetics. Is large-scale death from plague, war, or natural disaster likely to have much effect on the genetics of cognitive architecture, or are outcomes generally too random? Is there evidence for what traits are selected for by these events?
Most people commenting seem to be involved in science and technology (myself included), with a few in business. Are there any artists or people doing something entirely different out there?
To answer the main question, I am an MD/PhD student in neurobiology.
Awe, this made my night! Welcome to all!
Sure, one can always look at the positive aspects of reality, and many materialists have even tried to put a positive spin on the inevitability of death without an afterlife. But it should not be surprising that what is real is not always what is most beautiful. There is a panoply of reasons not to believe things that are not true, but greater aesthetic value does not seem to be one of them. There is an aesthetic value in the idea of 'The Truth,' but I would not say that this outweighs all of the ways in which fantasy can be appealing for most people....
Thank you for saying this outright. I was appalled by Scott's lack of epistemic rigor and how irresponsible he was at using his widely-read platform and trust as a physician to fool people into thinking cutting out a major organ has very little risk. Maybe he really did just fool himself, but I don't think that is an excuse when your whole deal is being the guy with good epistemics who looks at medical research. A comment he made later about guilting 40,000 randomly selected Americans into donating indicates clearly that he has an Agenda.... (read more)