Comment author: MBlume 16 January 2016 06:48:21AM 9 points [-]

I think for me the problem is that I'm not being Bayesian. I can't make my brain assign 50% probability in a unified way. Instead, half my brain is convinced the hotel's definitely behind me, half is convinced it's ahead, they fail to cooperate on the epistemic prisoner's dilemma and instead play tug-of-war with the steering wheel. And however I decide to make up my mind, they don't stop playing tug-of-war with the steering wheel.

Comment author: SatvikBeri 16 January 2016 08:56:57PM 3 points [-]

My brain often defaults to thinking of these situations in terms of potential loss, and I find the CFAR technique of reframing it as potential gain helpful. For example, my initial state might be "If I go ahead at full speed and the hotel is behind me, I'll lose half an hour. But if I turn around and the hotel is ahead of me, I'll also lose time." The better state is "By default, driving at half speed might get me to the hotel in 15 minutes if I'm going in the right direction, and I'll save ~8 minutes by going faster. Even if the hotel is behind me, I'll save time by driving ahead faster."
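The arithmetic behind this reframing can be sketched with made-up numbers; the 7.5-mile leg below is an assumption chosen only to match the "15 minutes at half speed" figure in the comment, not anything stated there:

```python
def travel_time(distance_miles, speed_mph):
    """Minutes to cover a distance at a constant speed."""
    return 60.0 * distance_miles / speed_mph

leg = 7.5  # miles left to the hotel (hypothetical, for illustration)

half_speed = travel_time(leg, 30)   # 15.0 minutes
full_speed = travel_time(leg, 60)   # 7.5 minutes
saved = half_speed - full_speed     # 7.5 minutes, the "~8" in the comment

# The saving applies whichever way the hotel turns out to be: if it is
# behind you, the wasted stretch gets driven twice, and driving faster
# still halves the time spent on it.
print(f"half speed: {half_speed} min, full speed: {full_speed} min, saved: {saved} min")
```

The point of the gain framing is that the saving is the same in both branches, so the uncertainty about direction doesn't affect the speed decision.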

Comment author: Jack_LaSota 16 January 2016 06:33:38PM 4 points [-]

Anyone have a better procedure for fixing this than the following?

  1. Notice the feeling.
  2. Treat it as a signal that your S1 wants you to search for cheaper ways to figure out which option is right than continuing to drive. Search for cheaper ways and execute them. Make it a thorough search and show your S1 the thoroughness of your search. Acknowledge the awfulness of "drive back and forth in an expensive search pattern" and only choose that as a last resort.
  3. If you don't immediately become much more certain of which way the hotel is, and the "go 30mph" feeling does not go away, treat it as a signal that your S1 thinks the thought process by which you chose (under evidence-starvation) is wrong, which does not necessarily mean that the conclusion is wrong.
  4. List the ways your S1 thinks you're biased that are screwing up your evidence-starved reasoning.
  5. Perform sanity-inducing rituals to counter those biases. (Think about your actual goal of getting to the hotel as soon as possible; forgive yourself for maybe driving past it; imagine all 4 outcomes, (60mph forward, 60mph backward) x (get to the hotel on the next try after this, don't get to the hotel on the next try after this), and how you would feel about each of them.)
  6. If the feeling is still there, this procedure has failed.
Comment author: SatvikBeri 16 January 2016 08:24:32PM 5 points [-]

My procedure probably has a similar cost, but is more general:

  1. State my goal(s), e.g. "get to the hotel"
  2. Find the point of highest uncertainty towards the goal, e.g. "not sure if the hotel is ahead or behind me"
  3. Come up with plans for reducing the uncertainty, e.g. "find the next gas station and ask someone"
  4. Check whether the plan I have actually feels like it'll work

Note that this can be applied pretty broadly, e.g. to business strategy, software design, making friends etc.

Comment author: SatvikBeri 23 December 2015 10:19:22PM 2 points [-]

The process of revival is imperfect, and pieces of memories are frequently missing. None of your loved ones remember you, and some of them are in permanent Alzheimer's-like states. One person claims to have been close to pre-revival you, but you don't remember them. Having felt the pain of being rejected by your closest friend, you decide to trust them. That turns out to be an elaborate scam, possibly motivated by pure sadism, and you're now alone in a world you don't recognize and where you have to be suspicious of everyone you meet.

Comment author: SatvikBeri 23 December 2015 10:12:50PM 1 point [-]

Playing off of #2: The process of revival also allows for essentially infinite cloning. Unable to reconcile this with a desire for uniqueness, people decided that revived humans aren't quite real, and don't have legal rights. Thousands of copies of you are cloned or simulated for human experimentation, which has become extremely common now that it can be accurately done without hurting "real" humans. No version of you is ever revived in a context you would enjoy, because after all, you don't count as real.

Comment author: SatvikBeri 26 June 2015 04:35:02PM 1 point [-]

One approach I've been working with is sharing models rather than arguments. For example, nbouscal and I recently debated the relative importance of money for effective altruism. It turned out that our disagreement came down to a difference in our models of self-improvement: he believes that personal growth mostly comes from individual work and learning, while I believe that it mostly comes from working with people who have skills you don't have.

Any approach that started with detailed arguments would have been incredibly inefficient, because it would have taken much longer to find the source of our disagreement. Starting with a high-level approach that described our basic models made it much easier for us to home in on arguments that we hadn't thought about before and make better updates.

Comment author: SatvikBeri 26 June 2015 04:20:52PM 0 points [-]

I've found my motivation often depends much more on external factors than on the task itself. For example, I worked on Math for essentially every waking hour at Berkeley (and completed all the Master's courses when I was 19 as a result), and I worked on programming/data science tasks 80 hours/week at one job. But I've had very similar types of tasks in different environments and found it quite difficult to put in anything near the same number of hours.

I've found that my motivation seems to be constraint-based: if I feel like I'm getting enough sleep, enough socialization, enough food etc. then I find it very easy to work a lot. But if any one of these is lacking then my motivation plummets. In particular, all the environments where I worked exceptionally hard were ones where I was surrounded by people who felt like part of "my tribe": simply being around people whom I like isn't enough.

Comment author: John_Maxwell_IV 19 June 2015 06:57:00AM 3 points [-]

That suggests one way to motivate yourself to do something is to surround yourself with other people who are doing it badly.

Comment author: SatvikBeri 26 June 2015 04:15:02PM 1 point [-]

I've found a mixed approach helpful: spend some time with people who don't know how to do X, because you can add a lot of value to their lives by showing them how to do it better. Spend some time with people who are much better at X than you, so you consistently improve (and have new things to teach).

I think most people tend to be far too hesitant to teach things they know, because they know that someone else understands it better. But if that person isn't doing the work to teach, then simply talking about what you know can be incredibly valuable.

Comment author: Anders_H 26 June 2015 03:59:39PM *  11 points [-]

I don't believe you can obtain an understanding of the idea that "correlation does not imply causation" from even a very deep appreciation of the material in Statistics 101. These courses usually make no attempt to define confounding, comparability, etc. If they try to define confounding, they tend to use incoherent criteria based on changes in the estimate. Any understanding is almost certainly going to have to originate from outside of Statistics 101; unless you take a course on causal inference based on directed acyclic graphs, it will be very challenging to get beyond memorizing the teacher's password.

Comment author: SatvikBeri 26 June 2015 04:10:48PM 10 points [-]

Agree completely, and I'll also point out that at least for me, a very shallow understanding of the ideas in Causality did much more to help me understand correlation vs. causation, confounding etc. than any amount of work with Statistics 101. And this was enormously practical–I was able to make significantly better financial decisions at Fundation due to understanding concepts like Simpson's Paradox on a system 1 level.
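Simpson's Paradox is easy to see in a few lines of code. The numbers below are the classic kidney-stone-style illustration, not anything from the comment: a treatment that wins inside every subgroup can still lose in the pooled data when the subgroup sizes differ.

```python
# (successes, trials) per subgroup for two treatment arms, loosely
# modeled on the classic kidney-stone example
data = {
    "small cases": {"A": (81, 87),   "B": (234, 270)},
    "large cases": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    """Success rate as a fraction."""
    return successes / trials

for group, arms in data.items():
    a, b = rate(*arms["A"]), rate(*arms["B"])
    assert a > b  # A wins inside each subgroup
    print(f"{group}: A={a:.0%} vs B={b:.0%}")

# Pool the subgroups and the ranking flips: A got most of the hard
# cases, so its overall rate is dragged down below B's.
pooled = {
    arm: rate(sum(data[g][arm][0] for g in data),
              sum(data[g][arm][1] for g in data))
    for arm in ("A", "B")
}
print(f"pooled: A={pooled['A']:.0%} vs B={pooled['B']:.0%}")  # B now "wins"
```

The flip happens because subgroup (case difficulty) is a confounder of the treatment-outcome relationship, which is exactly the kind of structure Statistics 101 rarely gives you the vocabulary to describe.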

Comment author: ChristianKl 28 October 2014 05:39:20PM 0 points [-]

If you say paper, do you mean real physical paper? If so, do you digitize it in some way?

Comment author: SatvikBeri 02 November 2014 12:45:35AM 0 points [-]

It depends on the problem. For problems where outline-based thinking works well, I use Checkvist, which is very similar to Workflowy. If a problem doesn't conform to that format I'll generally use pen & paper. I don't usually digitize paper, although I might copy certain useful insights into my spaced repetition deck on ThoughtSaver.

In response to Power and difficulty
Comment author: Ben_LandauTaylor 25 October 2014 02:54:09PM 6 points [-]

I've noticed a related phenomenon where, when someone acquires a new insight, they judge its value by how difficult it was to understand, instead of by how much it improves their model of the world. It's the feeling of "Well, I hadn't thought of that before, but I suppose it's pretty obvious." But of course this is a mistake because the important part is "hadn't thought of that before," no matter whether you think you could've realized it in hindsight. (The most pernicious version of this is "Oh, yeah, I totally knew that already. I just hadn't made it so explicit.")

Comment author: SatvikBeri 28 October 2014 05:23:08PM 2 points [-]

A while back I deliberately switched from thinking of new ideas primarily in my head to thinking on paper, using notebooks or text editors. I had a strong, intuitive sense that the quality of my insights had dropped, and that they had nearly stopped coming. But when I spent five minutes writing down the ideas I'd had under each system, I found that I had substantially more insights thinking on paper, and those insights were usually better. Because they were easier to obtain, I wasn't valuing them as much.
