Highly recommend kazerad, for Scott-level insights about human behavior. Here's his analysis of 4chan's anonymous culture. Here's another insightful essay of his. And a post on memetics. And these aren't necessarily the best posts I've read by him, just the three I happened to find first.

By the way, I'm really averse to the label "hidden rationalists". It's like complimenting people by saying "secretly a member of our ingroup, but just doesn't know it yet". Which simultaneously presupposes the person would want to be a member of our ingroup, and also that all worthwhile people are secretly members of our ingroup and just don't know it yet.

Here are the ten I thought of:

  • decorations for your house/apartment
  • a musical instrument
  • lessons for the musical instrument
  • nice speakers (right now I just have computer speakers and they suck)
  • camping equipment
  • instruction books for crafts you want to learn (I'm thinking stuff like knitting, sewing etc.)
  • materials for those crafts
  • gas money / money for motels, so you can take a random road trip to a place you've never been before
  • gym membership
  • yoga classes (or martial arts or whatever)

Also I totally second whoever said "nice kitchen knives". I got one as a Christmas present once, and it's probably the best holiday gift I've ever received.

Read more things that agree with what you want to believe. Avoid content that disagrees with it or criticizes it.

I don't have an answer, but I would like to second this request.

This post demonstrates a common failure of LessWrong thinking, where it is assumed that there is one right answer to something, when in fact this might not be the case. There may be many "right ways" for a single person to think about how much to give to charity. There may be different "right ways" for different people, especially if those people have different utility functions.

I think you probably know this, I am just picking on the wording, because I think that this wording nudges us towards thinking about these kinds of questions in an unhelpful way.

I think that we should have fewer meta posts like this. We spend too much time trying to optimize our use of this website, and not enough time actually just using the website.

Thanks for this post! I also spend far too much time worrying about inconsequential decisions, and it wouldn't surprise me if this is a common problem on LessWrong. In some sense, I think that rationality actually puts us at risk for this kind of decision anxiety, because rationality teaches us to look at every situation and ask, "Why am I doing it this way? Is there a different way I could do it that would be better?" By focusing on improving our lives, we end up overthinking our decisions. And we tend to frame these things as optimization problems: not "How can I find a good solution for X?", but "How can I find the best solution for X?"

When we frame everything as optimization, the perfect can easily become the enemy of the good. Why? Suppose you're trying to solve problem X, and you come up with a pretty decent solution, x. If you are constantly asking how to improve things, then you will focus on all the negative aspects of x that make it suboptimal. On the other hand, if you accept that some things just don't need to be optimized, you can learn to be content with what you have; you can focus on the positive aspects of x instead.

I think this is how a lot of us develop decision anxiety, actually. In general, we feel anxiety about a decision when we know it's possible for things to go wrong. The worse the possible consequences, the more anxiety we feel. And the thing is, when we focus on the downsides of our decisions, we end up with negative feelings about them. The more negative feelings we have about every decision we make, the more it seems like making a decision is an inherently fraught endeavor. Something in our minds says, "Of course I should feel anxiety when making decisions! Every time I make a decision, the result always feels really bad!"

Based on all of this, I'm trying to remedy my own decision anxiety by focusing on the positive more, and trying to ignore the downsides of decisions that I make. Last weekend, I was also looking for a new apartment. I visited two places, and they both looked great, but each of them had its downsides. One was in the middle of nowhere, so it was really nice and quiet, but very inaccessible. The other was in a town, and was basically perfect in terms of accessibility, but if you stood outside, you could vaguely hear the highway. At first I was pretty stressed about the decision, because I was thinking about the downsides of each apartment. And my friend said to me, "Wow, this is going to be a hard decision." But then I realized that both apartments were really awesome, and I'd be very happy in either of them, so I said, "Actually, this is a really easy decision." Even if I accidentally picked the 'wrong' apartment, I would still be very happy there.

But here's the thing: whether I'm happy with my decision will depend on my mindset as I live in the apartment. I ended up picking the accessible apartment where you can hear the highway a little. If I spend every day thinking "Wow, I hate that highway, I should have chosen the other apartment," then I'll regret my decision (even though the other place would have also had its faults). But if I spend every day thinking "Wow, this apartment is beautiful, and so conveniently located!", then I won't regret my decision at all.

I think it's worth including inference on the list of things that make machine learning difficult. The more complicated your model is, the more computationally expensive it is to do inference in it, which means researchers often have to limit themselves to a much simpler model than they'd actually prefer, just to keep inference tractable.
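
To make that concrete, here's a rough toy sketch (my own illustration with made-up numbers, not anything from the post): if you do exact inference by brute-force enumeration over n binary variables, you have to sum over all 2^n configurations, so even a moderately sized model blows up fast.

```python
# Toy illustration (assumed setup, not a real research model): exact inference
# by brute-force enumeration in a fully connected pairwise model over n binary
# variables. The 2^n sum over configurations is what makes richer models
# intractable in practice.
import itertools
import math
import time

def brute_force_partition_function(n, pairwise_weight=0.5):
    """Sum unnormalized probabilities over every configuration of n binary variables."""
    total = 0.0
    for config in itertools.product([0, 1], repeat=n):
        score = sum(pairwise_weight * config[i] * config[j]
                    for i in range(n) for j in range(i + 1, n))
        total += math.exp(score)
    return total

for n in (4, 8, 12, 16):
    start = time.time()
    brute_force_partition_function(n)
    print(f"n={n:2d}  configurations={2 ** n:6d}  seconds={time.time() - start:.2f}")
```

Approximate methods (variational inference, MCMC, and so on) exist precisely because this exact sum is hopeless for anything big, but they bring their own accuracy and complexity trade-offs.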

Analogies are pervasive in thought. I was under the impression that cognitive scientists basically agree that a large portion of our thought is analogical, and that we would be completely lost without our capacity for analogy? But perhaps I've only been exposed to a narrow subsection of cognitive science, and there are many other cognitive scientists who disagree? Dunno.

But anyway I find it useful to think of analogy in terms of hierarchical modeling. Suppose you have a bunch of categories, but you don't see any relation between them. So maybe you know the categories "dog" and "sheep" and so on, and you understand both what typical dogs and sheep look like, and how a random dog or sheep is likely to vary from its category's prototype. But then suppose you learn a new category, such as "goat". If you keep categories totally separate in your mind, then when you first see a goat, you won't relate it to anything you already know. And so you'll have to see a whole bunch of goats before you get the idea of what goats are like in general. But if you have some notion of categories being similar to one another, then when you see your first goat, you can think to yourself "oh, this looks kind of like a sheep, so I expect the category of goats to look kind of like the category of sheep". That is, after seeing one goat and observing that it has four legs, you can predict that pretty much all goats also have four legs. That's because you know that number-of-legs is a property that doesn't vary much in the category "sheep", and you expect the category "goat" to be similar to the category "sheep". (Source: go read this paper, it is glorious.)
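
Here's a tiny toy sketch of that transfer, with numbers I made up (so it's an illustration of the idea, not what the paper actually does):

```python
# Toy sketch of the goat/sheep idea (made-up data, not from the cited paper):
# learn how much "number of legs" varies *within* familiar categories, then
# borrow that spread when predicting a brand-new category from one example.
import numpy as np

# Leg-count observations for categories we already know well.
known_categories = {
    "dog":   np.array([4, 4, 4, 4, 4, 3]),   # one unlucky three-legged dog
    "sheep": np.array([4, 4, 4, 4, 4, 4]),
    "cat":   np.array([4, 4, 4, 4, 4, 4]),
}

# Pool the within-category standard deviations: legs barely vary inside a category.
within_sd = np.mean([obs.std(ddof=1) for obs in known_categories.values()])

# Now we see our very first goat, and it has four legs.
first_goat_legs = 4

# Prediction for future goats: centered on the one observation, with the spread
# we learned from the other categories. Because that spread is tiny, a single
# goat is enough to conclude that pretty much all goats have four legs. A
# property that varies a lot within categories (body weight, say) wouldn't
# transfer nearly as sharply.
print(f"Predicted goat legs: {first_goat_legs} ± {within_sd:.2f}")
```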

Anyway I basically think of analogy as a way of doing hierarchical modeling. You're trying to understand some situation X, and you identify some other situation Y, and then you can draw conclusions about X based on your knowledge of Y and on the similarities between the two situations. So yes, analogy is an imprecise reasoning mechanism that occasionally makes errors. But that's because analogy is part of the general class of inductive reasoning techniques, all of which can lead you astray sometimes.
