wolverdude

I've only recently become involved in the LessWrong community after a prolonged existential crisis that pulled me away from Evangelical Christianity. Outside of catastrophizing about the end of the world and the meaninglessness of the cosmos, I do software stuff and generally enjoy my life.

Blog: https://wolverdude.wordpress.com/about/

"(Classes with lots of) Tests are amazing - you get to fail so much!" - no one says this.

Classes with lots of tests are amazing, thanks to the testing effect.

What I meant by "info bubble" is just all the things I'm aware of at this point in time. Presumably there are actions outside of my info bubble that are more beneficial (or more harmful) than any inside it, simply because the things I'm unaware of encompass a much larger expanse of possibility space. This is all the more true the more insular my life has been up to the present moment. The fact that I didn't "sow my wild oats", as the expression goes, did spare me from some harm, but it also stopped me from discovering things that could have set my life on a different, more optimal path.

I suppose I didn't mean "wholistic" in quite that way. That said, maybe I should have. Perhaps as I level up according to my goals, I'll discover that I need to do these things too (or others I haven't thought about).

This leads me into a tangential question about goal-setting in general: What if I don't currently have enough information to know what I should be aiming for? What if there are unknown unknowns out there? How do I account for that?

Thanks for your thoughts; they're all good ones! I've actually already engaged with the Rationality literature enough to have encountered most of them (I'm about 2/3 through The Sequences at the moment).

I think after reading people's responses to this post, I realize that the scenario I outline here is even less likely than I originally thought. There are wrong ways to apply rationality, it's true. But those are the failure modes @LeBleu alluded to. For everyone else, Rationality isn't a destination, it's a path. The updating is continuous. What happened for me is that I came from a different epistemological tradition and jumped ship to this one. Bushwhacking across terrain to get from one path to another is no fun. But now that I'm on a path, I'm not going to get into that kind of trouble again unless I leave the Rationality path entirely. So then the only question I need to be this worried about is whether the Rationality path is correct, and I'm pretty well convinced of that... but still willing to update, I suppose.

Go one step more meta, and realize that perfectionism itself is imperfect

The point about perfectionism is a good one. I've already recognized that perfectionism is not rational, though. For me it's more of a compulsive behavior, a default mental state in which I assume that information is free and get down on myself for not already knowing it and executing perfectly on it. Perhaps I actually can fully overcome that, but I'm not expecting to (which would be the perfectionist thing to expect anyway ;)

Thanks for the welcome!

This is super helpful. It sounds like you've lived the thing that I'm only hypothesizing about here. Hopefully "Can't wait for round three" isn't sarcastic. This first round for me was extremely painful, but it sounds like round 2 was possibly more pleasant for you.

I like the framework you're using now, and I'm gonna try to condense it into my own words to make sure I understand what you mean. Basically, you're trying to optimize around keeping the various and conflicting hopes, needs, fears, etc. within you at least relatively cool with your choices. It also seems like there might be an emphasis on choosing to pursue the things that you find most meaningful. Is that correct? I would actually love to hear more on this. Are there good posts / sequences on it?

Regarding examples: I'll need to spend some time brainstorming and collating, but I'll post some here when I get to it. I tend to do the lazy thing of using examples to derive a general principle and then discarding the examples. This is probably not good practice with respect to Rationality.

Thanks for the tips!

Learning how to critique arguments is a skill you can study.

I suppose that large portions of The Sequences are devoted to precisely the task of critiquing arguments without requiring a contrary position. It's kind of an extension of a logical syntax check, but the question isn't just whether an argument is complete and deductively sound, but also whether it's empirically sound and Bayesianly sound.

It's gonna take me a while to master those techniques, but it's a worthy goal. I'm not 100% sure I can do it on the timeline I need, but I can at least practice and start developing the habits.

Reading about those who have taken Rationalist-style approaches to get to obviously crazy conclusions is also useful, for seeing where people are prone to going off the rails, so you can avoid the same mistakes, or recognize the signs when others do.

I love reading about failure modes! Not sure why I find it so fascinating. Maybe it's connected to the perfectionism? Speaking of...

if you aren't failing, you aren't taking big enough risks to find something new.

I consider my greatest failure in life to be that I haven't failed enough. I have too few experiences of what works and what doesn't, I failed to make critical course-corrections because they lay outside my info bubble, and I missed out on many positive life experiences along with the negative ones.