It's the little things.
Using LessWrong as part of my internet-as-television recreational candy diet reminds me of stuff:
Tim Ferriss' books The 4-Hour Workweek and The 4-Hour Body are full of deeply annoying rubbish, but there's quite a bit of brilliance in there too.
Most of all it just made me sad and depressed. The whole "expected utility" thing is the worst part. If you take it seriously, you'll forever procrastinate having fun, because you can always imagine that postponing some terminal goal and instead doing something instrumental will yield even more utility in the future. So if you enjoy mountain climbing, you'll postpone it until it is safer, or until after the Singularity when you can have much more safe mountain climbing. And then after the Singularity you won't be able to do it either, because the resources of a galactic civilization are better spent fighting hostile aliens and, after that, fixing the heat death of the universe. There's always more expected utility in fixing problems; it is always about expected utility, never about gathering or experiencing utility. And if you don't believe in risks from AI, then there is some other existential risk, and if there is no risk, then it is poverty in Obscureistan. And if there is nothing at all, then you should try to update your estimates, because if you're wrong you'll lose more than you would by trying to figure out whether you're wrong. You never hit diminishing returns. And in the end all your complex values are replaced by the tools and heuristics that were originally meant to help you achieve them. It's like becoming one of those people who work all their lives saving money for a retirement they only reach once they're old and have lost most of their interests.
What on EARTH are you trying to -
Important note: Currently in NYC for 20 days with the sole purpose of finding out how to make rationalists in the Bay Area (and elsewhere) have as much fun as the ones in NYC. I am doing this because I want to save the world.
XiXiDu, I have been reading your comments for some time, and it seems like your reaction to this whole rationality business is unique. You take it seriously, or at least part of you does; but your perspective is sad and strange and pessimistic. Yes, even more pessimistic than Roko or Mass Driver. What you are taking away from this blog is not what other readers are taking away from it. The next step in your rationalist journey may require something more than a blog can provide.
From one aspiring rationalist to another, I strongly encourage you to talk these things over, in person, with friends who understand them. If you are already doing so, please forgive my unsolicited advice. If you don't have friends who know Less Wrong material, I encourage you to find or make them. They don't have to be Less Wrong readers; many of my friends are familiar with different bits and pieces of the Less Wrong philosophy without ever having read Less Wrong.
(Who voted down this sincere expression of personal feeling? Tch.)
This is why remembering to have fun along the way is important. Remember: you are an ape. The Straw Vulcan is a lie. The unlived life is not so worth examining. Remember to be human.
This is why remembering to have fun along the way is important.
I know that argument. But I can't get hold of it. What can I do, play a game? I'll have to examine everything in terms of expected utility. If I want to play a game, I'll have to remind myself that I really want to solve Friendly AI and therefore have to regard "playing a game" as an instrumental goal rather than a terminal goal. And in this sense, can I justify playing a game? You don't die if you are unhappy; I could just work overtime as a street builder to earn even more money to donate to the SIAI. There is no excuse to play a game, because being unhappy for a few decades cannot outweigh the expected utility of a positive Singularity, and it doesn't reduce your efficiency as much as playing games and going to movies do. There is simply no excuse to have fun. And that will be the same after the Singularity too.
The reason it's important is that it counts as basic mental maintenance, just as eating reasonably and exercising a bit and so on are basic bodily maintenance. You cannot achieve any goal without basic self-care.
On the problem of solving Friendly AI in particular: the current leader in the field has noticed that his work suffers if he doesn't allow himself play time. You are allowed play time.
You are not a moral failure for not personally achieving an arbitrary degree of moral perfection.
You sound depressed, which would mean your hardware was even more corrupt and biased than usual. This won't help achieve a positive Singularity either. Driving yourself crazier with guilt at not being able to work for a positive Singularity won't help your effectiveness, so you need to stop doing that.
You are allowed to rest and play. You need to let yourself rest. Take a deep breath! Sleep! Go on holiday! Talk to friends you trust! See your doctor! Please do something. You sound like you are dashing your mind to pieces against the rock of the profoundly difficult, and you are not under any obligation to do such a thing, to punish yourself so.
As a result of this thinking, are you devoting every moment of your time and every Joule of your energy towards avoiding a negative Singularity?
No?
No, me neither. If I were to reason this way, the inevitable result for me would be that I couldn't bear to think about it at all and I'd live my whole life neither happily nor productively, and I suspect the same is true for you. The risk of burning out and forgetting about the whole thing is high, and that doesn't maximize utility either. You will be able to bring about bigger changes much more effectively if you look after yourself. So, sure, it's worth wondering if you can do more to bring about a good outcome for humanity - but don't make gigantic changes that could lead to burnout. Start from where you are, and step things up as you are able.
Could you expand on why offering this advice makes sense to you in this situation, when it hasn't otherwise?
It's not obvious to me that after rejecting Pascal's Mugging there is anything left to say about XiXiDu's fears or any reason to reject expected utility maximization(!!!).
Well, insofar as it isn't obvious why Pascal's Mugging should be rejected by a utility maximizer, his fears are legitimate. It may very well be that a utility maximizer will always be subject to some form of possible mugging. If that issue isn't resolved, the fact that people are rejecting Pascal's Mugging doesn't help matters.
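(A minimal sketch of the worry, not from the thread: for an expected-utility maximizer with an unbounded utility function, a claimed payoff can always be made large enough to dominate whatever tiny probability you assign to it. All numbers below are invented purely for illustration.)

```python
# Illustrative only: how an unbounded utility function lets a tiny probability
# of a huge claimed payoff dominate the expected-utility calculation.

def expected_utility(outcomes):
    """Sum of probability * utility over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

# Option A: refuse the mugger and keep your $5 (certain, modest utility).
refuse = expected_utility([(1.0, 5.0)])

# Option B: hand over the $5. You assign only a tiny probability to the
# mugger's claim, but the claimed payoff is astronomically large.
tiny_p = 1e-20
huge_payoff = 1e40          # stand-in for "3^^^3 lives saved"
pay = expected_utility([(tiny_p, huge_payoff), (1 - tiny_p, -5.0)])

print(refuse, pay)   # pay > refuse: the mugger wins as long as utility is unbounded
# Bounding utility, or discounting probability with the size of the claim,
# are the usual proposed escapes; neither is established as the resolution here.
```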
I'm going to be poking at this question from several angles-- I don't think I've got a complete and concise answer.
I think you've got a bad case of God's Eye Point of View-- thinking that the most rational and/or moral way to approach the universe is as though you don't exist.
The thing about GEPOV is that it isn't total nonsense. You can get more truth if you aren't territorial about what you already believe, but since you actually are part of the universe and your own point of view is the only one you have, trying to leave yourself out completely is its own flavor of falseness.
As you are finding out, ignoring your needs leads to incapacitation. It's like saying that we mustn't waste valuable hydrocarbons on oil for the car engine. All the hydrocarbons should be used for gasoline! This eventually stops working. It's important to satisfy needs which are of different kinds and operate on different time scales.
You may be thinking that, since fun isn't easily measurable externally, the need for it isn't real.
I think you're up against something which isn't about rationality exactly-- it's what I call the emotional immune system. Depression is partly about not being able to resist (or even being attracted to) ideas which cause damage.
An emotional immune system is about having affection for oneself, and if it's damaged, it needs to be rebuilt, probably a little at a time.
On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?
* "Politics is the Mind-Killer": This got me to take a serious look at my political views. I have changed a few of my positions, and my level of confidence in several others. I've also (mostly) stopped using people's political views to decide whether they are "on my side" or not.
* A Human's Guide to Words: I have gotten better at catching myself when I say unclear or potentially misleading things. I have also learned to stop getting involved in arguments over the meanings of words, or whether some entity belongs in an ill-defined category.
* Overall, Less Wrong made me less of a jerk. I am able to have discussions with people on things where we don't agree without thinking of them as evil or inferior. Better yet, I know when not to have the discussion in the first place. This saves both me and other people a lot of time and unpleasant feelings. I have a more realistic self-assessment, which lets me avoid missing opportunities to win or being disappointed when I overreach. I can understand other people a bit better and my social interactions are somewhat improved. Note that this last is kind of hard to test, so I don't know how big the effect is.
Evidence for each point:
The largest effect in my life has been in fighting mental illness, both indirectly by making me seek help and identify problems that I need to work with, and directly by getting rid of delusions.
It's also given me the realization that I have long-term goals and that I might actually be able to make progress on them. Without that I'd never have put in the effort to get an actual education, for example, or even realized that it was important.
These are just the largest and most concrete things; I have a hard time thinking of ANYTHING positive in my life that's not due to rationality.
On Less Wrong, I found thoroughness. Society today advocates speed over effectiveness: 12-year-old college students over soundly rational adults, people who can Laplace-transform diff-eqs in their heads over people who can solve logical paradoxes. On Less Wrong, I found people who could detach themselves from emotions and appearances, and look at things with an iron rationality.
I am sick of people who presume to know more than they do, those who "seem" smart rather than actually being smart.
People on Less Wrong do not seem to be something they are not. "Seems, madam! Nay, it is; I know not 'seems.'" (Hamlet)
What cool/important/useful things has rationality gotten you?
What sticks out for me are some bad things. "Comforting lies" is not an ironic phrase, and since ditching them I haven't found a large number of comforting truths. So far I haven't been able to marshal my true beliefs against my bad habits -- I come to Less Wrong partly to try to understand why.
I've benefited immensely, I think, but more from the self-image of being a person who wants/tries to be rational than from anything direct. I'm not particularly luminous or impervious to procrastination. However, valuing looking critically at things even when feelings are involved has been incredibly important. I could have taken a huge, life-changing wrong turn. My sister took that turn, and she's never been really interested in rationality, so I guess that's evidence for self-image as a (wanna-be) rationalist being important, though it could've been something else.
So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.
More importantly, she got different things out of it than I have.
Off the top of my head, I've learned...
Where she got...
I've only recently started making a habit of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?
What cool/important/useful things has rationality gotten you?