So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.

More importantly, she got different things out of it than I have.

Off the top of my head, I've learned...

That's on top of becoming a little bit more effective at a lot of things, with many fewer problems.
(I could post more on the consequences of this, but I'm going for a different point.)

Where she got...

  • a habit of learning new skills
  • better time-management habits
  • an awesome community
  • more initiative
  • the idea that she can change the world

I've only recently started making a habit of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?

What cool/important/useful things has rationality gotten you?


It's the little things.

Using LessWrong as part of my internet-as-television recreational candy diet reminds me of stuff:

  • Be less dumb. Little things, every day. This in itself makes everything go better.
  • Respond, not react. (This one can get lost in conversation. Trying!)
  • Don't hold others' irrationality against them. (Lump of lard theory. Beware anthropomorphising humans.)
  • Ask yourself "How do you know that?"
  • Ask "what's this for?" That's one of my favourite universal questions ever and dissolves remarkable quantities of rubbish.
  • Be more curious. Picking out random e-books is a current avenue for this. Or deciding on my daily commute to actually look for interesting things about these streets I've walked countless times.

Tim Ferriss' books The Four-Hour Work Week and The Four-Hour Body are full of deeply annoying rubbish, but there's quite a bit of brilliance in there too.

  • 80/20 everything that makes demands of your time or resources. This has reached the point where in the last several months I've actually experienced and savoured the considerable luxury of boredom, after thinking I'd never have time for such a thing in the foreseeable future (looking after
…
atucker (score 4, 13y)
What does this one mean? I think it has something to do with incorporating information from what just happened and coming up with an effective response, rather than going with your immediate gut reaction. Is that what you were going for?
David_Gerard (score 6, 13y)
Pretty much. I mean: when something upsetting happens and you get a visceral reaction, try to catch that and engage your brain. I expect it should ideally also be applied when something pleasing happens.
[anonymous] (score 4, 13y)
Hey what does this mean?
David_Gerard (score 5, 13y)
The Pareto principle: 80% of the effects come from 20% of the effort. Really quite a lot of things show a power law. Ferriss considers this a useful principle to apply to everything. And it is - I don't necessarily throw out the unproductive 80% on a given measure (I might want it for other reasons), but it is interesting to see if there's a ready win there. And it's useful even when you work an ordinary salaried day job, as I do (e.g. these two weeks, when my boss is on holiday and I'm doing his job as well as my own).
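A toy illustration of that 80/20 shape (made-up numbers, nothing from Ferriss himself): sample heavy-tailed "payoffs" for 100 tasks and check what share of the total the top fifth accounts for.

```python
# Toy Pareto-principle illustration with invented numbers:
# sample heavy-tailed "payoffs" for 100 tasks and check what share of the
# total comes from the top 20% of tasks.
import random

random.seed(0)
payoffs = sorted((random.paretovariate(1.16) for _ in range(100)), reverse=True)

top_20_share = sum(payoffs[:20]) / sum(payoffs)
print(f"Top 20% of tasks account for {top_20_share:.0%} of the total payoff")
```

With a tail exponent near 1.16 the split comes out around 80/20 on average; the exact figure jumps around from run to run, which is rather the point of checking your own numbers instead of assuming the ratio.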
jsalvatier (score 2, 13y)
Can you expand on asking "what's this for?". Maybe an example or two? I'm not clear on what the context is.
David_Gerard (score 1, 13y)
  • "Why am I doing this task?" - applies to pretty much any action
  • "What's the best operating system?" - in what context?
  • "What is the morally right course of action here?"
  • "Which of these is a better movie/record?"
Particularly useful when you spot a free-floating comparative, and it seems to have wider application (e.g. you just asked it about itself). Try it yourself, for all manner of values of "this"!

Most of all it just made me sad and depressed. The whole "expected utility" thing being the worst part. If you take it seriously you'll forever procrastinate having fun, because you can always imagine that postponing some terminal goal and instead doing something instrumental will yield even more utility in the future. So if you enjoy mountain climbing you'll postpone it until it is safer, or after the Singularity when you can have much more safe mountain climbing. And then after the Singularity you won't be able to do it, because the resources for a galactic civilization are better used to fight hostile aliens and afterwards fix the heat death of the universe. There's always more expected utility in fixing problems; it is always about expected utility, never about gathering or experiencing utility. And if you don't believe in risks from AI then there is some other existential risk, and if there is no risk then it is poverty in Obscureistan. And if there is nothing at all then you should try to update your estimates, because if you're wrong you'll lose more than by trying to figure out if you're wrong. You never hit diminishing returns. And in the end all your complex values are replaced by the tools and heuristics that were originally meant to help you achieve them. It's like you'll have to become one of those people who work all their lives to save money for a retirement they'll only reach when they're old and have lost most of their interests.

What on EARTH are you trying to -

Important note: Currently in NYC for 20 days with sole purpose of finding out how to make rationalists in Bay Area (and elsewhere) have as much fun as the ones in NYC. I am doing this because I want to save the world.

katydee (score 3, 13y)
Saving the world by overloading it with fun? Now where have I heard that before...

XiXiDu, I have been reading your comments for some time, and it seems like your reaction to this whole rationality business is unique. You take it seriously, or at least part of you does; but your perspective is sad and strange and pessimistic. Yes, even more pessimistic than Roko or Mass Driver. What you are taking away from this blog is not what other readers are taking away from it. The next step in your rationalist journey may require something more than a blog can provide.

From one aspiring rationalist to another, I strongly encourage you to talk these things over, in person, with friends who understand them. If you are already doing so, please forgive my unsolicited advice. If you don't have friends who know Less Wrong material, I encourage you to find or make them. They don't have to be Less Wrong readers; many of my friends are familiar with different bits and pieces of the Less Wrong philosophy without ever having read Less Wrong.

(Who voted down this sincere expression of personal feeling? Tch.)

This is why remembering to have fun along the way is important. Remember: you are an ape. The Straw Vulcan is a lie. The unlived life is not so worth examining. Remember to be human.

This is why remembering to have fun along the way is important.

I know that argument. But I can't get hold of it. What can I do, play a game? I'll have to examine everything in terms of expected utility. If I want to play a game I'll have to remind myself that I really want to solve friendly AI and therefore have to regard "playing a game" as an instrumental goal rather than a terminal goal. And in this sense, can I justify playing a game? You don't die if you are unhappy; I could just work overtime as a street builder to earn even more money to donate to the SIAI. There is no excuse to play a game, because being unhappy for a few decades cannot outweigh the expected utility of a positive Singularity, and it doesn't reduce your efficiency as much as playing games and going to movies. There is simply no excuse to have fun. And that will be the same after the Singularity too.

The reason it's important is that it counts as basic mental maintenance, just as eating reasonably and exercising a bit and so on are basic bodily maintenance. You cannot achieve any goal without basic self-care.

For the problem of solving friendly AI in particular: the current leader in the field has noticed his work suffers if he doesn't allow play time. You are allowed play time.

You are not a moral failure for not personally achieving an arbitrary degree of moral perfection.

You sound depressed, which would mean your hardware was even more corrupt and biased than usual. This won't help achieve a positive Singularity either. Driving yourself crazier with guilt at not being able to work for a positive Singularity won't help your effectiveness, so you need to stop doing that.

You are allowed to rest and play. You need to let yourself rest. Take a deep breath! Sleep! Go on holiday! Talk to friends you trust! See your doctor! Please do something. You sound like you are dashing your mind to pieces against the rock of the profoundly difficult, and you are not under any obligation to do such a thing, to punish yourself so.

As a result of this thinking, are you devoting every moment of your time and every Joule of your energy towards avoiding a negative Singularity?

No?

No, me neither. If I were to reason this way, the inevitable result for me would be that I couldn't bear to think about it at all and I'd live my whole life neither happily nor productively, and I suspect the same is true for you. The risk of burning out and forgetting about the whole thing is high, and that doesn't maximize utility either. You will be able to bring about bigger changes much more effectively if you look after yourself. So, sure, it's worth wondering if you can do more to bring about a good outcome for humanity - but don't make gigantic changes that could lead to burnout. Start from where you are, and step things up as you are able.

Mycroft65536 (score 9, 13y)
Let's say the Singularity is likely to happen in 2045, like Kurzweil says, and you want to maximize the chances that it's positive. The idea is that you should get to work making as much money as you can to donate to SIAI, or start researching fAGI (depending on your talents). What you do tomorrow doesn't matter. What matters is the average output over the next 35 years. This is important because a strategy where you have an emotional breakdown in 2020 fails. If you get so miserable you kill yourself, you've failed at your goal. You need to make sure that this fallible agent, XiXiDu, stays at a very high level of productivity for the next 35 years. That almost never happens if you're not fulfilling the needs your monkey brain demands. Immediate gratification isn't a terminal goal, you've figured this out, but it does work as an instrumental goal on the path to a greater goal.
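To put rough, purely illustrative numbers on "average output over the next 35 years":

$$35 \text{ yr} \times 0.6 \text{ (sustainable pace)} = 21 \qquad > \qquad 9 \text{ yr} \times 1.0 \text{ (flat out until a 2020 breakdown)} = 9$$

A sustainable 60% beats nine years at 100% followed by nothing, on total output.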
MatthewBaker (score 0, 13y)
Ditto
Gray (score 7, 13y)
One thing that I've come up with when thinking about personal budgeting, of all things, is the concept of granularity. For someone who is poor, the situation is analogous to yours. The dad of the household, let's say, might be having a similar attack of conscience as you are over whether he should buy a candy bar at the gas station when there are bills that can't be paid. But it turns out that a small enough purchase, such as a really cheap candy bar (for the sake of argument), doesn't actually make any difference. No bill is going to go from being unpaid to paid because that candy bar was bought rather than unbought. So relax. Buy a candy bar every once in a while. It won't make a difference.
atucker (score 4, 13y)
I took too long to link to this.
Eliezer Yudkowsky (score 0, 13y)
I don't tell people this very often. In fact I'm not sure I can recall ever telling anyone this before, but then I wouldn't necessarily remember it. But yes, in this case and in these exact circumstances, you need to get laid.

Could you expand on why offering this advice makes sense to you in this situation, when it hasn't otherwise?

[anonymous] (score 6, 13y)
Totally can relate to this. I was dealing with depression long before LW, but improved rationality sure made my depression much more fun and exciting. Sarcastically, I could say that LW gave me the tools to be really good at self-criticism. I can't exactly give you any advice on this, as I'm still dealing with this myself and I honestly don't really know what works or even what the goal exactly is. Just wanted to say that the feeling "this compromise 'have some fun now' crap shouldn't be necessary if I really were rational!" is only too familiar. It led me to constantly question my own values and how much I was merely signalling (mostly to myself). Like, "if I procrastinate on $goal or if I don't enjoy doing $maximally_effective_but_boring_activity, then I probably don't really want $goal", but that just leads into deeper madness. And even when I understand (from results, mostly, or comparisons to more effective people) that I must be doing something wrong, I break down until I can exactly identify what it is. So I self-optimize so that I can be better at self-optimizing, but I never get around to doing anything. (That's not to say that LW was overall a negative influence for me. Quite the opposite. It's just that adding powerful cognitive tools to a not-too-sane mind has a lot of nasty side-effects.)
atucker (score 5, 13y)
If I understood this correctly (as you procrastinating on something, and concluding that you don't actually want it), then most people around here call that akrasia. Which isn't really something to go mad about. Basically, your brain is a stapled-together hodgepodge of systems which barely work together well enough to have worked in the ancestral environment. Nowadays, we know and can do much more stuff. But there's no reason to expect that your built-in neural circuitry can turn your desire to accomplish something into tangible action, especially when your actions are related to accomplishing your goal only in the long term, and non-viscerally.
[anonymous] (score 3, 13y)
It's not just akrasia, or rather, the implication of strong akrasia really weirds me out. The easiest mechanism to implement goals would not be vulnerable to akrasia. At best it would be used to conserve limited resources, but that's clearly not the case here. In fact, some goals work just fine, while others fail. This is especially notable when the same activity can have very different levels of akrasia depending on why I'm doing it. Blaming this on hodge-podge circuitry seems false to me (in the general case). So I look for other explanations, and signaling is a pretty good starting point. What I thought was a real goal was just a social facade, e.g. I don't want to study, I just want to be seen as having a degree. (Strong evidence for this is that I enjoy reading books for some personal research when I hated literally the same books when I had to read them for class.) Because of this, I'm generally not convinced that my ability to do stuff is broken (at least not as badly), but rather, that I'm mistaken about what I really want. But as Xixidu mentioned, when you start applying rationality to that, you end up changing your own values in the process and not always in a pretty way.
NancyLebovitz (score 2, 13y)
At least at my end, I'm pretty sure that part of my problem isn't that signalling is causing me to override my real desires; it's that something about feeling that I have to signal leads to me not wanting to cooperate, even if the action is something that I would otherwise want to do, or at least not mind all that much. Writing this has made the issue clearer for me than it's been, but it's not completely clear-- I think there's a combination of fear and anger involved, and it's a goddam shame that my customers (a decent and friendly bunch) are activating stuff that got built up when I was a kid.
atucker (score 1, 13y)
Fair enough, I guess I misunderstood what you were saying. I guess it's not guaranteed to turn out well, and when I was still working through my value-conflicts it wasn't fun. In the end though, the clarity that I got from knowing a few of my actual goals and values feels pretty liberating. Knowing (some of) what I want makes it soooo much easier for me to figure out how to do things that will make me happy, and with less regret or second thoughts after I decide.
atucker (score 6, 13y)
Integrate your utility over time. There are plenty of cheap (in terms of future utility) things that you can do now to enjoy yourself. Like, eating healthy feels nice and keeps you in better shape for getting more utility. You should do it. Friends help you achieve future goals, and making and interacting with them is fun. Reframe your "have to"s as "want to"s, if that's true.
XiXiDu (score 7, 13y)
I know, it would be best to enjoy the journey. But I am not that kind of person. I hate the eventual conclusion being drawn on LW. I am not saying that it is wrong, which is the problem. For me it only means that life sucks. If you can't stop caring then life sucks. For a few years after I was able to overcome religion I was pretty happy. I decided that nothing matters and I could just enjoy life, that I am not responsible. But that seems inconsistent, as caring about others is caring about yourself. You also wouldn't run downstairs faster than necessary just because it is fun to run fast; it is not worth a fracture. And there begins the miserable journey where you never stop to enjoy anything because it is not worth it. It is like rationality is a parasite that hijacks you and turns you into a consequentialist that maximizes only rational conduct.
David_Gerard (score 6, 13y)
Memetic "basilisk" issue: this subthread may be important. This (combined with the likes of Roko's meltdown, as Nisan notes above) appears to be evidence of the possibility of LessWrong rationalism as a memetic basilisk. (Thus suggesting the "basilisks" so far, e.g. the forbidden post, may have whatever's problematic in the LW memeplex as a prerequisite, which is ... disconcerting.) As muflax notes: What's a proper approach to use with those who literally can't handle that much truth?
NancyLebovitz (score 5, 13y)
Good question, though we might also want to take a careful look at whether there's something a little askew about the truth we're offering. How can the folks who can't handle this stuff easily or perhaps at all be identified? Rationality helps some depressed people and knocks others down farther. Even if people at risk can be identified, I can't imagine a spoiler system which would keep all of them away from the material. On the other hand, maybe there are ways to warn off at least some people.
TheOtherDave (score 4, 13y)
Well, that question is hardly unique to this forum. My own preferred tactic depends on whether I consider someone capable of making an informed decision about what they are willing to try to handle -- that is, they have enough information, and they are capable of making such judgments, and they aren't massively distracted. If I do, I tell them that there's something I'm reluctant to tell them, because I'm concerned that it will leave them worse off than my silence, but I'm leaving the choice up to them. If not, then I keep quiet. In a public forum, though, that tactic is unavailable.
timtyler (score 2, 13y)
It is common for brains to get hijacked by parasites: Dan Dennett: Ants, terrorism, and the awesome power of memes
NancyLebovitz (score 5, 13y)
Thanks for the link. I note that when Dennett lists dangerous memes, he skips the one that gets the most people killed-- nationalism.
MatthewBaker (score 0, 13y)
Don't despair, help will come :)
[anonymous] (score 0, 13y)
I think you need to be a bit more selfish. The way I see it, the distant future can most likely take care of itself, and if it can't, then you won't be able to save it anyway. If you suddenly were given a very good reason to believe that things are going to turn out Okay regardless of what you personally do, what would you do then?
Emile (score 3, 13y)
That, and the rest, doesn't sound rational at all. "Maximizing expected utility" doesn't mean "systematically deferring enjoyment"; it's just a nerdy way of talking about tradeoffs when taking risks. The concept of "expected utility" doesn't seem to have much relevance at the individual level; it's more something for comparing government policies, or moral philosophies, or agents in game theory/decision theory ... or maybe also some narrow things like investing in stock. But not for deciding whether to go rock-climbing or not.
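For a concrete sense of what that "tradeoffs when taking risks" talk looks like, here is a sketch with invented numbers for the rock-climbing case; the framework only says to stay home if the risk-weighted downside actually outweighs the fun.

```python
# Toy expected-utility comparison for "go rock-climbing this weekend?"
# Every number here is invented for illustration.
p_injury = 0.001       # assumed chance of a serious injury
u_fun = 10.0           # enjoyment of the trip
u_injury = -2000.0     # disutility of a serious injury
u_stay_home = 1.0      # a quiet weekend at home

eu_climb = (1 - p_injury) * u_fun + p_injury * u_injury   # = 7.99
eu_home = u_stay_home                                     # = 1.0
print("climb" if eu_climb > eu_home else "stay home")
```

Nothing in that calculation says "defer all enjoyment until after the Singularity"; it just weighs one risk against one payoff.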
XiXiDu (score 6, 13y)
I agree, but I can't pinpoint what is wrong. There are other people here who went bonkers (no offense) thanks to the kind of rationality being taught on LW. Actually Roko stated a few times that he would like to have never learnt about existential risks because of the negative impact it had on his social life etc. I argued that "ignorance is bliss" can under no circumstances be right and that I value truth more than happiness. I think I was wrong. I am not referring to bad things happening to people here, but solely to the large amount of positive utility associated with a lot of scenarios that force you to pursue instrumental goals that you don't enjoy at all. Well, it would probably be better to never exist in the first place; living seems to have an overall negative utility if you are not the kind of person who enjoys being or helping Eliezer Yudkowsky. What are you doing all day, is it the most effective way to earn money or to help solve friendly AI directly? I doubt it. And if you know that and still don't do anything about it then many people here would call you irrational. It doesn't matter what you like to do, because whatever you value, there will always be more of it tomorrow if you postpone doing it today and instead pursue an instrumental goal. You can always do something, even if that means you'd have to sell your blood. No excuses there, it is watertight. And this will never end. It might sound absurd to talk about trying to do something about the heat death of the universe or trying to hack the Matrix, but is it really improbable enough to outweigh the utility associated with gaining the necessary resources to support 3^^^^3 people for 3^^^^3 years rather than a galactic civilisation for merely 10^50 years? Give me a good argument for why an FAI shouldn't devote all its resources to trying to leave the universe rather than supporting a galactic civilization for a few years? How does this differ from devoting all resources to working on friendly AI for
Perplexed (score 9, 13y)
I can. You are trying to "shut up and multiply" (as Eliezer advises) using the screwed up, totally undiscounted, broken-mathematics version of consequentialism taught here. Instead, you should pay more attention to your own utility than to the utility of the 3^^^3itudes in the distant future, and/or in distant galaxies, and/or in simulated realities. You should pay no more attention to their utility than they pay to yours. Don't shut up and multiply until someone fixes the broken consequentialist math which is promoted here. Instead, (as Eliezer also advises) get laid or something. Worry more about the happiness of the people (including yourself) within a temporal radius of 24 hours, a spatial radius of a few meters, and in your own branch of the 'space-time continuum', than you worry about any region of space-time trillions of times the extent, if that region of space-time is also millions of times as distant in time, space, or Hilbert-space phase-product. (I'm sure Tim Tyler is going to jump in and point out that even if you don't discount the future (etc.) as I recommend, you should still not worry much about the future because it is so hard to predict the consequences of your actions. Pace Tim. That is true, but beside the point!) If it is important to you (XiXiDu) to do something useful and Singularity-related, why don't you figure out how to fix the broken expected-undiscounted-utility math that is making you unhappy, before someone programs it into a seed AI and makes us all unhappy.
Eliezer Yudkowsky (score 6, 13y)
Excuse me, but XiXiDu is taking for granted ideas such as Pascal's Mugging - in fact Pascal's Mugging seems to be the main trope here - which were explicitly rejected by me and by most other LWians. We're not quite sure how to fix it, though Hanson's suggestion is pretty good, but we did reject Pascal's Mugging! It's not obvious to me that after rejecting Pascal's Mugging there is anything left to say about XiXiDu's fears or any reason to reject expected utility maximization(!!!).

It's not obvious to me that after rejecting Pascal's Mugging there is anything left to say about XiXiDu's fears or any reason to reject expected utility maximization(!!!).

Well, in so far as it isn't obvious why Pascal's Mugging should be rejected by a utility maximizer, his fears are legitimate. It may very well be that a utility maximizer will always be subject to some form of possible mugging. If that issue isn't resolved the fact that people are rejecting Pascal's Mugging doesn't help matters.

XiXiDu (score 7, 13y)
I fear that the mugger is often our own imagination. If you calculate the expected utility of various outcomes you imagine impossible alternative actions. The alternatives are impossible because you have already precommitted to choosing the outcome with the largest expected utility. There are three main problems with that:
  • You swap your complex values for a certain terminal goal with the highest expected utility; indeed, your instrumental and terminal goals converge to become the expected utility formula.
  • There is no minimum amount of empirical evidence necessary to extrapolate the expected utility of an outcome.
  • The extrapolation of counterfactual alternatives is unbounded; logical implications can reach out indefinitely without ever requiring new empirical evidence.
All this can cause any insignificant inference to exhibit hyperbolic growth in utility.
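A sketch of the failure mode in the last two bullets, with invented numbers: once the claimed utility of a counterfactual is allowed to grow without bound, an arbitrarily tiny probability still dominates the calculation.

```python
# Illustration with invented numbers: an unbounded payoff claim beats a certain,
# modest payoff no matter how negligible its probability is made.
ordinary_option = 100.0                        # certain, modest expected utility

for exponent in (10, 50, 100):
    p = 10.0 ** -exponent                      # ever-more-negligible probability
    claimed_utility = 10.0 ** (2 * exponent)   # the claim grows faster than p shrinks
    print(f"p = 1e-{exponent}: EU of the claim = {p * claimed_utility:.1e}, beats {ordinary_option}")
```

No additional empirical evidence is required at any step, which is exactly the problem.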
atucker (score 5, 13y)
I don't trust my brain's claims of massive utility enough to let it dominate every second of my life. I don't even think I know what, this second, would be doing the most to help achieve a positive singularity. I'm also pretty sure that my utility function is bounded, or at least hits diminishing returns really fast. I know that thinking my head off about every possible high-utility counterfactual will make me sad, depressed, and indecisive, on top of ruining my ability to make progress towards gaining utility. So I don't worry about it that much. I try to think about these problems in doses that I can handle, and focus on what I can actually do to help out.
XiXiDu (score 2, 13y)
Yet you trust your brain enough to turn down claims of massive utility. Given that our brains could not evolve to yield reliable intuitions about such scenarios, and given that the parts of rationality that we do understand very well in principle are telling us to maximize expected utility, what does it mean not to trust your brain? In all of the scenarios in question that involve massive amounts of utility, your uncertainty is included and being outweighed. It seems that what you are saying is that you don't trust your higher-order thinking skills and instead trust your gut feelings? You could argue that you are simply risk averse, but that would require you to set some upper bound regarding bargains with uncertain payoffs. How are you going to define and justify such a limit if you don't trust your brain? Anyway, I did some quick searches today and found out that the kind of problems I talked about are nothing new and are mentioned in various places and contexts:
  • The St. Petersburg Paradox
  • The Infinitarian Challenge to Aggregative Ethics
  • Omohundro's "Basic AI Drives" and Catastrophic Risks
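For reference, the St. Petersburg game is the cleanest statement of the problem: a fair coin is flipped until it first lands heads, and heads on the k-th flip pays $2^k$, so the expected payoff is

$$E = \sum_{k=1}^{\infty} \frac{1}{2^k} \cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty$$

even though almost all of that expectation comes from vanishingly unlikely long runs of tails.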
atucker (score 0, 13y)
I take risks when I actually have a grasp of what they are. Right now I'm trying to organize a DC meetup group, finish up my robotics team's season, do all of my homework for the next 2 weeks so that I can go college touring, and combine college visits with LW meetups. After April, I plan to start capoeira, work on PyMC, actually have DC meetups, work on a scriptable real-time strategy game, start contra dancing again, start writing a sequence based on Heuristics and Biases, improve my dietary and exercise habits, and visit Serbia. All of these things I have a pretty solid grasp of what they entail, and how they impact the world. I still want to do high-utility things, but I just choose not to live in constant dread of lost opportunity. My general strategy of acquiring utility is to help/make other people get more utility too, and multiply the effects of getting the more low-hanging fruit. The issue with long-shots like this is that I don't know where to look for them. Seriously. And since they're such long-shots, I'm not sure how to go about getting them. I know that trying to do so isn't particularly likely to work. Sorry, I said that badly. If I knew how to get massive utility, I would try to. It's just that the planning is the hard part. The best that I know to do now (note: I am carving out time to think about this harder in the foreseeable future) is to get money and build communities. And give some of the money to SIAI. But in the meantime, I'm not going to be agonizing over everything I could have possibly done better.
David_Gerard (score 8, 13y)
Well, nothing philosophically. There's probably quite a lot to say about, or rather in the aid of, one of our fellows who's clearly in trouble. The problem appears to be depression, i.e., more corrupt than usual hardware. Thus, despite the manifestations of the trouble as philosophy, I submit this is not the actual problem here.
Perplexed (score 7, 13y)
We are in disagreement then. I reject not just Pascal's mugging, but also the style of analysis found in Bostrom's "Astronomical Waste" paper. As I understand XiXiDu, he has been taught (by people who think like Bostrom) that even the smallest misstep on the way to the Singularity has astronomical consequences, and that we who potentially commit these missteps are morally responsible for this astronomical waste. Is the "Astronomical Waste" paper an example of "Pascal's Mugging"? If not, how do you distinguish (setting aside the problem of how you justify the distinction)? Do you have a link to Robin's suggestion? I'm a bit surprised that a practicing economist would suggest something other than discounting. In another Bostrom paper, "The Infinitarian Challenge to Aggregative Ethics", it appears that Bostrom also recognizes that something is broken, but he, too, doesn't know how to fix it.
XiXiDu (score 6, 13y)
Exactly. I describe my current confusion in more detail in this thread, especially the comments here and here which led me to conclude this. Fairly long comments, but I wish someone would dissolve my confusion there. I really don't care if you downvote them to -10, but without some written feedback I can't tell what exactly is wrong, how I am confused. Can be found via the Wiki: I don't quite get it.

I'm going to be poking at this question from several angles-- I don't think I've got a complete and concise answer.

I think you've got a bad case of God's Eye Point of View-- thinking that the most rational and/or moral way to approach the universe is as though you don't exist.

The thing about GEPOV is that it isn't total nonsense. You can get more truth if you aren't territorial about what you already believe, but since you actually are part of the universe and you are your only point of view, trying to leave yourself out completely is its own flavor of falseness.

As you are finding out, ignoring your needs leads to incapacitation. It's like saying that we mustn't waste valuable hydrocarbons on oil for the car engine. All the hydrocarbons should be used for gasoline! This eventually stops working. It's important to satisfy needs which are of different kinds and operate on different time scales.

You may be thinking that, since fun isn't easily measurable externally, the need for it isn't real.

I think you're up against something which isn't about rationality exactly-- it's what I call the emotional immune system. Depression is partly about not being able to resist (or even being attracted to) ideas which cause damage.

An emotional immune system is about having affection for oneself, and if it's damaged, it needs to be rebuilt, probably a little at a time.

On the intellectual side, would you want all the people you want to help to defer their own pleasure indefinitely?

Eliezer Yudkowsky (score 6, 13y)
This sounds very true and important.
NancyLebovitz (score 5, 13y)
As far as I can tell, a great deal of thinking is the result of wanting thoughts which match a pre-existing emotional state. Thoughts do influence emotions, but less reliably.
XiXiDu (score 2, 13y)
No, but I don't know what a solution would look like. Most of the time I am just overwhelmed, as it feels like everything I come up with isn't much better than tossing a coin. I just can't figure out the right balance between fun (experiencing; being selfish), moral conduct (being altruistic), utility maximization (being future-oriented) and my gut feelings (instinct; intuition; emotions). For example, if I have a strong urge to just go out and have fun, should I just give in to that urge or think about it? If I question the urge I often end up thinking about it until it is too late. Every attempt at a possible solution looks like browsing Wikipedia: each article links to other articles that again link to other articles, until you end up with something completely unrelated to the initial article. It seems impossible to apply a lot of what is taught on LW in real life.
NancyLebovitz (score 3, 13y)
Maybe require yourself to have a certain amount of fun per week?
Goobahman (score 1, 13y)
NancyLebovitz's comment I think is highly relevant here. I can only speak from my personal experience, but I've found that part of going through Less Wrong, and understanding all the great stuff on this website, is understanding the type of creature I am. At this current moment, I am comparatively a very simple one. The Singularity and Friendly AI are miles from what I am, and I am not at a point where I can emotionally take on those causes. I can intellectually, but the fact is the simple creature that I am doesn't comprehend those connections yet. I want to one day, but a baby has to crawl before it can walk. Much of what I do provides me with satisfaction, joy, happiness. I don't even fully understand why. But what I do know is that I need those emotions not just to function, but to improve, to continue the development of myself. Maybe it might help to reduce yourself to that simple creature. Understand that for a baby to do math, it has to understand symbols. Maybe what you understand intellectually, you're not yet ready to deal with in terms of emotional function. Just my two cents. Sorry if I'm not as concise as I should be. I do hope the best for you though.
timtyler (score 1, 13y)
Peace - I think that is what you meant to say. We mostly agree. I am not sure you can tell someone else what they "should" be doing, though. That is for them to decide. I expect your egoism is not of the evangelical kind. Saving the planet does have some merits though. People's goals often conflict - but many people can endorse saving the planet. It is ecologically friendly, signals concern with Big Things, paints you as a Valiant Hero - and so on. As causes go, there are probably unhealthier ones to fall in with.
Perplexed (score 3, 13y)
I'm kinda changing the subject here, but that wasn't a typo. "Pace" was what I meant to write. Trouble is, I'm not completely sure what it means. I've seen it used in contexts that suggest it means something like "I know you disagree with this, but I don't want to pick a fight. At least not now." But I don't know what it means literally, nor even how to pronounce it. My guess is that it is church Latin, meaning (as you suggest) 'peace'. 'Requiescat in pace' and all that. I suppose, since it is a foreign language word, I technically should have italicized. Can anyone help out here?
timtyler (score 5, 13y)
Latin (from pax "peace"), "with due respect offered to...", e.g. "pace Brown" means "I respectfully disagree with Brown", though the disagreement is often in fact not very respectful!
atucker (score 8, 13y)
There is a difference between negative utility and less-than-maximized utility. There are lots of people who enjoy their lives despite not having done as much as they could, even if they know that they could be doing more. It's only when you dwell on what you haven't done, aren't doing, or could have done that you actually become unhappy about it. If you don't start from maximum utility and see everything as a worse version of that, then you can easily enjoy the good things in your life.
nazgulnarsil (score 6, 13y)
you seem to be holding yourself morally responsible for future states. why? my attitude is that it was like this when I got here.
Vladimir_Nesov (score 4, 13y)
Now this looks like a wrong kind of question to consider in this context. The amount of fun your human existence is delivering, in connection with what you abstractly believe is the better course of action, is something relevant, but the details of how FAI would manage the future are not your human existence's explicit problem, unless you are working on FAI design. If it's better for FAI to spend the next 3^^^3 multiverse millennia planning the future, why should that have a reflection in your psychological outlook? That's an obscure technical question. What matters is whether it's better, not whether it has a certain individual surface feature.
Davorak (score 2, 13y)
Irrational seems like the wrong word here; after all, the person could be rational but working with a dataset that does not allow them to reach that conclusion yet. There are also people who reach that conclusion irrationally, reaching the right conclusion with a flawed (unreliable) method, but they are not more rational for having the right conclusion.
CronoDAS (score 0, 13y)
Why do you care what happens 3^^^^3 years from now?
Kaj_Sotala (score 1, 13y)
That presumes no time discounting. Time discounting is neither rational nor irrational. It's part of the way one's utility function is defined, and judgements of instrumental rationality can only be made by reference to a utility function. So there's not necessarily any conflict between expected utility maximization and having fun now: indeed, one could even have a utility function that only cared about things that happened during the next five seconds, and attached zero utility to everything afterwards. I'm obviously not suggesting that anyone should try to start thinking like that, but I do suggest introducing a little more discounting into your utility measurements. That's even without taking into account the advice about needing rest that other people have brought up, and which I agree with completely. I tried going by the "denial of pleasures" route before, and the result was a burnout which began around three years ago and which is still hampering my productivity. If you don't allow yourself to have fun, you will crash and burn sooner or later.
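A minimal sketch of what even mild discounting does to the "always defer" argument; the discount factor and payoffs below are arbitrary.

```python
# Exponential discounting: utility u received t years from now is weighted by gamma**t.
# With any gamma < 1, a payoff pushed far enough into the future eventually stops
# dominating a modest amount of fun today. Numbers are illustrative only.
gamma = 0.95            # assumed yearly discount factor
fun_today = 10.0
big_future_payoff = 1_000_000.0

for years in (10, 100, 500):
    discounted = big_future_payoff * gamma ** years
    verdict = "still beats" if discounted > fun_today else "no longer beats"
    print(f"{years:>3} years out: discounted value = {discounted:.4g} ({verdict} fun today)")
```

Whether to discount terminally is a separate question, but it is part of many people's utility functions, and it dissolves the "there is always more expected utility in waiting" trap.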
Vladimir_Nesov (score 1, 13y)
All else equal, if having less fun improves expected utility, you should have less fun. But all else is not equal, it's not clear to me that the search for more impact often leads to particularly no-fun plans. In other words, some low-hanging fun cuts are to be expected, you shouldn't play WoW for weeks on end, but getting too far into the no-fun territory would be detrimental to your impact, and the best ways of increasing your impact probably retain a lot of fun. Also, happiness set point would probably keep you afloat.
Kutta (score 1, 13y)
Couldn't you just take all this negative stuff you came up with in connection with rationality, mark it as things to avoid, and then define rationality as efficiently pursuing whatever you actually find desirable?
Vladimir_Nesov (score -1, 13y)
That would be ignoring the arguments, as opposed to addressing them. How you define "rationality" shouldn't matter for what particular substantive arguments incite you to do.
Kutta (score 2, 13y)
If you accept the "rationality is winning" definition, it makes little sense to come up with downsides about rationality, that's what I was trying to point out. It is quite similar to what you said in this comment.
Vladimir_Nesov (score -1, 13y)
A wrong way to put it. If a decision is optimal, there still remain specific arguments for why it shouldn't be taken. Optimality is estimated overall, not for any singled out argument, that can therefore individually lose. See "policy debates shouldn't appear one-sided". If, all else equal, it's possible to amend a downside, then it's a bad idea to keep it. But tradeoffs are present in any complicated decision, there will be specialized heuristics that disapprove of a plan, even if overall it's optimized. In our case, we have the heuristic of "personal fun", which is distinct from overall morality. If you're optimizing morality, you should expect personal fun to remain suboptimal, even if just a little bit. (Yet another question is that rationality can give independent boost to the ability to have personal fun, which can offset this effect.)

  • "Politics is the mind killer": This got me to take a serious look at my political views. I have changed a few of my positions, and my level of confidence on several others. I've also (mostly) stopped using people's political views to decide whether they are "on my side" or not.

  • A Human's Guide to Words: I have gotten better at catching myself when I say unclear or potentially misleading things. I have also learned to stop getting involved in arguments over the meanings of words, or whether some entity belongs in an ill-defined category.

  • Overall, Less Wrong made me less of a jerk. I am able to have discussions with people on things where we don't agree without thinking of them as evil or inferior. Better yet, I know when not to have the discussion in the first place. This saves both me and other people a lot of time and unpleasant feelings. I have a more realistic self-assessment, which lets me avoid missing opportunities to win or being disappointed when I overreach. I can understand other people a bit better and my social interactions are somewhat improved. Note that this last is kind of hard to test, so I don't know how big the effect is.

[anonymous] (score 6, 13y)
  • Realising that I was irrationally risk-averse and correcting for this in at least one major case
  • In terms of decision making, imagining that I had been created at this point in time, and that my past was a different person.
  • Utilitarian view of ethics
  • Actually having goals. Trying to live as if I was maximizing a utility function.

Evidence for each point:

  • moving from the UK to Canada to be with my girlfriend.
  • There is a faster bus I could have been using on my commute to work. I knew about the bus but I wasn't taking it. Why not? I honestly don't know. I
…
Dorikka (score 1, 13y)
If you mean by "non-moral-realist" "someone who doesn't think objective morality exists," I think that you've expressed my current reason for why I haven't read the meta-ethics sequence. Could you elaborate a bit more on why you changed your mind?
Giles (score 2, 13y)
Another point I should elaborate on. "Would you sacrifice yourself to save the lives of 10 others?" you ask person J. "I guess so", J replies. "I might find it difficult bringing myself to actually do it, but I know it's the right thing to do". "But you give a lot of money to charity" you tell this highly moral, saintly individual. "And you make sure to give only to charities that really work. If you stay alive, the extra money you will earn can be used to save the lives of more than 10 people. You are not just sacrificing yourself, you are sacrificing them too. Sacrificing the lives of more than 10 people to save 10? Are you so sure it's the right thing to do?". "Yes", J replies. "And I don't accept your utilitarian model of ethics that got you to that conclusion". What I figured out (and I don't know if this has been covered on LW yet) is that J's decision can actually be rational, if:
  • J's utility function is strongly weighted in favour of J's own wellbeing, but takes everyone else's into account too
  • J considers the social shame of killing 10 other people to save himself worse (according to this utility function) than his own death plus a bunch of others
The other thing I realised was that people with a utility function such as J's should not necessarily be criticized. If that's how we're going to behave anyway, we may as well formalize it and that should leave everyone better off on average.
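To make that concrete, here is one internally consistent assignment; the weights and numbers are invented purely to show the structure, not to defend it. Give J's own wellbeing a weight of 100 and everyone else's a weight of 1, value the social shame at 1.2 "own-death units", and suppose staying alive lets J's donations save 15 extra people over his lifetime:

$$U(\text{sacrifice}) = -100 \cdot 1 + 10 = -90$$
$$U(\text{live, donate, bear the shame}) = -100 \cdot 1.2 - 10 + 15 = -115$$

Under that function the sacrifice really is J's best option, so his answer isn't automatically irrational; it just reveals a utility function with a large shame term.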
Dorikka (score 5, 13y)
Yes, but only if this is really, truly J's utility function. There's a significant possibility that J is suffering from major scope insensitivity and failing to fully appreciate the loss of fun happening when all those people die that he could have saved by living and donating to effective charity. When I say "significant possibility", I'm estimating P>.95. Note: I interpreted "charities that really work" as "charities that you've researched well and concluded that they're the most effective ones out there." If you just mean that the charity donation produces positive instead of negative fun (considering that there exist some charities that actually don't help people), then my P estimate drops.
Giles (score 3, 13y)
It seems plausible to me that J really, truly cares about himself significantly more than he cares about other people, certainly with P > 0.05. The effect could be partly due to this and partly due to scope insensitivity but still... how do you distinguish one from the other? It seems: caring about yourself -> caring what society thinks of you -> following society's norms -> tendency towards scope insensitivity (since several of society's norms are scope-insensitive). In other words: how do you tell whether J has utility function F, or a different utility function G which he is doing a poor job of optimising due to biases? I assume it would have something to do with pointing out the error and seeing how he reacts, but it can't be that simple. Is the question even meaningful? Re: "charities that work", your assumption is correct.
Dorikka (score 0, 13y)
Considering that J is contributing a lot of money to truly effective charity, I think that, if his biases did not render him incapable of appreciating just how much fun his charity was generating, his utility function is such that he would gain more utils from the huge amount of fun generated by his continued donations, minus the social shame and minus ten people dying, than from dying himself. If he's very selfish, my probability estimate is raised (not above .95, but above whatever it would have been before) by the fact that most people don't want to die. One way to find out the source of such a decision is to tell them to read the Sequences and see what they think afterwards. The question is very meaningful, because the whole point of instrumental rationality is learning how to prevent your biases from sabotaging your utility function.
Giles (score 0, 13y)
First off, I'm using "epistemic and instrumental rationality" as defined here: http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/ If you don't believe objective morality exists, then epistemic rationality can't be applied directly to morality. "This is the right thing to do" is not a question about the "territory", so you can't determine its truth or falsehood. But I choose some actions over others and describe that choice as a moral one. The place where I changed my mind is that it's no longer enough for me to be "more moral than the average person". I want to do the best that I can, within certain constraints. This fits into the framework of epistemic rationality. I am essentially free to choose which outcomes I consider preferable to which. But I have to follow certain rules. Something can't be preferable to itself. I shouldn't be able to switch a preference back and forth just by phrasing the question differently. And so on.
Dorikka (score 0, 13y)
Okay, thanks.
Normal_Anomaly (score 0, 13y)
All the terms in this area have nearly as many definitions as they have users, but I think you'll find the meta-ethics posts to be non-morally-realist. Oft-repeated quote from those posts:
[anonymous] (score 0, 13y)
Interesting hack. I've been doing something similar with the thought process of "What happened, happened. What can I do now?"

The largest effect in my life has been in fighting mental illness, both indirectly by making me seek help and identify problems that I need to work with, and directly by getting rid of delusions.

It's also given me the realization that I have long-term goals and that I might actually have an impact on them. Without that I'd never have put in the effort to get an actual education, for example, or even realized that it was important.

These are just the largest and most concrete things, I have a hard time thinking of ANYTHING positive in my life that's not due to rationality.

David_Gerard (score 4, 13y)
Friends and loved ones are pretty good in the general case :-) But yes, learning to be less dumb is a general formula for success.
Armok_GoB (score 3, 13y)
I don't really have friends except a few online ones, and if not for rationality's effects on mental health I probably would not have the ability to interact with family. So no, not even those.
David_Gerard (score 2, 13y)
I was wondering if that was the case, hence the "general case" disclaimer.
Goobahman (score 3, 13y)
"The largest effect in my life has been in fighting mental illness," Hey Armok, I'd love to hear more details on this. Maybe do a post in discussion? Doesn't have to be elaborate or anything but I'm really curious.
Armok_GoB (score 0, 13y)
No, but I can link to a previous discussion on why not: http://lesswrong.com/lw/4ws/collecting_successes_and_deltas/3qml

On Less Wrong, I found thoroughness. Society today advocates speed over effectiveness - 12-year-old college students over soundly rational adults, people who can Laplace-transform diff-eqs in their heads over people who can solve logical paradoxes. On Less Wrong, I found people who could detach themselves from emotions and appearances, and look at things with an iron rationality.

I am sick of people who presume to know more than they do. Those that "seem" smart rather than actually being smart.

People on Less Wrong do not seem to be something they are not. ~"Seems, madam! Nay, it is; I know not 'seems.'" (Hamlet)

[anonymous] (13y)

What cool/important/useful things has rationality gotten you?

What sticks out for me are some bad things. "Comforting lies" is not an ironic phrase, and since ditching them I haven't found a large number of comforting truths. So far I haven't been able to marshal my true beliefs against my bad habits -- I come to Less Wrong partly to try to understand why.

David_Gerard (score 1, 13y)
For comfort from uncomforting truths, going meta may help: clearer thinking from more truthful data works to internalise your locus of control - or, to use the conventional term, empower you. It can take a while, and possibly some graspable results, for the prospect of a more internal locus of control to comfort.

I've benefited immensely, I think, but more from the self-image of being a person who wants/tries to be rational than from anything direct. I'm not particularly luminous or impervious to procrastination. However, valuing looking critically at things even when feelings are involved has been so incredibly important. I could have taken a huge, life-changing wrong turn. My sister took that turn, and she's never been really interested in rationality, so I guess that's evidence for self-image as a (wanna-be) rationalist being important, though it could've been something else.

[anonymous] (13y)
  • I am prone to identifying with ideas and LW style thought has helped me keep in mind that the state of the world is external, which helps me step back and allow myself the possibility that I am wrong.
  • Thinking in terms of decision theory helps me frequently ask, "how could I do this better?"
  • the notion that there are big important ideas/skills that aren't hard to learn which not everyone knows (like decision theory and knowing about heuristics and biases) led me to look for more ideas/skills like this. The ones that come to mind:
    • science of fo
…