How can I spend money to improve my life?
On ChrisHallquist's post extolling the virtues of money, the top comment is Eliezer pointing out the lack of concrete examples. Can anyone think of any? This is not just hypothetical: if I think your suggestion is good, I will try it (and report back on how it went).
I care about health, improving personal skills (particularly: programming, writing, people skills), gaining respect (particularly at work), and entertainment (these days: primarily books and computer games). If you think I should care about something else, feel free to suggest it.
I am an early-twenties programmer living in San Francisco. In the interest of getting advice useful to more than one person, I'll omit further personal details.
Budget: $50/day
If your idea requires significant ongoing time commitment, that is a major negative.
Using vs. evaluating (or, Why I don't come around here no more)
[Summary: Trying to use new ideas is more productive than trying to evaluate them.]
I haven't posted to LessWrong in a long time. I have a fan-fiction blog where I post theories about writing and literature. Topics don't overlap at all between the two websites (so far), but I prioritize posting there much higher than posting here, because responses seem more productive there.
The key difference, I think, is that people who read posts on LessWrong ask whether they're "true" or "false", while the writers who read my posts on writing want to write. If I say something that doesn't ring true to one of them, he's likely to say, "I don't think that's quite right; try changing X to Y," or, "When I'm in that situation, I find Z more helpful", or, "That doesn't cover all the cases, but if we expand your idea in this way..."
Whereas on LessWrong a more typical response would be, "Aha, I've found a case for which your step 7 fails! GOTCHA!"
It's always clear from the context of a writing blog why a piece of information might be useful. It often isn't clear how a LessWrong post might be useful. You could blame the author for not providing you with that context. Or, you could be pro-active and provide that context yourself, by thinking as you read a post about how it fits into the bigger framework of questions about rationality, utility, philosophy, ethics, and the future, and thinking about what questions and goals you have that it might be relevant to.
A question about utilitarianism and selfishness.
Utilitarianism seems to indicate that the greatest good for the greatest number generally revolves around people's feelings. A person feeling happy and confident is a desired state; a person in pain and misery is an undesirable one.
But what about taking selfish actions that hurt another person's feelings? If I'm in a relationship and breaking up with her would hurt her feelings, does that mean I have a moral obligation to stay with her? If I have an employee who is well-meaning but isn't working out, am I morally allowed to fire him? Or what about at a club? A guy is talking to a woman, and she's ready to go home with him. I could socially tool him and take her home myself, but doing so would cause him greater unhappiness than I would have felt if I'd left them alone.
In a nutshell, does utilitarianism state that I am morally obliged to curb my selfish desires so that other people can be happy?
Crossing the experiments: a baby
I've always been more of a theoretician, but it's important to try one's hand at practical problems from time to time. In that vein, I've decided to try three simultaneous experiments on major Less Wrong themes. I will aim to acquire something to protect, I will practice training a seed intelligence, and I will become more familiar with many consequences of evolutionary psychology.
In the spirit of efficiency I'll combine all these experiments into one:

She's never seen Star Wars or Doctor Who.
She's never seen David Attenborough or read J. L. Borges.
She's never had a philosophical debate.
She's never been skiing.
Never had sex, never been hugged or even been licked by a dog!
She has so much to look forward to...
(Though she'll be very boring for several months yet!)
[Link] Intelligence, a thermodynamic POV
A deeply satisfying view on intelligence here:
http://www.insidescience.org/content/physicist-proposes-new-way-think-about-intelligence/987/
The idiot savant AI isn't an idiot
A stub on a point that's come up recently.
If I owned a paperclip factory, and casually told my foreman to improve efficiency while I'm away, and he planned a takeover of the country, aiming to devote its entire economy to paperclip manufacturing (apart from the armament factories he needed to invade neighbouring countries and steal their iron mines)... then I'd conclude that my foreman was an idiot (or being wilfully idiotic). He obviously had no idea what I meant. And if he misunderstood me so egregiously, he's certainly not a threat: he's unlikely to reason his way out of a paper bag, let alone to any position of power.
If I owned a paperclip factory, and casually programmed my superintelligent AI to improve efficiency while I'm away, and it planned a takeover of the country... then I can't conclude that the AI is an idiot. It is following its programming. Unlike a human that behaved the same way, it probably knows exactly what I meant to program in. It just doesn't care: it follows its programming, not its knowledge about what its programming is "meant" to be (unless we've successfully programmed in "do what I mean", which is basically the whole of the challenge). We therefore can't conclude that it's incompetent, unable to understand human reasoning, or likely to fail.
We can't reason by analogy with humans. When AIs behave like idiot savants with respect to their motivations, we can't deduce that they're idiots.
Public Service Announcement Collection
P/S/A: There are single sentences which can create life-changing amounts of difference.
- P/S/A: If you're not sure whether or not you've ever had an orgasm, it means you haven't had one, a condition known as primary anorgasmia which is 90% treatable by cognitive-behavioral therapy.
- P/S/A: The people telling you to expect above-trend inflation when the Federal Reserve started printing money a few years back disagreed with the market forecasts, disagreed with standard economics, turned out to be actually wrong in reality, and were wrong for reasonably fundamental reasons, so don't buy gold when they tell you to.
- P/S/A: There are many many more submissive/masochistic men in the world than there are dominant/sadistic women, so if you are a woman who feels a strong temptation to command men and inflict pain on them, and you want a large harem of men serving your every need, it will suffice to state this fact anywhere on the Internet and you will have fifty applications by the next morning.
- P/S/A: Most of the personal-finance-advice industry is parasitic and/or self-deluded, and it's generally agreed on by economic theory and experimental measurement that an index fund will deliver the best returns you can get without huge amounts of effort.
- P/S/A: If you are smart and underemployed, you can very quickly check to see if you are a natural computer programmer by pulling up a page of Python source code and seeing whether it looks like it makes natural sense, and if this is the case you can teach yourself to program very quickly and get a much higher-paying job even without formal credentials.
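To make the last P/S/A concrete, here is a short snippet of the kind of ordinary Python a prospective programmer might try reading (this particular example is mine, not from the post); if you can guess roughly what it does before running it, the test is pointing in your favor:

```python
def word_counts(text):
    """Count how many times each word appears in a string,
    ignoring capitalization."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts("the quick brown fox jumps over the lazy dog"))
```

The snippet uses nothing beyond a loop, a dictionary, and two string methods; if the flow of data through it feels natural, that is the sign the P/S/A is describing.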
Estimates vs. head-to-head comparisons
(Cross-posted from my blog.)
Summary: when choosing between two options, it’s not always optimal to estimate the value of each option and then pick the better one.
Suppose I am choosing between two actions, X and Y. One way to make my decision is to predict what will happen if I do X and predict what will happen if I do Y, and then pick the option which leads to the outcome that I prefer.
My predictions may be both vague and error-prone, and my value judgments might be very hard or nearly arbitrary. But it seems like I ultimately must make some predictions, and must decide how valuable the different outcomes are. So if I have to evaluate N options, I could do it by evaluating the goodness of each option, and then simply picking the option with the highest value. Right?
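One concrete reason "estimate each option, then pick the max" can go wrong is that noisy estimates systematically flatter the winner: the option with the highest estimate tends to be one whose noise was favorable, so its estimated value overstates its true value (the "optimizer's curse"). The following toy simulation is my own illustration, not from the post, assuming independent Gaussian estimation noise:

```python
import random

random.seed(0)

def selection_bias(true_values, noise=1.0, trials=10000):
    """Estimate each option with independent Gaussian noise, pick the
    option with the highest estimate, and return the average gap between
    the winner's estimate and its true value."""
    gap = 0.0
    for _ in range(trials):
        estimates = [v + random.gauss(0, noise) for v in true_values]
        best = max(range(len(true_values)), key=lambda i: estimates[i])
        gap += estimates[best] - true_values[best]
    return gap / trials

# Ten options with identical true value: whichever one "wins" the
# estimation contest looks better than it really is, by roughly the
# expected maximum of ten standard normals (about 1.5 here).
print(selection_bias([0.0] * 10))
```

The bias grows with the number of options compared, which is one reason a direct head-to-head comparison of two candidates can behave differently from ranking many independent estimates.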
Post ridiculous munchkin ideas!
A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells. Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else. Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death. Or figures out how to build the real-life version of the cycle of infinite wish spells.
It seems that many here might have outlandish ideas for ways of improving our lives. For instance, a recent post advocated installing really bright lights as a way to boost alertness and productivity. We should not adopt such hacks into our dogma until we're pretty sure they work; however, one way of knowing whether a crazy idea works is to try implementing it, and you may have more ideas than you're planning to implement.
So: please post all such lifehack ideas! Even if you haven't tried them, even if they seem unlikely to work. Post them separately, unless some other way would be more appropriate. If you've tried some idea and it hasn't worked, it would be useful to post that too.