Comment author: brainoil 05 September 2013 11:41:37AM 8 points [-]

I was instructed long ago by a wise editor, "If you understand something you can explain it so that almost anyone can understand it. If you don't, you won't be able to understand your own explanation." That is why 90% of academic film theory is bullshit. Jargon is the last refuge of the scoundrel.

Roger Ebert

Comment author: brainoil 30 July 2013 02:39:29AM 5 points [-]

Completely agree. For example, if you're feeling suicidal, please don't kill yourself at least until you have moved to another country.

Comment author: brainoil 30 July 2013 02:04:58AM *  0 points [-]

Well, curing cancer might be more important than finding a cure for the common cold, but that doesn't necessarily mean you should be trying to cure cancer instead of trying to get rid of the common cold, unless of course you have some inner quality that makes you uniquely capable of curing cancer. There are other considerations.

Reducing existential risks is important. But suppose it is not as important as ending world poverty. There's also a lot of uncertainty: it may be that no matter how hard we try, something will come out of the blue and kill us all (three hours from now). Still, if you are the only one doing something about existential risks, and you are capable of reducing them even a tiny bit, your work is very valuable.

The thing is, outside a few communities like this one, no one really cares about existential risks (even global warming is a political phenomenon for most people rather than a scientific one; other existential risks are movie plots in which blue-collar oil drillers go to space and blow up asteroids).

Comment author: EHeller 24 July 2013 07:09:02AM 4 points [-]

You don't end up permanently, irrevocably unemployed until all the work you can do has been automated away.

And in a world where all work CAN be automated, human service can still exist side by side with it. A robot might be able to cut my hair, but I'd pay a premium to have a person do it because I enjoy the experience (I sometimes pay for barber shaves before job interviews rather than do it myself). Similarly, I'd probably pay a premium for an actual bartender over a barmonkey-type robot in many settings. I pay a premium over Amazon at the nearby bookstore because I enjoy the old medieval-history PhD who runs the shop and his conversations, recommendations, etc. I can imagine a world full of robots where face-to-face service becomes the luxury item.

Comment author: brainoil 28 July 2013 03:30:49AM 2 points [-]

If this happens, then some of the robots will start to look and behave exactly like humans. Robot prostitutes would look like human supermodels. This'll cause more unemployment.

In response to Scope Insensitivity
Comment author: brainoil 26 May 2013 10:43:56AM *  3 points [-]

From Abhijit V. Banerjee and Esther Duflo's Poor Economics:

Researchers gave students $5 to fill out a short survey. They then showed them a flyer and asked them to make a donation to Save the Children, one of the world’s leading charities. There were two different flyers. Some (randomly selected) students were shown this: "Food shortages in Malawi are affecting more than 3 million children; In Zambia, severe rainfall deficits have resulted in a 42% drop in maize production from 2000. As a result, an estimated 3 million Zambians face hunger; Four million Angolans—one third of the population—have been forced to flee their homes; More than 11 million people in Ethiopia need immediate food assistance."

Other students were shown a flyer featuring a picture of a young girl and these words: "Rokia, a 7-year-old girl from Mali, Africa, is desperately poor and faces a threat of severe hunger or even starvation. Her life will be changed for the better as a result of your financial gift. With your support, and the support of other caring sponsors, Save the Children will work with Rokia’s family and other members of the community to help feed her, provide her with education, as well as basic medical care and hygiene education."

The first flyer raised an average of $1.16 from each student. The second flyer, in which the plight of millions became the plight of one, raised $2.83. The students, it seems, were willing to take some responsibility for helping Rokia, but when faced with the scale of the global problem, they felt discouraged.

Some other students, also chosen at random, were shown the same two flyers after being told that people are more likely to donate money to an identifiable victim than when presented with general information. Those shown the first flyer, for Zambia, Angola, and Mali, gave more or less what that flyer had raised without the warning—$1.26. Those shown the second flyer, for Rokia, after this warning gave only $1.36, less than half of what their colleagues had committed without it. Encouraging students to think again prompted them to be less generous to Rokia, but not more generous to everyone else in Mali.

Comment author: brainoil 14 May 2013 04:45:26AM 24 points [-]

Oftentimes, when I'm not in a good mood, I simply decide to be in a good mood, and soon I am in a good mood. It's surprisingly effective. You just have to consciously tell yourself that you decide to be in a good mood and try to be in a good mood. Of course this doesn't work all the time. I'm generally a happy person, so it's perhaps easier for me.

Comment author: earthwormchuck163 06 May 2013 08:17:45AM *  25 points [-]

Mugger: Give me five dollars, and I'll save 3↑↑↑3 lives using my Matrix Powers.

Me: I'm not sure about that.

Mugger: So then, you think the probability I'm telling the truth is on the order of 1/3↑↑↑3?

Me: Actually no. I'm just not sure I care as much about your 3↑↑↑3 simulated people as much as you think I do.

Mugger: This should be good.

Me: There are only something like n = 10^10 neurons in a human brain, and the number of possible states of a human brain is exponential in n. This is stupidly tiny compared to 3↑↑↑3, so most of the lives you're saving will be heavily duplicated. I'm not really sure that I care about duplicates that much.

Mugger: Well, I didn't say they would all be humans. Haven't you read enough sci-fi to know that you should care about all possible sentient life?

Me: Of course. But the same sort of reasoning implies that either there are a lot of duplicates, or else most of the people you are talking about are incomprehensibly large, since there aren't that many small Turing machines to go around. And it's not at all obvious to me that you can describe arbitrarily large minds whose existence I should care about without using up a lot of complexity. More generally, I can't see any way to describe worlds which I care about to a degree that vastly outgrows their complexity. My values are complicated.
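The state-count comparison in the dialogue above can be checked numerically. A rough sketch, taking the dialogue's own assumptions (about 10^10 neurons, each crudely modeled as a binary switch, giving at most 2^(10^10) distinct brain states): even 3↑↑4, the fourth step of the tetration tower, already dwarfs that bound, and 3↑↑↑3 = 3↑↑(3↑↑3) is a tower roughly 7.6 trillion levels high.

```python
import math

def tetration(a, height):
    """Knuth's a ↑↑ height: a power tower of `height` copies of a."""
    result = 1
    for _ in range(height):
        result = a ** result  # builds the tower from the top down
    return result

t3 = tetration(3, 3)  # 3↑↑3 = 3**(3**3) = 3**27

# Generous upper bound on distinct human brain states: 2**(10**10).
# Its number of decimal digits is 10**10 * log10(2), about 3.0e9.
brain_state_digits = 10**10 * math.log10(2)

# 3↑↑4 = 3**(3↑↑3) has about (3↑↑3) * log10(3) digits, roughly 3.6e12,
# so it already exceeds the brain-state bound by orders of magnitude.
t4_digits = t3 * math.log10(3)

print(t3)                               # 7625597484987
print(brain_state_digits < t4_digits)   # True
```

So duplication kicks in long before the mugger's number is reached: any set of 3↑↑↑3 human-scale minds must repeat states astronomically many times over.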

Comment author: brainoil 06 May 2013 12:54:36PM *  2 points [-]

I'm not really sure that I care about duplicates that much.

Didn't you feel sad when Yoona-939 was terminated, or wish all happiness for Sonmi-451?

Comment author: brainoil 02 May 2013 06:23:38AM 1 point [-]

"Take a step back. Look at the bigger picture. That's how you devour a whale. One bite at a time."

-Congressman Frank Underwood in the TV series House of Cards

In response to A Priori
Comment author: brainoil 30 April 2013 09:54:06AM 1 point [-]

How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

No no no. The difference between a priori and a posteriori is where the justification lies. You may be counting on your fingers when you compute 1 + 1. It may be that you wouldn't be able to figure out the answer if someone cut off your fingers. In fact, it may be that you wouldn't be able to understand what 1 means if you didn't have fingers. But the justification for 1 + 1 being 2 is not in your fingers.

So it may be that you are able to observe how your brain operates when you're counting 1 + 1. But even if your brain operated in a different way, 1 + 1 would still be 2. If B is taller than A, and C is taller than B, then C is taller than A. It may be that you're not able to understand this without three pencils. But C being taller than A is a priori knowledge.

Comment author: Qiaochu_Yuan 30 April 2013 06:30:25AM *  2 points [-]

Is this really how you think it works? Do you honestly watch Game of Thrones because it helps to better other people's lives? I'd be surprised. More likely, you start with "I like Game of Thrones" and end up with "it helps me to save the world." I can't read your mind. But that'd be my guess.

That's a reasonable guess, and it's certainly something I have to watch out for. (I don't watch Game of Thrones, but I'm mentally substituting with a show I do watch.) If I genuinely didn't think that watching Game of Thrones was better as measured by my utility function than the alternative upon reflection, I hope I would be able to stop. I've stopped doing various other things this way recently (most recently browsing Tumblr).

You already justified your iPhone when you could have bought a cheap android phone that has pretty much the same features.

This wasn't clear to me at the time of my purchase. My impression from several people I talked to (that I trusted to be reasonably knowledgeable) was that Android is ultimately more powerful but requires more effort and tinkering to be put to use whereas an iPhone can be used out of the box. I'm not much of a power user and I wanted something that just worked. I also had the sense that there were more apps available for the latter than the former.

And, again, the perfect is the enemy of the good. It takes too much time to make optimal decisions, but I can at least try to make better decisions.

P.S. Is there any research done that suggests smartphones make people more productive?

Who needs research? It's pretty clear to me that my smartphone has made me more productive, and that's the question that actually needed answering. (I expect most people get distracted by games, but I adopted a general policy of not downloading games which I have only rarely broken, and the games I do download I don't play very much.)

I'm still not sure I understand what you're getting at with this line of questioning. You seem to think there's something wrong with the way I try to make decisions, which is to attempt to maximize expected utility while recognizing that I have limited time to search the space of possible things to do. What would you suggest as a superior alternative? (The alternative you've presented so far is justifying that you should think about political questions because of voting even though you don't vote.)

Comment author: brainoil 30 April 2013 08:23:58AM 1 point [-]

The intent was to show that asking whether I don't have anything more valuable to do than voting was an unfair question because even those who profess utilitarianism don't always do the things that are most valuable in utilitarian terms. But it seems this strategy won't work with you.
