Comment author: michael_vassar3 17 July 2008 03:39:10PM 13 points

OK Eliezer, answer the easy question "why are flowers beautiful" and dodge the hard one about fugues, rainbows, stars, sunsets, and iridescent beetles, as well as the beauty of difficult to catch gazelles, the ugliness of easy to catch pigs, and the ugliness of tasty and nutritious bottom-dwelling fish.

Comment author: omalleyt 22 September 2016 04:56:22AM 0 points

Most things humans like are super-colorful. Colorful things were probably a good sign of fertile land or some other desirable thing. As to the stars, don't you think the guy who looks up every night and likes what he sees is gonna have a better, more productive life than the guy who looks up and grimaces?

Comment author: omalleyt 18 September 2016 06:27:25PM 0 points

Eliezer is jousting with Immanuel Kant here, who believed that our rationality would lead us to a supreme categorical imperative, i.e. a bunch of "ought" statements with which everyone with a sufficiently advanced ability to reason would agree.

Kant is of course less than compelling. His "treat people as ends, not just means" is cryptic enough to sound cool while remaining meaningless. If interpreted to mean that one should weigh the desires of all rational minds equally (the end result of contemplating both passive and active actions as influencing the fulfillment of the desires of others), then it dissolves into utilitarianism.

In response to Fake Selfishness
Comment author: omalleyt 18 September 2016 01:31:56AM 0 points

When we weigh options in our mind, we pick the one that yields the cocktail of chemicals/neurotransmitters that induces the strongest positive response in our reward center. Or rather, the cocktail of chemicals/neurotransmitters that elicits the strongest positive response is able to pass its signals through to the motor neurons.

A desire to be moral, a desire to avoid pain, a desire to protect kin, all release chemicals.

Seen in this light, the phrase "everything one does is selfish" appears to reduce to "all choices are weighed through one's own neural algorithm." Which is so obvious as to be trivial. The only way to get around this would be to detach your motor neurons from your reward center, and hook them up to a committee of, say, ten other people's reward centers, with the action that receives the highest average response being performed. And the detachment is crucial. You can't just willingly abide by the committee's decision, because your choice to obey would still be passing through your own neural algorithm.

Is this what people mean when they boldly assert that everything a person does is selfish? I don't think so. I think, when looked at like this, the question dissolves.

Comment author: DanielLC 29 December 2009 03:26:25AM 18 points

You're ignoring the probability of succeeding at something else. If you're still doing this, it's zero. If you give up, it's not.

Of course, that can also be considered a cost of failure, in which case you didn't ignore it.

Edit: This is equivalent to counting opportunity cost as a cost of failure that's not a cost of giving up, so maybe you weren't ignoring it.

Comment author: omalleyt 11 September 2016 04:25:28PM 0 points

In addition, as Eliezer's earlier post about the math proof shows, if the original reason that led you to believe you could do something was shown to be false, you should almost certainly give up. It's very unlikely you were right for the wrong reasons. If, knowing what you know now, you would never have tried, then you should probably stop.

Comment author: omalleyt 08 September 2016 06:59:04PM 1 point

I'm going to go off the assumption that this post is deliberate satire, and say it's brilliant.

"Even if it's not true, I'm going to decide to believe that people can't sincerely self-deceive."

In response to The Fallacy of Gray
Comment author: James_Bach 07 January 2008 08:31:31AM 0 points

It sounds like you are trying to rescue induction from Hume's argument that it has no basis in logic. "The future will be like the past because in the past the future was like the past" is a circular argument. He was the first to really make that point. Immanuel Kant spent years spinning elaborate philosophy to try to defeat that argument. Immanuel Kant, like lots of people, had a deep need for universal closure.

An easier way to go is to overcome your need for universal closure.

Induction is not logically justified, but you can make a different argument. You could point out that creatures who ignore the apparent patterns in nature tend to die pretty quick. Induction is a behavior that seems to help us stay alive. That's pretty good. That's why people can't just wave their hands and claim reality is whatever anyone believes-- if they do that, they will discover that acting on that belief won't necessarily, say, win them the New York lottery.

My concern with your argument is, again, structural. You are talking about "gray", and then you link that to probability. Wait a minute, that oversimplifies the metaphor. You present the idea of gray as a one-dimensional quantity, similar to probability. But when people invoke "gray" in rhetoric they are simply trying to say that there are potentially many ways to see something, many ways to understand and analyze it. It's not a one-dimensional gray, it's a many-dimensional gray. You can't reduce that to probability, in any actionable way, without specifying your model.

Here's the tactic I use when I'm trying to stand up for a distinction that I want other people to accept (notice that I don't need to invoke "reality" when I say that, since only theories of reality are available to me). I ask them to specify in what way the issue is gray. Let's distinguish between "my spider senses are telling me to be cautious" and "I can think of five specific factors that must be included in a competent analysis. Here they are..."

In other words, don't deny the gray, explore it.

A second tactic I use is to talk about the practical implications of acting-as-if a fact is certain: "I know that nothing can be known for sure, but if we can agree, for the moment, that X, Y, and Z are 'true' then look what we can do... Doesn't that seem nice?"

I think you can get what you want without ridiculing people who don't share your precise worldview, if that sort of thing matters to you.

Comment author: omalleyt 06 September 2016 08:20:25PM 0 points

But let's really look at the statement "The future will be like the past because in the past the future was like the past."

By "like the past," do we mean that it obeys the same physical laws?

If we do, then I think what we're trying to estimate is the chance, over a specified time frame, that the physical laws will change.

The problem then reduces to the problem of drawing red and blue marbles out of a hat. We can look at all the available time frames that we have "drawn" up to this point and get a confidence estimate on how likely it is that the physical laws will change over the next "draw" of the time frame.
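One standard way to formalize this marble-drawing estimate is Laplace's rule of succession (the comment doesn't name a method, so this is just one possible sketch; the function name and the 100-draw figure are illustrative):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    # Posterior probability that the next draw is a "success",
    # given `successes` successes in `trials` independent draws
    # and a uniform prior on the unknown success rate.
    return Fraction(successes + 1, trials + 2)

# Treat each past time frame as a draw in which the laws "held".
# After, say, 100 such draws with no observed change:
p_hold = rule_of_succession(100, 100)   # 101/102
p_change = 1 - p_hold                   # 1/102
```

Note that the estimate never reaches certainty: no matter how many uneventful draws we accumulate, the probability of a change on the next draw stays strictly positive.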

In response to Bayesian Judo
Comment author: Silas 31 July 2007 02:49:57PM 22 points

A few questions and comments:

1) What kind of dinner party was this? It's great to expose non-rigorous beliefs, but was that the right place to show off your superiority? It seems you came off as having some inferiority complex, though obviously I wasn't there. I know that if I'm at a party (of most types), for example, my first goal ain't exactly to win philosophical arguments ...

2) Why did you have to involve Aumann's theorem? You caught him in a contradiction. The question of whether people can agree to disagree, at least it seems to me, is an unnecessary distraction. And for all he knows, you could just be making that up to intimidate him. And Aumann's Theorem certainly doesn't imply that, at any given moment, rectifying that particularly inconsistency is an optimal use of someone's time.

3) It seems what he was really trying to say was something along the lines of "while you could make an intelligence, its emotions would not be real the way humans' are". ("Submarines aren't really swimming.") I probably would have at least attempted to verify that that's what he meant rather than latching onto the most ridiculous meaning I could find.

4) I've had the same experience with people who fervently hold beliefs but don't consider tests that could falsify them. In my case, it's usually with people who insist that the true rate of inflation in the US is ~12%, all the time. I always ask, "so what basket of commodity futures can I buy that consistently makes 12% nominal?"

In response to comment by Silas on Bayesian Judo
Comment author: omalleyt 02 September 2016 06:14:32PM 0 points

To point 4 and inflation, the trick is to invest not in commodity futures (where the deflationary pressures of improved production technology cancel some of the inflationary pressures of currency devaluation) but rather in assets. You can invest in the S&P 500 and achieve ~11% nominal returns. Now whether asset prices are relevant to "inflation" depends on whether you are trying to answer the question "how many apples could I buy for a dollar in 1960 versus today?" or the question "how many apples could I buy for a dollar today if they were produced with the same inputs and technological process as in 1960?"
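The arithmetic connecting nominal returns to an inflation claim isn't spelled out above; a minimal sketch using the Fisher relation (the function name and sample figures are my own assumptions, not from the thread):

```python
def real_return(nominal: float, inflation: float) -> float:
    # Fisher relation: the real (inflation-adjusted) return implied
    # by a nominal return and an inflation rate.
    return (1 + nominal) / (1 + inflation) - 1

# If the S&P 500 returns ~11% nominal while true inflation were ~12%,
# the implied real return is slightly negative:
r = real_return(0.11, 0.12)
```

This is Silas's point in compact form: a persistent 12% inflation rate would make an ~11% nominal return a losing proposition in real terms, which is hard to square with long-run equity returns actually building wealth.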