Comment author: Blueberry 31 July 2010 12:06:34AM 0 points [-]

I upvoted the grandparent and parent, because what you said seems right to me. I wish people wouldn't downvote someone asking why e was downvoted.

Comment author: Cameron_Taylor 31 July 2010 01:39:14AM 0 points [-]

Upvoted entire ancestor tree, for similar reasons.

Comment author: Cameron_Taylor 27 July 2010 08:39:44AM 0 points [-]

Some people also think the ability to argue and selectively not comprehend arguments arose due to runaway sexual selection for ability to manipulate and resist manipulation.

Comment author: Cameron_Taylor 28 July 2010 08:05:34AM -2 points [-]

This additional point is controversial even here?

Comment author: Unknowns 27 July 2010 07:07:53AM 0 points [-]

http://www.usatoday.com/news/offbeat/2010-07-13-lottery-winner-texas_N.htm?csp=obinsite

My prior for the probability of winning the lottery by fraud is high enough to settle the question: the woman discussed in the article is cheating.

Does anyone disagree with this?
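The prior-settles-it reasoning can be sketched with Bayes' theorem. All of the numbers below are hypothetical placeholders for illustration (the comment gives no actual figures): a modest prior on fraud completely dominates an astronomically small likelihood of honest repeat wins.

```python
# Hypothetical numbers, purely for illustration -- none are from the comment.
p_fraud = 1e-4                # assumed prior: a given winner is cheating
p_wins_given_fraud = 0.9      # assumed: a cheater produces the observed wins
p_wins_given_honest = 1e-18   # assumed: an honest player wins this many times

# Bayes' theorem: P(fraud | observed wins)
posterior = (p_fraud * p_wins_given_fraud) / (
    p_fraud * p_wins_given_fraud + (1 - p_fraud) * p_wins_given_honest
)
print(posterior)  # essentially 1: the tiny honest-win likelihood is swamped
```

Under any assumptions in this ballpark the posterior is driven almost entirely by the likelihood ratio, which is the sense in which the prior "settles the question."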

Comment author: Cameron_Taylor 27 July 2010 08:55:23AM 0 points [-]

My prior for the probability of winning the lottery by fraud

What's your secret? ;)

Comment author: cousin_it 27 July 2010 08:05:48AM *  10 points [-]

1: Philosophical ability is "almost" universal in mind space. Utility maximizers are a pathological example of an atypical mind.

I wouldn't spend much time thinking about this alternative, because it will probably be true for some ideas of "mind space" and false for others, and I don't believe we have enough information to describe the correct "mind space".

2: Evolution created philosophical ability as a side effect while selecting for something else.

Many people think the ability to argue and comprehend arguments arose due to runaway sexual selection for ability to manipulate and resist manipulation. I'm not sure how to test such an explanation.

Comment author: wedrifid 12 July 2010 07:39:55AM 3 points [-]

My most extreme Anti-Akrasia tactic. Somewhat crude, but extremely effective the couple of times I have used it:

timecave.com is a service that sends emails with a time delay, scheduling them for some point in the future. My use for it is to generate a random password for a forum that is a time sink and have the password emailed to me at a specified future date: in this case, lesswrong.com until 1 Jan. I've duplicated the email on emailalibi.com in case timecave goes down.
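The lockout step could be sketched as follows; the 24-character length and alphanumeric alphabet are arbitrary choices, not from the comment. The idea is to generate a password too strong to guess or remember, set it on the forum account, and let the scheduled email be the only way back in.

```python
import secrets
import string

# Generate a throwaway password that cannot be memorized or guessed.
# Set this as the forum password, then email it to yourself with a
# time-delay service so the account is inaccessible until the email arrives.
alphabet = string.ascii_letters + string.digits
password = ''.join(secrets.choice(alphabet) for _ in range(24))
print(password)
```

Using the `secrets` module rather than `random` matters here only insofar as it guarantees the password is not reproducible from a guessable seed.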

I have real learning to do and have more or less mastered 'one boxing' in counterfactuals (although come to think of it even decision theory hasn't been cropping up much of late). I'll take the chance to remove a procrastination temptation. This way I can save my limited reserves of willpower for areas that don't have such a neat technical solution.

Comment author: Cameron_Taylor 27 July 2010 07:16:37AM 1 point [-]

I'm giving this one a rating of 8. Effective, but not quite bulletproof. At least it provides a significant roadblock before one form of procrastination.

Comment author: SilasBarta 27 July 2010 01:56:41AM *  15 points [-]

What is your definition of philosophy for this article?
Why is it a failing of a highly intelligent mind that it can't "do philosophy"?
Why would a Bayesian EU maximizer necessarily be unable to tell that a computable prior is wrong?
When is Bayesian updating the wrong thing to do?
What should I have learned from your link to Updateless Decision Theory that causes me to suspect that EU maximizing with Bayesian updating on a universal prior is wrong?
Doesn't rationality require identification of one's goals, therefore inheriting the full complexity of value of oneself?
What would count as an example of a metaphilosophical insight?

Comment author: Cameron_Taylor 27 July 2010 07:08:06AM 2 points [-]

What should I have learned from your link to Updateless Decision Theory that causes me to suspect that EU maximizing with Bayesian updating on a universal prior is wrong?

From what I can glean from the UDT descriptions it seems that UDT defines 'updating' to include things that I would prefer to describe as 'naive updating', 'updating wrong' or 'updating the wrong thing'.