Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: SecondWind 07 September 2015 07:29:05PM 2 points [-]

"Local rationalist learns to beat akrasia using this one weird trick!"

Comment author: Lumifer 19 November 2014 03:34:10AM 4 points [-]

Now consider a similar-sounding stereotype: "Men are physically stronger than women". Think that's fixable by different expectations?

While some stereotypes reflect cultural expectations, some reflect biological reality.

Comment author: SecondWind 09 June 2015 09:14:48PM 0 points [-]

Strength is determined by biology and behavior; the stereotype reflects both biological reality and cultural expectations. Note that boys are/were expected to be stronger than girls even before puberty actually creates a meaningful biological gap...

Comment author: RomeoStevens 04 April 2015 09:45:14AM 4 points [-]

Largest effects with the smallest error bars around them: Processed meats bad, fruits and vegetables good, fish and nuts good.

Everything else is significantly smaller effect size or large error bars AFAIK.

Comment author: SecondWind 06 April 2015 01:17:18AM 1 point [-]

Can we safely tack "processed sugar bad" onto that list?

Comment author: SecondWind 18 February 2014 06:35:51AM 0 points [-]

Me, 60%.

Comment author: SilasBarta 18 May 2013 07:55:41PM 0 points [-]

Well, yes, if you make the test non-adaptive, it's (exponentially) easier to pass. For example, if you limit the "conversation" to a game of chess, it's already possible. But those aren't the "full" Turing Test; they're domain-specific variants. Your criticism would only apply to the latter.

Comment author: SecondWind 26 May 2013 08:53:04AM 2 points [-]

Are AI players actually indistinguishable from humans in Chess? Could an interrogator not pick out consistent stylistic differences between equally-ranked human and AI players?

Comment author: shminux 28 April 2013 03:26:08AM 1 point [-]

...d-do I get the prize?

You have, in the local currency.

So, you are saying that free will is an illusion due to our limited predictive power?

Comment author: SecondWind 19 May 2013 07:10:35AM 0 points [-]


If we perfectly understood the decision-making process and all its inputs, there'd be no black box left to label 'free will.' If instead we could perfectly predict the outcomes (but not the internals) of a person's cognitive algorithms... so we know, but don't know how we know... I'm not sure. That would seem to invite mysterious reasoning to explain how we know, for which 'free will' seems unfitting as a mysterious answer.

That scenario probably depends on how it feels to perform the inerrant prediction of cognitive outcomes, and especially how it feels to turn that inerrant predictor on the self.

Comment author: Karl_Smith 03 March 2010 06:20:12PM *  4 points [-]

So the easy answers might be:

Ben Bernanke

Mark Gertler

Michael Woodford

Greg Mankiw

It's not clear to me why macroeconomists are rightly subject to such criticism. To me it's like asking a mathematician, "If you're so good at logical reasoning, why didn't you create the next killer app?"

Understanding how the economy works and applying that knowledge to a particular task are completely different.

Comment author: SecondWind 19 May 2013 06:03:38AM *  1 point [-]

"If you're so good at logical reasoning, why didn't you create the next killer app?"

'Designing the next killer app' seems to rely heavily on predicting what people will want, which is many steps and a lot of knowledge away from logical reasoning.

Comment author: Tom_McCabe2 18 September 2008 02:42:49AM 6 points [-]

"And I wonder if that advice will turn out not to help most people, until they've personally blown off their own foot, saying to themselves all the while, correctly, "Clearly I'm winning this argument.""

I fell into this pattern for quite a while. My basic conception was that, if everyone presented their ideas and argued about them, the best ideas would win. Hence, arguing was beneficial for both me and the people on transhumanist forums: we both threw out mistaken ideas and accepted correct ones. Eliezer_2006 even seemed to support my position, with Virtue #5. It never really occurred to me that the best of everyone's ideas might not be good enough.

"It is Nature that I am facing off against, who does not match Her problems to your skill, who is not obliged to offer you a fair chance to win in return for a diligent effort, who does not care if you are the best who ever lived, if you are not good enough."

Perhaps we should create an online database of open problems, if one doesn't exist already. There are several precedents (http://en.wikipedia.org/wiki/Hilbert%27s_problems). So far as I know, if one wishes to attack open problems in physics/chemistry/biology/comp. sci./FAI, the main courses of action are to attack famous problems (where you're expected to fail and don't feel bad if you do), or to read the educational literature (where the level of problems is pre-matched to the level of the material).

Comment author: SecondWind 17 May 2013 07:15:01AM 1 point [-]

Seems like idea-fights between humans result in vastly more effort put into the fight than into the idea.

Comment author: Doug_S. 18 December 2008 02:38:47AM 1 point [-]

The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative: The eudaimonic life is the one that is as pleasurable as possible. So even happiness attained through drugs is good? Yes, in fact: Pearce's motto is "Better Living Through Chemistry".

Well, it's definitely better than the alternative. We don't necessarily want to build Jupiter-sized blobs of orgasmium, but getting rid of misery would be a big step in the right direction. Pleasure and happiness aren't always good, but misery and pain are almost always bad. Getting rid of most misery seems like a necessary, but not sufficient, condition for Paradise.

I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.

You know, I wouldn't be surprised, considering that you can fit most of physics on a T-shirt. (Isn't God written in Lisp, though?)

Comment author: SecondWind 16 May 2013 06:30:03AM 1 point [-]

Twenty lines of close paren.

Comment author: vroman 19 March 2009 01:57:01AM 1 point [-]

I read and understood the Least Convenient Possible World post. Given that, let me rephrase your scenario slightly:

If every winner of a certain lottery receives $X * 300 million, a ticket costs $X, the chances of winning are 1 in 250 million, you can only buy one ticket, and $X represents an amount of money you would be uncomfortable to lose, would you buy that ticket?

Answer: no. If the ticket price crosses a certain threshold, then I become risk-averse. If it were $1 or some other relatively inconsequential amount of money, then I would be rationally compelled to buy the nearly-sure-loss ticket.
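The expected value in the rephrased scenario can be checked directly (a minimal sketch, using only the odds and payout stated above):

```python
# Expected value of one ticket: payout is 300 million times the ticket
# price x, and the chance of winning is 1 in 250 million.
def expected_value(ticket_price):
    p_win = 1 / 250_000_000
    payout = 300_000_000 * ticket_price
    return p_win * payout - ticket_price

# For any positive price the expected value works out to 0.2 * price,
# so the bet is favorable in expectation regardless of the ticket cost.
print(round(expected_value(1.0), 9))
```

This is why the scenario bites: the expected value is positive at every price, so declining at high prices has to come from risk aversion rather than from the odds themselves.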

Comment author: SecondWind 07 May 2013 03:04:38PM 0 points [-]

If you'd be rationally compelled to buy one low-cost ticket, then after you've bought the ticket you should be rationally compelled to buy another ticket. And then another.

Sure, at each step you're approaching the possibility with one fewer dollar, but by your phrasing, the number of dollars you have does not influence your decision to buy a ticket (unless you're broke enough that $1 is no longer a relatively inconsequential amount of money). This method seems to require an injunction against iteration.
