Now consider a similar-sounding stereotype: "Men are physically stronger than women". Think that's fixable by different expectations?
While some stereotypes reflect cultural expectations, some reflect biological reality.
Strength is determined by biology and behavior; the stereotype reflects both biological reality and cultural expectations. Note that boys are/were expected to be stronger than girls even before puberty actually creates a meaningful biological gap...
Largest effects with the smallest error bars around them: Processed meats bad, fruits and vegetables good, fish and nuts good.
Everything else is significantly smaller effect size or large error bars AFAIK.
Can we safely tack "processed sugar bad" onto that list?
Me, 60%.
Well, yes, if you make the test non-adaptive, it's (exponentially) easier to pass. For example, if you limit the "conversation" to a game of chess, it's already possible. But those aren't the "full" Turing Test; they're domain-specific variants. Your criticism would only apply to the latter.
Are AI players actually indistinguishable from humans in Chess? Could an interrogator not pick out consistent stylistic differences between equally-ranked human and AI players?
...d-do I get the prize?
You have, in the local currency.
So, you are saying that free will is an illusion due to our limited predictive power?
...hmm.
If we perfectly understood the decision-making process and all its inputs, there'd be no black box left to label 'free will.' If instead we could perfectly predict the outcomes (but not the internals) of a person's cognitive algorithms... so we know, but don't know how we know... I'm not sure. That would seem to invite mysterious reasoning to explain how we know, for which 'free will' seems unfitting as a mysterious answer.
That scenario probably depends on how it feels to perform the inerrant prediction of cognitive outcomes, and especially how it feels to turn that inerrant predictor on the self.
So the easy answers might be:
Ben Bernanke
Mark Gertler
Michael Woodford
Greg Mankiw
It's not clear to me why macroeconomists are rightly subject to such criticism. To me it's like asking a mathematician, "If you're so good at logical reasoning, why didn't you create the next killer app?"
Understanding how the economy works and applying that knowledge to a particular task are completely different.
"If you're so good at logical reasoning why didn't you create the next killer app"
'Designing the next killer app' seems to rely heavily on predicting what people will want, which is many steps and a lot of knowledge away from logical reasoning.
"And I wonder if that advice will turn out not to help most people, until they've personally blown off their own foot, saying to themselves all the while, correctly, "Clearly I'm winning this argument.""
I fell into this pattern for quite a while. My basic conception was that, if everyone presented their ideas and argued about them, the best ideas would win. Hence, arguing was beneficial for both me and the people on transhumanist forums- we both threw out mistaken ideas and accepted correct ones. Eliezer_2006 even seemed to support my position, with Virtue #5. It never really occurred to me that the best of everyone's ideas might not be good enough.
"It is Nature that I am facing off against, who does not match Her problems to your skill, who is not obliged to offer you a fair chance to win in return for a diligent effort, who does not care if you are the best who ever lived, if you are not good enough."
Perhaps we should create an online database of open problems, if one doesn't exist already. There are several precedents (http://en.wikipedia.org/wiki/Hilbert%27s_problems). So far as I know, if one wishes to attack open problems in physics/chemistry/biology/comp. sci./FAI, the main courses of action are to attack famous problems (where you're expected to fail and don't feel bad if you do), or to read the educational literature (where the level of problems is pre-matched to the level of the material).
Seems like idea-fights between humans result in vastly more effort put into the fight than into the idea.
The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative: The eudaimonic life is the one that is as pleasurable as possible. So even happiness attained through drugs is good? Yes, in fact: Pearce's motto is "Better Living Through Chemistry".
Well, it's definitely better than the alternative. We don't necessarily want to build Jupiter-sized blobs of orgasmium, but getting rid of misery would be a big step in the right direction. Pleasure and happiness aren't always good, but misery and pain are almost always bad. Getting rid of most misery seems like a necessary, but not sufficient, condition for Paradise.
I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.
You know, I wouldn't be surprised, considering that you can fit most of physics on a T-shirt. (Isn't God written in Lisp, though?)
I read and understood the Least Convenient Possible World post. Given that, let me rephrase your scenario slightly:
If every winner of a certain lottery receives $X * 300 million, a ticket costs $X, the chances of winning are 1 in 250 million, you can only buy one ticket, and $X represents an amount of money you would be uncomfortable losing, would you buy that ticket?
My answer: no. If the ticket price crosses a certain threshold, then I become risk-averse. If it were $1 or some other relatively inconsequential amount of money, then I would be rationally compelled to buy the nearly-sure-loss ticket.
If you'd be rationally compelled to buy one low-cost ticket, then after you've bought the ticket you should be rationally compelled to buy another ticket. And then another.
Sure, at each step you're approaching the possibility with one fewer dollar, but by your phrasing, the number of dollars you have does not influence your decision to buy a ticket (unless you're broke enough that $1 is no longer a relatively inconsequential amount of money). This method seems to require an injunction against iteration.
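The arithmetic behind the "rationally compelled" claim can be sketched quickly; this is a minimal illustration using the hypothetical numbers from the scenario above (payout of X * 300 million at 1-in-250-million odds), not a claim about any real lottery:

```python
def expected_profit(ticket_price):
    """Net expected gain from one ticket in the hypothetical lottery above."""
    p_win = 1 / 250_000_000
    payout = ticket_price * 300_000_000
    # Expected winnings minus the cost of the ticket
    return p_win * payout - ticket_price

# For any price X, the expected profit works out to 0.2 * X -- always
# positive, which is why a risk-neutral buyer "should" keep buying tickets
# until the marginal dollar stops being inconsequential.
print(expected_profit(1.0))  # approximately 0.2
```

Note that the positive expected value is independent of the ticket price, which is exactly why the argument iterates: nothing in the expected-value calculation alone tells you when to stop.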
"Local rationalist learns to beat akrasia using this one weird trick!"