@Eliezer, you mention that "This hypothesis then has two natural subdivisions:" I suppose you consider the second correct and the first incorrect?
@Eliezer: is the following a real experiment that was actually made or are you hypothesizing?
"The actual experiment which shows that parental grief correlates strongly to the expected reproductive potential of a child of that age in a hunter-gatherer society - not the different reproductive curve in a modern society - does not disturb me."
Thanks Vladimir. Maybe a game like chess could also serve as a model: once you start to see the patterns, you start getting better. By the way, I've noticed that a lot of the errors people make when playing chess can in fact be attributed to biases.
Allan/Eliezer: sorry, I misheard that, my fault.
Eliezer, at 39:38 if I heard correctly you say:
"I have to say I'm the first person who actually ran to the opposite extreme and put the entire burden of rationality on system one fast perceptual intuitive judgement."
Correct me if I'm wrong, but wouldn't that make overcomingbias pointless, since what we do here is mostly on the side of deliberative reasoning? After all, we can't change System 1 that much.
I think it would be interesting if you could write a post contrasting rationality with mere verbal fluency, since a lot of people might fall into the latter trap.
Regarding Obama vs. Bush, I wonder why even rationalists seem to operate under the assumption that the president has the power to make all the important decisions. Even if Obama wanted to, he probably couldn't go against the power elite operating behind the scenes. JFK tried it.
Eliezer, I don't understand the following:
"probably via resolving some other problem in AI that turns out to hinge on the same reasoning process that's generating the confusion"
If you use the same reasoning process again, how can that help? I would have supposed that the confusion could only be resolved by a new reasoning process that provides a key insight.
As for the article, one idea I had was that the AI could have empathy circuits like ours. Yes, its modelling of humans would be restricted, but good enough, I hope. The thing is, would the AI have to be sentient for that to work?
TED talk as clickable link: http://www.ted.com/index.php/talks/barry_schwartz_on_the_paradox_of_choice.html
Eliezer,
what would be the right thing to do regarding our own choices? Should we limit them? Somehow this seems related to the internet, where you always have to choose when to click another link and when to stop reading. Timothy Ferriss also recommends a low-information diet. I'm just brainstorming a bit here.
I don't understand what is so shocking about this story. The lesson seems quite clear: the mouth that you feed today will bite you tomorrow. It's not as if we don't have this in Western culture.