I'm increasingly impressed by the power of Zach Weiner's comics to demonstrate in a few images why hard problems are hard. It would be a vast task, but perhaps it would be useful to create an index of such problem-demonstrating comics to add to the Wiki, giving us something to point newbies at that would be less intimidating than formal Sequence postings. I get the impression that a common hurdle is simply getting people to accept that problems of AI (and simulation, ethics, what have you) are actually difficult.
Fair enough, but the point seems to be that the woman didn't modify her preferences sufficiently, or didn't fully think through the modifications. In effect, she was bluffing, even if she didn't realise it.