Representative democracy can only last so long as people prefer losing an election to fighting a civil war.
He might also argue, "even if you can match a human brain with a billion-dollar supercomputer, it still takes a billion-dollar supercomputer to run your AI, and you can make, train, and hire an awful lot of humans for a billion dollars."
Because there were enough people selling for prices lower than $40 to satisfy the demand for greater fools?
Also, stocks can be sold short if the price goes too high.
My father thinks that ASI is going to be impractical to achieve with silicon CMOS chips because Moore's law is eventually going to hit fundamental limits - such as the thickness of individual atoms - and the hardware required to create it would end up "requiring a supercomputer the size of the Empire State Building and consuming as much electricity as all of New York City".
Needless to say, he has very long timelines for generally superhuman AGI. He doesn't rule out that another computing technology could replace silicon CMOS, he just doesn't think it would be practical unless that happens.
My father is usually a very smart and rational person (he is a retired professor of electrical engineering) and he loves arguing, and I suspect that he is seriously overestimating the computing hardware it would take to match a human brain. Would anyone here be interested in talking to him about it? Let me know and I'll put you in touch.
Update: My father later backpedaled and said he was mostly making educated guesses on limited information, that he knows that he really doesn't know very much about current AI, and isn't interested enough to talk to strangers online - he's in his 70s and if AI does eventually destroy the world it probably won't be in his own lifetime. :/
(yes, I know you were being ironic)
Well, trade does have a more zero-sum character when both sides of the trade have the same preferences, but if you can credibly claim to have different preferences, you're also in a better position to convince the person on the other side of the trade that you're not trying to offer them a bad deal. (For example, if you're selling stock because you want to spend the money, you don't care if you disagree with someone about what the stock will be worth in the future; you just want to sell it for the best offer you can get right now.)
I think you missed the point of the Laffy Taffy example. He got the flavor he didn't like because he'd been systematically eating the ones he did like while leaving the flavor he didn't like in the bowl. (Or his friend wasn't actually picking at random.)
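The selection effect is easy to demonstrate with a quick simulation - a minimal sketch (the flavor names and counts are made up for illustration):

```python
import random

# Hypothetical bowl: equal numbers of four flavors to start.
bowl = ["cherry"] * 10 + ["grape"] * 10 + ["apple"] * 10 + ["banana"] * 10

# Systematically eat the liked flavors, leaving the disliked one (banana).
bowl = [taffy for taffy in bowl if taffy == "banana"]

# A genuinely random draw from what's left can only be the disliked flavor.
print(random.choice(bowl))  # always "banana"
```

The "random" pick at the end isn't unlucky at all; the earlier non-random eating guaranteed the outcome.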