Comments

Writing correct code given a specification is the relatively easy part of software engineering. The hard part is deciding what you need that code to actually do. In other words, "requirements" - your dumbass customer has only the vaguest idea what they want and contradicts themselves half the time, but they still expect you to read their mind and give them something they're going to be happy with.

Possibly one of the few viable responses to a hostile AI breakout onto the general Internet would be to detonate several nuclear weapons in space, causing huge EMP blasts that would fry most of the world's power grid and electronic infrastructure, taking the world back to the 1850s until it could be repaired. (Possible AI control measure: make sure that "critical" computing and power infrastructure is not hardened against EMP attack, just in case humanity ever does find itself needing to "pull the plug" on the entire goddamn world.)

Hopefully whichever of Russia, China, and the United States didn't launch the nukes would be understanding. It might make sense for the diplomats to get this kind of thing straightened out before we get closer to the point where someone might actually have to do it.

(This is because eventually the people who would rather fight do lose an election, and then they do fight a civil war. For example, the American South fought a civil war rather than accept Lincoln as their President.)

Representative democracy can only last so long as people prefer losing an election to fighting a civil war.

He might also argue: "even if you can match a human brain with a billion-dollar supercomputer, it still takes a billion-dollar supercomputer to run your AI, and you can make, train, and hire an awful lot of humans for a billion dollars."
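For what it's worth, the arithmetic behind that argument is easy to run. A minimal sketch (the $100k/year fully-loaded cost per person and the five-year hardware lifetime are my own illustrative assumptions, not his numbers):

```python
# Back-of-envelope: what a billion dollars buys in humans vs. one supercomputer.
# All figures are illustrative assumptions, not measurements.

SUPERCOMPUTER_COST = 1e9      # assumed: one brain-matching machine, as in his argument
COST_PER_PERSON_YEAR = 1e5    # assumed: ~$100k/year fully-loaded cost per person
HARDWARE_LIFETIME_YEARS = 5   # assumed: useful life of the machine before replacement

person_years = SUPERCOMPUTER_COST / COST_PER_PERSON_YEAR
people_for_lifetime = person_years / HARDWARE_LIFETIME_YEARS

print(f"{person_years:,.0f} person-years of labor, "
      f"or ~{people_for_lifetime:,.0f} people employed for the machine's "
      f"{HARDWARE_LIFETIME_YEARS}-year life")
```

Under those assumptions, one machine costs the same as roughly 2,000 people working for its whole lifetime - which is the "awful lot of humans" he's pointing at.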

Because there were enough people selling for prices lower than $40 to satisfy the demand for greater fools?

Also, stocks can be sold short if the price goes too high.

My father thinks that ASI is going to be impractical to achieve with silicon CMOS chips because Moore's law is eventually going to hit fundamental limits - such as the thickness of individual atoms - and creating it would end up "requiring a supercomputer the size of the Empire State Building and consuming as much electricity as all of New York City".

Needless to say, he has very long timelines for generally superhuman AGI. He doesn't rule out that another computing technology could replace silicon CMOS; he just doesn't think ASI would be practical unless that happens.

My father is usually a very smart and rational person (he is a retired professor of electrical engineering), and he loves arguing. I suspect that he is seriously overestimating the computing hardware it would take to match a human brain. Would anyone here be interested in talking to him about it? Let me know and I'll put you in touch.

Update: My father later backpedaled and said he was mostly making educated guesses on limited information, that he knows that he really doesn't know very much about current AI, and isn't interested enough to talk to strangers online - he's in his 70s and figures that if AI does eventually destroy the world, it probably won't be in his own lifetime. :/
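For anyone curious why I think the Empire State Building estimate is high, here's a rough back-of-envelope sketch. Every figure in it is an assumption: the brain number is one commonly cited order-of-magnitude estimate (serious estimates span several orders of magnitude), and the accelerator numbers are approximate specs for a current datacenter GPU:

```python
# Back-of-envelope: hardware to match one human brain's raw compute.
# Every constant is an assumed order-of-magnitude figure, not a fact.

BRAIN_FLOPS = 1e15            # assumed: ~1e15 FLOP/s for a brain (estimates range ~1e13-1e17)
ACCELERATOR_FLOPS = 1e15      # assumed: one modern GPU at low precision
ACCELERATOR_POWER_W = 700.0   # assumed: power draw of one such GPU
NYC_AVG_LOAD_W = 1e10         # assumed: ~10 GW average electric load for New York City

accelerators = BRAIN_FLOPS / ACCELERATOR_FLOPS
power_w = accelerators * ACCELERATOR_POWER_W

print(f"Accelerators needed: {accelerators:,.0f}")
print(f"Power draw: {power_w / 1e3:.1f} kW")
print(f"Fraction of NYC's electric load: {power_w / NYC_AVG_LOAD_W:.1e}")
```

Under these assumptions the raw compute fits in a single server, not a skyscraper, and even if the true brain figure is 100x higher, it's a few racks drawing well under a megawatt. The conclusion is only as good as the assumed constants, of course - which is exactly the disagreement I'd want someone to hash out with him.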
