Perplexed comments on What I would like the SIAI to publish - Less Wrong
It seems like you're essentially saying, "This argument is correct; anyone who thinks it is wrong is irrational." You could probably do without that framing; the argument is far from as simple as you present it. Specifically, on the last point:
So I agree that there's no reason to assume an upper bound on intelligence, but it seems like you're arguing that hard takeoff is inevitable, which as far as I'm aware has never been shown convincingly.
Furthermore, even if you suppose that Foom is likely, it's not clear where the threshold for Foom is. Could a sub-human-level AI foom? What about a human-level one? Or would it take superhuman intelligence? Do we have good evidence for where the Foom threshold lies?
I think the problems with resolving the Foom debate stem from the fact that "intelligence" is still largely a black box. It's very nice to say that intelligence is an "optimization process", but that is a fake explanation if I've ever seen one, because it fails to explain in any way what is being optimized.
I think you paint in broad strokes. The Foom issue is not resolved.
So when did the goalposts get moved to proving that hard takeoff is inevitable?
The claim that research into FAI theory is useful requires only that it be shown that uFAI might be dangerous. Showing that is pretty much a slam dunk.
The claim that research into FAI theory is urgent requires only that it be shown that hard takeoff might be possible (with a probability > 2% or so).
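To make the structure of that urgency claim concrete, here is a toy expected-value sketch in Python. All of the numbers besides the 2% figure are illustrative assumptions I've made up for the example, not claims from the thread:

```python
# Toy expected-value sketch for the urgency claim above.
# Every number except the 2% takeoff probability is an illustrative assumption.

p_hard_takeoff = 0.02             # the ~2% lower bound mentioned above
p_bad_given_takeoff = 0.5         # assumed chance a takeoff goes badly absent FAI theory
loss_if_bad = 1.0                 # normalize an unrecoverable outcome to a loss of 1
cost_of_research = 1e-6           # assumed research cost on the same normalized scale

expected_loss = p_hard_takeoff * p_bad_given_takeoff * loss_if_bad
# 0.02 * 0.5 * 1.0 = 0.01, i.e. one percent of everything, in expectation.

print(expected_loss)                        # 0.01
print(expected_loss > cost_of_research)     # True under these assumptions
```

The point is only structural: as long as the stakes are near-total, even a small probability term keeps the expected loss far above any plausible cost of doing the research now.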
And, as the nightmare scenarios of de Garis suggest, even if the fastest possible takeoff turns out to take years to accomplish, such a soft but reckless takeoff may still be difficult to stop short of war.
Assuming there aren't better avenues for ensuring a positive hard takeoff.
Good point. Certainly the research strategy that SIAI currently seems to be pursuing is not the only possible approach to Friendly AI, and FAI is not the only approach to human-value-positive AI. I would like to see more attention paid to a balance-of-power approach: relying on AIs to monitor other AIs for incipient megalomania.
Calls to slow down, to not publish, and to not fund are common in the name of friendliness.
However, unless such measures are internationally coordinated, a highly likely effect is to ensure that superintelligence is developed elsewhere.
What is needed most, IMO, is for good researchers to be first. So advising good researchers to slow down in the name of safety is probably one of the worst possible things that spectators can do.
It doesn't even seem hard to prevent. Topple civilization, for example: humans have managed that regularly thus far, and it is entirely possible that we would never recover sufficiently to construct a hard takeoff scenario if we nuked ourselves back to another dark age.