CarlShulman comments on Should I believe what the SIAI claims? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Crazy in which respect? It seemed to me that those critiques were narrow and mostly talking past Stross. The basic point that space is going to remain much more expensive and less pleasant than expansion on Earth for quite some time, conditioning on no major advances in AI, nanotechnology, biotechnology, etc, is perfectly reasonable. And Stross does so condition.
He has a few lines about it in The Singularity is Near, basically saying that FAI seems very hard (no foolproof solutions available, he says), but that AI will probably be well integrated. I don't think he means "uploads come first, and manage AI after that," as he predicts Turing-Test-passing AIs well before uploads, but he has said things suggesting that those Turing Tests will be incomplete, with the AIs not capable of doing original AI research. Or he may mean that the ramp-up in AI ability will be slow, and that IA will improve our ability to monitor and control AI systems institutionally, aided by non-FAI engineering of AI motivational systems and the like.
Look at his answer for The Singularity:
He doesn't even consider the possibility of trying to nudge it in a good direction. It's either "plan on the assumption that it ain't going to happen", or sit around waiting for AIs to save us.
ETA: The "He" in your second paragraph is Kurzweil, I presume?
That quote could also be interpreted as saying that UFAI is far more likely than FAI.
Thinking that FAI is extremely difficult or unlikely isn't obviously crazy, but Stross isn't just saying "don't bother trying FAI" but rather "don't bother trying anything with the aim of making a good Singularity more likely". The first sentence of his answer, which I neglected to quote, is "Forget it."
Pretty much how I read it. He should acknowledge the attempts to build an FAI, but it seems like a reasonable pessimistic opinion that FAI is too difficult to ever be pulled off successfully before strong AI in general arrives.
Seems like a sensible default stance to me. Since humans exist, we know that a general intelligence can be built out of atoms, and since humans have many obvious flaws as physical computation systems, we know that any successful AGI is likely to end up at least weakly superhuman. There isn't a similarly strong reason to assume an FAI can be built, and the argument for building one seems to be more along the lines that things are likely to go pretty weird and bad for humans if an FAI can't be built but an AGI can.