I certainly hope the next survey will ask how many of those saying they agree or disagree with it actually understand it.
Since our certainty was given as a percentage, none of us said we agreed or disagreed with it in the survey, unless you define "agree" as certainty above 50% and "disagree" as certainty below 50%.
Or are you saying that we should default to 50% in all cases where we aren't scientifically qualified to answer on our own? That has obvious problems.
"So ask the question next survey. I do, however, strongly suspect they're expressing an opinion on something they don't actually understand"
That's like asking people to explain how consciousness works before they express their belief in the existence of brains, or their disbelief in the existence of ghosts.
My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend their days building models, analyzing data, and generally solving gritty engineering problems. But the SIAI view conveniently says this is all very dangerous, and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.