All of isomic's Comments + Replies

I wonder if the initial 67% in favor of x-risk was less a reflection of the audience's opinion on AI specifically than a general application of the heuristic "<X fancy new technology> = scary, needs regulation."

(That is, if you replaced AI with any other technology that general audiences are vaguely aware of but don't have a strong opinion on, such as CRISPR or nanotech, would they default to about the same number?)

Also, I would guess that hearing two groups of roughly equally smart-sounding people debate a topic one has no strong opinion on tends to r... (read more)

2 Karl von Wendt
That's a good point, and it is supported by the high share (92%) who were prepared to change their minds.

There seems to be a lack of emphasis in this market on outcomes where alignment is not solved, yet humanity turns out fine anyway. Based on an Outside View perspective (where we ignore any specific arguments about AI and just treat it like any other technology with a lot of hype), wouldn't one expect this to be the default outcome?

Take the following general heuristics:

  • If a problem is hard, it probably won't be solved on the first try.
  • If a technology gets a lot of hype, people will think that it's the most important thing in the world even if it isn't. At m... (read more)
9 gwd
Part of the problem with these two is that whether an apocalypse happens or not often depends on whether people took the risk of it happening seriously. We absolutely could have had a nuclear holocaust in the 70's and 80's; one of the reasons we didn't is that people took the risk seriously and took steps to avert it. And, of course, whether a time slice is the most important in history will, in retrospect, depend on whether you actually had an apocalypse. The 70's would have seemed a lot more momentous if we had launched all of our nuclear warheads at each other.

For my part, my bet would be on something like:

But more specifically:

P. Red-teams evaluating early AGIs demonstrate the risks of non-alignment in a very vivid way: they demonstrate, in simulation, dozens of ways in which the AGI would try to destroy humanity. This has an effect on world leaders similar to observing nuclear testing: it scares everyone into realizing the risk, and everyone stops improving AGI capabilities until they've figured out how to keep it from killing everyone.
1 Noosphere89
I basically suspect that this is the best argument I've seen for why AI alignment doesn't matter, the best argument for why business as usual would continue, and the best argument against Holden Karnofsky's series on why we live in a pivotal time.