Like everyone, I've had AI safety on my mind a lot lately. I was thinking about how most problems caused by intelligence in the world to date seem to have been solved by, or at least have the potential to be solved with, more intelligence.

Some problems can of course be massively lethal, but that doesn't seem true of the problems most applied AI safety work today seeks to avoid. Rather, the prevalent approach seems to be safety by avoiding even small risks - like not letting a child play for fear it might scratch its knee. In taking this approach, we'll almost certainly limit our ability to access and leverage this intelligence to solve many problems.

I was curious whether there are good real-world examples from other areas where this hasn't been the case: what meaningful problems were caused by intelligence but couldn't be solved with more intelligence? A problem should still count as solvable even if the solution was never actually pursued - for economic reasons, etc.

For example, intelligence caused excess carbon emissions - we leveraged it to build industry at such a scale that we emitted far too much - but intelligence will also almost certainly be leveraged to solve the problem, via a variety of human-invented solutions and counter-actions.

Are there examples where this wasn’t the case historically?


Suppose that some new technology is useful but creates bad effects, and we say that the bad effects are the "problem" caused by the intelligence used to create that technology.  Then suppose that further intelligence-driven technological development doesn't give us any way to directly "fix" the problem, but it does give us a better alternative, which has the same uses and no bad effects (or less bad ones), and people just drop the first technology in favor of the second.  Does that count as "solving" with intelligence?

Suppose that there is no second technology, but that, upon further analysis, the bad effects of the first technology turn out worse than initially believed, and then people decide the first technology isn't worth the downsides and drop it.  Does that count as "solving" with intelligence?

Suppose that there is no second technology—yet, anyway.  There might be in the future, who knows.  Will it ever be practical to say that intelligence can't solve a problem?  (The main one that comes to mind is "heat death of the universe", due to the second law of thermodynamics, but that problem wasn't created by intelligence.)

My best answer at the moment is, "Problem: people using their intelligence to figure out how to benefit themselves at the expense of others in net-negative ways".  Intelligence yields approaches to addressing many forms of this problem, but plenty of them are far from what I'd call solved.

Definitely true that many problems are far from solved - my question was more whether there are areas where there isn't even a path to a solution given more resources/attention/intelligence.