I am emailing experts in order to raise academic awareness of risks from AI and to gauge how those risks are perceived. Below are some questions I am going to ask. Please help refine the questions or suggest new and better ones.
(Thanks go to paulfchristiano, Steve Rayhawk and Manfred.)
Q1: Assuming beneficial political and economic development, and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans at science, mathematics, engineering and programming?
Q2: Once we build AI that is roughly as good as humans at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?
Q3: Do you expect artificial intelligence to ever overwhelmingly outperform humans at typical academic research, in the way that it may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?
Q4: What probability do you assign to the possibility that an AI with initially (professional) human-level competence at general reasoning (including science, mathematics, engineering and programming) will self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?
Q5: How important is it to figure out how to make superhuman AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at general reasoning (including science, mathematics, engineering and programming) to undergo radical self-modification?
Q6: What probability do you assign to the possibility of human extinction as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)?
If there were such a proof, it would have been found by a computer.
I initially just believed you and wanted to find out more. But it turns out there isn't any mention of it in the places where I expected it to be mentioned. A won endgame between sides so close in material would almost certainly be mentioned if it existed. Absence of evidence (that should exist) is evidence of absence! Perhaps there was another, similar result in the magazine?
The most interesting endgame I found in my searching was two knights versus king and pawn, which (depending on the pawn) is a win. This is in contrast to two knights versus a lone king, which is an easy draw. On a related note (sometimes it is better to be worse), there was a high-ranked game in which a player underpromoted (pawn to knight) twice in one game, and in each case the underpromotion was the unambiguously correct play.
Here
Somebody recalls a slightly different version than I do.