In chess, AIs are decisively superhuman; the best players in the world would lose nearly every game against any modern computer player. Do humans still have something to add? The continued existence of correspondence chess, IMO, suggests that they do. In correspondence chess, players have days to make each move,...
This is a common assumption for AI risk scenarios, but it doesn’t seem very justified to me. https://www.lesswrong.com/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low says that AIs could merge their utility functions. But it seems increasingly plausible that AIs will not have explicit utility functions, so that doesn’t seem much better than saying humans could merge...
For concreteness, I’ll focus on the “off button” problem, which is that an AI (supposedly) will not let you turn it off. Why not? The AI will have some goal. Almost whatever that goal is, the AI will be better able to achieve it if it is “on” and able to...
On ChrisHallquist's post extolling the virtues of money, the top comment is Eliezer pointing out the lack of concrete examples. Can anyone think of any? This is not just hypothetical: if I think your suggestion is good, I will try it (and report back on how it went). I care...
Claim: The first human-level AIs are not likely to undergo an intelligence explosion. 1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954...
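To make the gap concrete, here is a back-of-envelope sketch using the figures above: ~86 billion neurons, "trillions" of connections, and the linked result of 40 minutes of supercomputer time per simulated second. The 100-trillion-synapse figure is an assumption (a commonly cited estimate), not something stated in the post.

```python
# Back-of-envelope sketch of the brain-simulation gap.
# Figures from the post, except where marked as an assumption.
neurons = 86e9               # ~86 billion neurons
synapses = 100e12            # "trillions of connections"; 100 trillion is an assumed estimate
sim_wall_clock_s = 40 * 60   # 40 minutes of supercomputer time...
sim_brain_time_s = 1         # ...to simulate one second of brain activity

slowdown = sim_wall_clock_s / sim_brain_time_s
conn_per_neuron = synapses / neurons

print(f"Slowdown vs. real time: {slowdown:.0f}x")        # 2400x
print(f"Connections per neuron: {conn_per_neuron:.0f}")  # ~1163
```

Even ignoring that the cited simulation covered only a fraction of the brain, a 2400x slowdown suggests that efficiently running a whole brain in real time is far off without some algorithmic shortcut.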