Humanity has done more than zero and less than optimally about things like climate change. Importantly, the situation is below the imminent existential threat level.
If you are going to complain that alternative proposals face coordination problems, you need to show that yours don't, or you are committing the fallacy of the dangling comparison. If people aren't going to refrain from building dangerously powerful superintelligences, assuming that is possible, why would they have the sense to fit MIRI's safety features, assuming those are possible? If the law can make people fit safety features, why can't it prevent them from building dangerous AIs in the first place?
no clear-cut threshold between a "safe" and "dangerous" level of capability
I would suggest a combination of generality and agency. And what problem domain requires both?
Several years ago, backgammon AI was at the point where it could absolutely demolish humans without cheating. My impression is that people hated it, and even if they rolled the dice for the AI and entered the results themselves, they were pretty sure it had to be cheating somehow.
That may have been a vocal minority. You get some people incorrectly complaining about AI cheating in any game that uses randomness (Civilization and the new XCOMs are two examples I know of); usually this leads to somebody running a series of statistical tests or decompiling the source code to show that no, the die rolls are actually fair, or (as is commonly the case) actively biased in the human player's favor.
This never stops some people from complaining, but a lot of others find the evidence convincing enough and chalk it up to their own biases (and are less likely to suspect cheating in the next game they play that has random elements).
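For what it's worth, the "series of tests" here is usually just a goodness-of-fit check on logged rolls. A minimal sketch in Python, assuming you've already extracted the AI's die rolls from the game's logs; the `looks_fair` function name and the 0.05 threshold are my own illustrative choices, not any game's actual tooling:

```python
# Minimal sketch of a die-roll fairness check, not any game's real test suite.
# Assumes `rolls` is a list of d6 results pulled from game logs.
import random
from collections import Counter

from scipy.stats import chisquare

def looks_fair(rolls, faces=6, alpha=0.05):
    """Chi-square goodness-of-fit: are the rolls consistent with a fair die?"""
    counts = Counter(rolls)
    observed = [counts.get(face, 0) for face in range(1, faces + 1)]
    # Null hypothesis: each face comes up with probability 1/faces.
    # chisquare's expected frequencies default to uniform.
    _, p_value = chisquare(observed)
    return p_value > alpha

# A genuinely fair die should pass this check the vast majority of the time.
print(looks_fair([random.randint(1, 6) for _ in range(600)]))
```

The same test flags player-favoring bias (the p-value drops, and the observed counts show which faces are inflated). What it can't catch is context-dependent cheating, e.g. an AI that gets average rolls overall but good ones at crucial moments, since it only looks at marginal frequencies and ignores when each roll happened.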