(Since the linked article doesn't, at first glance, talk about AI researchers, the title should be justified.)
In statements posted on the Internet, the ITS expresses particular hostility towards nanotechnology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology. Scientists who work to advance such technology, it says, are seeking to advance control over people by 'the system'.
On the other hand, the mission of the SIAI is founded on the belief that if anyone succeeds at AGI without first solving the Friendliness problem, they will destroy the world. Eliezer said in an interview a year or two ago that he does not think anyone currently working on AGI has any chance of succeeding. But if not now, then some day the question will have to be faced:
What do you do if you really believe that someone's research has a substantial chance of destroying the world?
Go batshit crazy.
Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?
One example would be the policy of not talking about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.
Couldn't you say the same about AGI projects? It seems to me that one of the reasons some people are relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.