Comment author: Humbug 29 October 2011 07:25:13PM *  4 points [-]

None of the simulation projects have gotten very far...this looks to me like it is a very long way out, probably hundreds of years.

Couldn't you say the same about AGI projects? It seems to me that one of the reasons some people are relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.

Comment author: Vladimir_Nesov 15 September 2011 03:02:11PM *  -1 points [-]

(Since the linked article doesn't at a first glance talk about AI researchers, the title should be justified.)

Comment author: Humbug 15 September 2011 03:34:37PM 12 points [-]

In statements posted on the Internet, the ITS expresses particular hostility towards nanotechnology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology. Scientists who work to advance such technology, it says, are seeking to advance control over people by 'the system'.

Comment author: RichardKennaway 15 September 2011 03:00:24PM 6 points [-]

On the other hand, the mission of the SIAI is founded on the belief that if anyone succeeds at AGI without solving the Friendliness problem, they will destroy the world. Eliezer said in an interview a year or two ago that he does not think anyone currently working on AGI has any chance of succeeding. But if not now, then some day the question will have to be faced:

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

Comment author: Humbug 15 September 2011 03:30:06PM 13 points [-]

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

Go batshit crazy.

Comment author: NancyLebovitz 12 July 2011 03:01:35PM 1 point [-]

Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy, or to decide whether a policy is better complied with or avoided?

Comment author: Humbug 12 July 2011 03:21:24PM 1 point [-]

...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

One example would be the policy of not talking about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.
