KatjaGrace comments on Superintelligence 11: The treacherous turn - Less Wrong

Post author: KatjaGrace 25 November 2014 02:00AM




Comment author: KatjaGrace 25 November 2014 02:04:30AM 1 point

Is the default outcome doom?

Comment author: diegocaleiro 25 November 2014 08:14:03PM 4 points

Yes.

Comment author: TRIZ-Ingenieur 25 November 2014 11:23:48PM 2 points

No. Openly available knowledge is not enough to obtain a decisive advantage. For that, close cooperation with humans and human-led organizations is absolutely necessary. Trust-building will take years, even for AGIs. In the meantime, competing AGIs will appear.

Ben Goertzel does not want to waste time debating any more - he pushes open AGI development to prevent any hardware overhang. Other readers of Bostrom's book might start other projects to counter singleton AI development. We do not have a ceteris paribus condition - we can shape what the default outcome will be.

Comment author: artemium 26 November 2014 07:31:07PM 0 points

"we can shape what the default outcome will be."

But who are "we"? There are many agents with different motivations doing AI development. I'm afraid that it will be difficult to control each of these agents (companies, governments, militaries, universities, terrorist groups) in the future, and the decreasing cost of technology will only worsen the problem over time.

Comment author: William_S 29 November 2014 02:30:21AM 0 points

Is there a clear difference in the policy we would want if the probability of doom is 10% vs. 90% (aside from tweaking resource allocation between x-risks)? It might be hard to distinguish between these cases, but both suggest caution is warranted.

Comment author: SteveG 26 November 2014 03:08:22PM 0 points

The continued existence of people or transhumans might not be that relevant to an advanced AI one way or another.

Or, human progeny could be to an advanced AI what weeds are to people: knocked down, or their reproduction constrained, only when they get in the way of some other goal.