SteveG comments on Superintelligence 8: Cognitive superpowers - Less Wrong

7 Post author: KatjaGrace 04 November 2014 02:01AM




Comment author: SteveG 05 November 2014 04:45:07PM 4 points [-]

Bostrom discusses the Baruch Plan, and the lessons to be drawn from that historical experience are substantial. I agree that we need a multilateral framework to regulate AI.

However, it also has to be something that gains agreement. Baruch and the United States wanted to hand regulation of nuclear technology over to an international agency.

Of all things, the Soviet Union disagreed BEFORE they even quite had the Bomb! (Although they were researching it.)

Why? Because they knew that they would be out-voted in this new entity's proposed governance structure.

Figuring out the framework to present will be a challenge, and there will not be a dozen chances...

Comment author: aramis720 06 November 2014 12:52:14AM 2 points [-]

Thanks Steve. I need to dive into this book for sure.

Comment author: TRIZ-Ingenieur 05 November 2014 10:12:38PM *  -1 points [-]
  • We need a global charter for AI transparency.
  • We need a globally funded AI nanny project, as Ben Goertzel suggested.
  • Every AGI project should spend 30% of its budget on safety and the control problem: 2/3 project-related, 1/3 general research.

We must find a way for the financial value created by AI (today narrow AI, tomorrow AGI) to compensate for technology-driven collective redundancies and to support a sustainable economic and social model.