
Liso comments on Superintelligence 10: Instrumentally convergent goals - Less Wrong Discussion

Post author: KatjaGrace, 18 November 2014 02:00AM


Comment author: Liso, 26 November 2014 06:34:11AM, 0 points

Just a little idea:

In an advertisement I once saw an interesting pyramid with these levels (from top to bottom): vision -> mission -> goals -> strategy -> tactics -> daily planning.

I think that if we want to analyse cooperation between an SAI and humanity, then we need interdisciplinary work (philosophy, psychology, mathematics, computer science, ...) on the (vision -> mission -> goals) part. (If humanity defines the vision and mission, and the SAI derives the goals, that could work well.)

I am afraid that humanity has not properly defined or analysed either its vision or its mission. Worse, different groups and individuals hold contradictory visions, missions, and goals.

One big problem with SAI is not the SAI itself, but that we will have BIG POWER while we still don't know what we really want (or what we really want to want).

Bostrom's book seems to work within a paradigm where a goal is something at the top: rigid and stable. Could a goal not instead be dynamic and flexible, like a vision? It may well be true that one stupidly defined goal (a paperclipper) would be unchangeable and ultimate. But we probably have more possibilities than that for defining an SAI's personality.