turchin comments on Room For More Funding In AI Safety Is Highly Uncertain - Less Wrong

Post author: Evan_Gaensbauer 12 May 2016 01:57PM


Comments (11)


Comment author: turchin 14 May 2016 06:31:15PM 0 points

Of course, I meant not only creating but also implementing different friendliness theories. Your objection about cooperation based on good decision theories also seems sound.

But from history we know that Christian countries fought wars against each other, and socialist countries also fought each other. Sometimes small differences led to sectarian violence, as between Shia and Sunni. So adopting a value system that promotes future good for everybody doesn't prevent a state-agent from going to war with another agent that has a similar positive value system.

For example, suppose we have two FAIs, and both know that for the best outcome one of them should be switched off. But how would they decide which one gets switched off?

Also, an FAI may work fine until the creation of another AI, but it could have an instrumental goal to switch off all other AIs while avoiding being switched off itself. The two would then have to go to war because of this flaw in the design, which only appears once there are two FAIs.