timtyler comments on SIAI - An Examination

Post author: BrandonReinhart, 02 May 2011 07:08AM


Comment author: timtyler, 05 May 2011 07:57:52PM

> If you believe that risks from AI are to be taken seriously, then you should demand that any organisation studying artificial general intelligence establish significant measures against third-party intrusion and industrial espionage, at least on par with the biosafety level 4 precautions required for work with dangerous and exotic agents.

What if you believe in openness and transparency, and feel that elaborate attempts to maintain secrecy will lead your partners to suspect you are hiding motives or knowledge from them, tarnishing your reputation and making them trust you less because you appear selfish and unwilling to share?

In that case, the strategies you refer to could easily be highly counterproductive.

Basically, if you misguidedly impose secrecy on the organisations involved, the good guys have fewer means of honestly signalling their altruism to one another, and of cooperating with one another, so their progress is slower and their relative advantage is diminished. That is surely bad news for overall risk.

The "opposite" strategy is much better, IMO. Don't cooperate with secretive non-sharers. They are probably selfish bad guys. Sharing now is the best way to honestly signal that you will share again in the future.