ygert comments on A brief history of ethically concerned scientists - Less Wrong

68 Post author: Kaj_Sotala 09 February 2013 05:50AM


Comment author: V_V 09 February 2013 04:14:29PM 5 points

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.

If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.

As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.

I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to handwave away its conspicuous lack of published research. But seriously, that's not the right way of doing it:

If you are a legitimate research organization ethically concerned by AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.

Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.

Comment author: ygert 09 February 2013 04:54:50PM 4 points

I upvoted this, as it makes some very good points about the current general attitude towards scientific secrecy. I almost didn't, though, because the closing paragraphs strike me as unnecessarily confrontational. I think you are mostly correct in what you say there, especially in the second-to-last paragraph. But the last paragraph rather spoils it by being confrontational and rude. I would have had no reservations about my upvote if you had simply left that paragraph off. As it stands, I nearly withheld it, since I have no wish to condone any sort of impoliteness.

Comment author: V_V 09 February 2013 05:11:53PM 0 points

Is your complaint about the tone of the last paragraphs, or about the content?

In case you are wondering, yes, I have a low opinion of the SI. I think it's unlikely that they are competent to achieve what they claim they want to achieve.

But my belief may be wrong, or it may have been correct in the past but since rendered obsolete by the SI changing its nature.
While I don't think that AI safety is presently as significant an issue as they claim it is, I do see some value in doing research on it, as long as the results are publicly disseminated.

So my last paragraphs may have been somewhat confrontational, but they were an honest attempt to give them the benefit of the doubt and to suggest a way for them to achieve their goals and prove my reservations wrong.