ricketson comments on A brief history of ethically concerned scientists - Less Wrong

68 Post author: Kaj_Sotala 09 February 2013 05:50AM




Comment author: V_V 09 February 2013 04:14:29PM 5 points

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.

If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.

As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.

I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to handwave its conspicuous lack of published research. But seriously, that's not the right way of doing it:

If you are a legitimate research organization ethically concerned about AI safety, the best way to achieve your goals is to publish and disseminate your research as widely as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one. And even if, by some absurdly improbable coincidence, you were, the chances that you get it right while working in secrecy are negligible.

Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.

Comment author: ricketson 09 February 2013 08:38:45PM 2 points

Good points, but it was inappropriate to question the author's motives, and the attacks on the SI were off-topic.

Comment author: V_V 10 February 2013 01:30:10AM 0 points

I didn't claim that his praise of scientific secrecy was questionable because of his motives (that would have been a circumstantial ad hominem), nor that his claims were dishonest because of his motives.

I claimed that his praise of scientific secrecy was questionable for the reasons I mentioned, and, separately, that I could see where it was likely coming from.

"the attacks on the SI were off-topic."

Well, he specifically mentioned the SI mission, complete with a link to the SI homepage. Anyway, that wasn't an attack, it was a (critical) suggestion.