Kaj_Sotala comments on A brief history of ethically concerned scientists - Less Wrong
The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.
If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.
As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.
I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to handwave away its conspicuous lack of published research. But seriously, that's not the right way of doing it:
If you are a legitimate research organization ethically concerned about AI safety, the best way to achieve your goals is to publish and disseminate your research as widely as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.
Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.
That's a rather uncharitable reading.
Possibly, but I try to care about being accurate, even if that means not being nice.
Do you think there are errors in my reading?
Your criticism would be more reasonable if this post had only given examples of scientists who hid their research, and said only that everyone should consider hiding their research. But while the possibility of keeping your research secret was certainly brought up, the overall message of the post was one of general responsibility and engagement with the results of your work, as opposed to a single-minded focus on just doing interesting research and damn the consequences.
Some of the profiled scientists did hide or destroy their research, but others actively turned their efforts toward reducing the negative effects of that technology in various ways: studying the causes of war, campaigning against the use of a specific technology, refocusing their work on medical applications of their previous research, setting up organizations for reducing the risk of war, speaking out about the dangers of the technology, calling for temporary moratoriums, helping develop voluntary guidelines for the research, or financing technologies that could help reduce general instability.
Applied to the topic of AI, the general message does not become "keep all of your research secret!" but rather "consider the consequences of your work and do what you feel is best for ensuring that things do not turn out badly, which could include keeping things secret but could also mean things like focusing on the kinds of AI architectures that seem safest, seeking out reasonable regulatory guidelines, communicating with other scientists about any particular risks that your research has uncovered, etc." That's what the conclusion of the article said, too: "Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work."
The issue of whether some research should be published or kept secret is still an open question, and this post does not attempt to suggest an answer either way, other than to suggest that keeping research secret might be something worth considering, sometimes, maybe.
Thanks for the clarification.
However, if you are not specifically endorsing scientific secrecy, but just ethics in conducting science, then your opening paragraph seems a bit of a strawman:
Seriously, who is claiming that scientists should not take ethics into consideration while they do research?
It's more that humans specialise. Scientist and moral philosopher aren't always the same person.
OTOH, you don't get let off moral responsibility just because it isn't your job.
It's more that many of the ethical decisions - about what to study and what to do with the resulting knowledge - are taken out of your hands.
Only they are not, because you are not forced to do a job just because you have invested in the training, however strange that may seem to Homo Economicus.
Resigning would probably not affect the subjects proposed for funding, the number of other candidates available to do the work, or the eventual outcome. If you are a scientist who is concerned with ethics there are probably lower-hanging fruit that don't involve putting yourself out of work.
Moral philosophers hopefully aren't the only people who take ethics into account when deciding what to do.
Some data suggest that they make roughly the same ethical choices as everyone else.