asparisi comments on A brief history of ethically concerned scientists - Less Wrong

68 Post author: Kaj_Sotala 09 February 2013 05:50AM


Comment author: V_V 09 February 2013 04:14:29PM *  5 points

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.

If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.

As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.

I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to hand-wave its conspicuous lack of published research. But seriously, that's not the right way of doing it:

If you are a legitimate research organization ethically concerned by AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.

Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.

Comment author: asparisi 10 February 2013 09:48:35PM 2 points

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Qualitatively, I'd say it has something to do with the ratio of the expected harm of immediate discovery to the current investment and research in the field. If the expected risks are low, by all means publish, so that any risks that are there will be found. If the risks are high, consider the amount of investment and research in the field. If the investment is high, it is probably better to reveal your research (or parts of it) in the hope of creating a substantive dialogue about the risks. If the investment is low, it is less likely that anyone will come up with the same discovery, so you may want to keep it a secret. This probably also varies by field, depending on how many competing paradigms are available and how incremental the research is: psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing, so a particular piece of research is less likely to be duplicated, while biologists tend to have broader agreement and more incremental work, making it more likely that a particular piece of research will be duplicated.

Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.

Comment author: V_V 11 February 2013 12:20:05AM *  2 points

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Sure, you can find exceptional scenarios where secrecy is appropriate. For instance, if you were a scientist working on the Manhattan Project, you certainly wouldn't have wanted to let the Nazis know what you were doing, and with good reason.
But barring such exceptional circumstances, scientific secrecy is generally inappropriate. You need some pretty strong arguments to justify it.

If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret.

How likely is it that some potentially harmful breakthrough will happen in a research field where there is little interest?

psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing

Is that actually true? And anyway, what is the probability that a new theory of mind is potentially harmful?

Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.

That statement seems contrived. I suppose that by "can map onto the state of the world" you mean "is logically consistent".
Of course, I didn't make that logically inconsistent claim. My claim is that "X probably won't work, and if you think that X does work in your particular case, then unless you have some pretty strong arguments, you are most likely mistaken".

Comment author: Troshen 25 February 2013 10:49:57PM 0 points

This is a good discussion of the trade-offs that should be considered when deciding whether to reveal or keep secret new, dangerous technologies.