V_V comments on A brief history of ethically concerned scientists - LessWrong

68 points | Post author: Kaj_Sotala 09 February 2013 05:50AM


Comment author: V_V 09 February 2013 04:14:29PM *  5 points

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.

If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.

As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.

I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to hand-wave away its conspicuous lack of published research. But seriously, that's not the right way to do it:

If you are a legitimate research organization ethically concerned by AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.

Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.

Comment author: Vladimir_Nesov 10 February 2013 02:21:21AM 4 points

the best way to achieve your goals is to publish and disseminate your research as much as possible

This is an important question, and simply asserting that the answer to it is one way or the other is not helpful for understanding the question better.

Comment author: V_V 10 February 2013 09:16:41AM 1 point

Fair enough. I think I provided arguments against scientific secrecy. I'd be glad to hear counter-arguments.

Comment author: ygert 09 February 2013 04:54:50PM *  4 points

I upvoted this, as it makes some very good points about the reasoning behind the current general attitude towards scientific secrecy. I almost didn't, though, as I feel that the attitude in the last few paragraphs is unnecessarily confrontational. You are mostly correct in what you said there, especially in the second-to-last paragraph. But the last paragraph rather spoils it by being very confrontational and rude. I would have had no reservations about my upvote if you had simply left that paragraph off. As it is, I almost didn't upvote, as I have no wish to condone any sort of impoliteness.

Comment author: V_V 09 February 2013 05:11:53PM *  0 points

Is your complaint about the tone of the last paragraphs, or about the content?

In case you are wondering, yes, I have a low opinion of the SI. I think it's unlikely that they are competent to achieve what they claim they want to achieve.

But my belief may be wrong, or it may have been correct in the past but since been made obsolete by the SI changing its nature.
While I don't think that AI safety is presently as significant an issue as they claim it is, I do see some value in doing research on it, as long as the results are publicly disseminated.

So my last paragraphs may have been somewhat confrontational, but they were an honest attempt to give them the benefit of the doubt and to suggest a way for them to achieve their goals and prove my reservations wrong.

Comment author: asparisi 10 February 2013 09:48:35PM 2 points

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Qualitatively, I'd say it has something to do with the ratio of the expected harm of immediate discovery to the current investment and research in the field. If the expected risks are low, by all means publish, so that any risks that are there will be found. If the risks are high, consider the amount of investment and research in the field. If the investment is high, it is probably better to reveal your research (or parts of it) in the hope of creating a substantive dialogue about the risks. If the investment is low, it is less likely that anyone will come up with the same discovery, and so you may want to keep it a secret. This probably also varies by field, with respect to how many competing paradigms are available and how incremental the research is: psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing, so it is less likely that a particular piece of research will be duplicated, while biologists tend to have larger agreement and their work tends to be more incremental, making it more likely that a particular piece of research will be duplicated.

Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.

Comment author: V_V 11 February 2013 12:20:05AM *  2 points

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Sure, you can find exceptional scenarios where secrecy is appropriate. For instance, if you were a scientist working on the Manhattan Project, you certainly wouldn't have wanted to let the Nazis know what you were doing, and with good reason.
But barring such exceptional circumstances, scientific secrecy is generally inappropriate. You need some pretty strong arguments to justify it.

If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret.

How likely is it that some potentially harmful breakthrough happens in a research field that attracts so little interest?

psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing

Is that actually true? And anyway, what is the probability that a new theory of mind is potentially harmful?

Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.

That statement seems contrived. I suppose that by "can map onto the state of the world" you mean "is logically consistent".
Of course, I didn't make that logically inconsistent claim. My claim is that "X probably won't work, and if you think that X does work in your particular case, then unless you have some pretty strong arguments, you are most likely mistaken".

Comment author: Troshen 25 February 2013 10:49:57PM 0 points

This is a good discussion of the trade-offs that should be considered when deciding to reveal or keep secret new, dangerous technologies.

Comment author: ricketson 09 February 2013 08:38:45PM *  2 points

Good points, but it was inappropriate to question the author's motives and the attacks on the SI were off-topic.

Comment author: V_V 10 February 2013 01:30:10AM 0 points

I didn't claim that his praise of scientific secrecy was questionable because of his motives (that would have been a circumstantial ad hominem), or that his claims were dishonest because of his motives.

I claimed that his praise of scientific secrecy was questionable for the reasons I mentioned, and, separately, that I could see where it was likely coming from.

the attacks on the SI were off-topic.

Well, he specifically mentioned the SI mission, complete with a link to the SI homepage. Anyway, that wasn't an attack, it was a (critical) suggestion.

Comment author: Kaj_Sotala 09 February 2013 05:41:43PM 1 point

That's a rather uncharitable reading.

Comment author: V_V 10 February 2013 01:39:22AM 1 point

Possibly, but I try to care about being accurate, even if that means not being nice.

Do you think there are errors in my reading?

Comment author: Kaj_Sotala 10 February 2013 07:11:34AM *  5 points

I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to hand-wave away its conspicuous lack of published research. But seriously, that's not the right way to do it:

Your criticism would be more reasonable if this post had only given examples of scientists who hid their research, and had said only that everyone should consider hiding theirs. But while the possibility of keeping your research secret was certainly brought up, the overall message of the post was one of general responsibility and engagement with the consequences of your work, as opposed to a single-minded focus on just doing interesting research and damn the consequences.

Some of the profiled scientists did hide or destroy their research, but others actively turned their efforts toward reducing the negative effects of the technology, be it by studying the causes of war, campaigning against the use of a specific technology, refocusing on ways their previous research could be applied to medicine, setting up organizations for reducing the risk of war, talking about the dangers of the technology, calling for temporary moratoriums, helping develop voluntary guidelines for the research, or financing technologies that could help reduce general instability.

Applied to the topic of AI, the general message does not become "keep all of your research secret!" but rather "consider the consequences of your work and do what you feel is best for helping ensure that things do not turn out to be bad, which could include keeping things secret but could also mean things like focusing on the kinds of AI architectures that seem the most safe, seeking out reasonable regulatory guidelines, communicating with other scientists on any particular risks that your research has uncovered, etc." That's what the conclusion of the article said, too: "Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work."

The issue of whether some research should be published or kept secret is still an open question, and this post does not attempt to suggest an answer either way, other than to suggest that keeping research secret might be something worth considering, sometimes, maybe.

Comment author: V_V 10 February 2013 12:13:20PM 1 point

Thanks for the clarification.

However, if you are not specifically endorsing scientific secrecy, but just ethics in conducting science, then your opening paragraph seems a bit of a strawman:

Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.

Seriously, who is claiming that scientists should not take ethics into consideration while they do research?

Comment author: timtyler 11 February 2013 02:06:34AM *  0 points

Seriously, who is claiming that scientists should not take ethics into consideration while they do research?

It's more that humans specialise. Scientist and moral philosopher aren't always the same person.

Comment author: whowhowho 11 February 2013 12:03:24PM 2 points

OTOH, you don't get let off moral responsibility just because it isn't your job.

Comment author: timtyler 11 February 2013 11:28:06PM 1 point

It's more that many of the ethical decisions - about what to study and what to do with the resulting knowledge - are taken out of your hands.

Comment author: whowhowho 12 February 2013 01:27:03AM 2 points

Only they are not, because you are not forced to do a job just because you have invested in the training, however strange that may seem to Homo Economicus.

Comment author: timtyler 12 February 2013 10:52:13AM *  1 point

Resigning would probably not affect the subjects proposed for funding, the number of other candidates available to do the work, or the eventual outcome. If you are a scientist who is concerned with ethics there are probably lower-hanging fruit that don't involve putting yourself out of work.

Comment author: V_V 11 February 2013 11:34:05AM 0 points

Moral philosophers hopefully aren't the only people who take ethics into account when deciding what to do.

Comment author: BerryPick6 11 February 2013 12:53:23PM 1 point

Some data suggests they make roughly the same ethical choices everyone else does.

Comment author: [deleted] 10 February 2013 12:00:59PM *  0 points