Comments
Dunja190

Hi all (and thanks Ben for starting this thread),

Our group (Philosophy & Ethics group, TU Eindhoven, The Netherlands) has a call for three PhD positions which might be of interest to some of you (the deadline is very soon though - March 10). All three positions are fully funded and for a period of 4 years. Please feel free to get in touch or send me a PM if you'd like some additional info on them!


PhD Position A: Norms of Explainable AI
PhD Position B: Cognitive Science of AI
PhD Position C: Philosophy of Science/Social epistemology

Dunja30

The problem of disagreements that arise due to different paradigms or 'schools of thought', which you mention, is an important one, as it concerns the possibility of so-called rational disagreements in science. This paper (published here) attempts to provide a normative framework for such situations, suggesting that if scientists have at least some indication that their opponents' claims are the result of rational deliberation, they should epistemically tolerate their ideas, which means: they should treat them as potentially rational, their theory as potentially promising, and as a potential challenge to their own stance.

Of course, the main challenge for epistemic toleration is putting ourselves in the other person's shoes :) Like in the example you mention: if others are working on an approach that is completely different from mine, it won't be easy for me to agree with everything they say, but that doesn't mean I should equate them with junk scientists.

As for discussions via Github, that's interesting, and we could probably discuss it in a separate thread on the topic of different forms of scientific interaction. I think that peer review can also be a useful form of dialogue, especially since a paper may end up going through several rounds of peer review (sometimes across different journals, in case it gets rejected at first). However, the preprint archives we have nowadays are also valuable, since even if a paper keeps getting rejected (let's say unfairly, e.g. due to a dogmatic environment in the given discipline), others may still have access to it, cite it, and it may still have an impact.

Dunja150

Hi all, I've posted a few comments, but never introduced myself: I'm an academic working in philosophy of science and social epistemology, mainly on methodological issues underlying scientific inquiry, scientific rationality, etc. I'm coming from the EA forum, but at Ben's invitation I dropped by here a few days ago, and I am genuinely curious about the prospects of this forum, its overall functions, and its possible interactions with academic research. So I'm happy to read and chip in where I can contribute :)

Dunja30

Again: you are conflating the descriptive and the normative. You keep giving examples of how science went wrong. And that may well have been the case. What I am saying is that there are tools to mitigate these problems. In order to challenge my points, you'd have to show that chiropractic did not appear even worthy of pursuit *in view of the criteria I mentioned above* and yet should have been pursued (I am not familiar with this branch of science, btw, so I don't have enough knowledge to say anything concerning its current status). But even if you could do this, it would be an extremely odd example, so you'd have to come up with a couple of them to make a normatively interesting point. Of course, I'd be happy to hear about that.

The confusion between the descriptive (how things are) and the normative (how they should be) also concerns your comments on peer review, where you bring up issues that are problematic in current medical practice, but I don't see why we should consider them inherent to the peer-review procedure as such. Your points concern the presence of biases in science which make paradigmatic changes difficult, and that may indeed be a problem, but I don't see how abandoning the peer-review procedure is going to solve it.

Dunja30

Like I've mentioned, that's why there are indices of theory promise (see, e.g., this paper), which don't guarantee anything, but still make the assessment of some hypotheses more plausible than, say, research done within pseudo-medicine. These indices shouldn't be confused with how the scientific community actually reacts to novel theories, since it is no news that scientists sometimes fail to employ adequate criteria and react dogmatically (for some examples, see this case study from the history of earth sciences or this one from the history of medicine). So the fact that the scientific community fails to react in a warranted way to novel ideas doesn't imply that it couldn't do a better job at this. This is precisely why some grants are geared towards high-risk, high-reward schemes, so that projects which are clearly risky and may simply flop still get funding.

The research in molecular biology was indeed quite tricky, but again, this in no way means that assessing it as not worthy of pursuit would have been a justified response at the time. Hence, it's important to distinguish between the descriptive and the normative dimensions when we speak of the assessment of scientific research.

As for the interview with Sydney Brenner, thanks for linking to it. I disagree, though, with his assessment of the peer-review system, because he's not making an overall comparison between two systems, in which we'd have to weigh both the positive and the negative effects of peer review against the positive and negative effects of possible alternative approaches. This means evaluating, e.g.: how many crap papers are kept at bay this way, which without the peer-review system would simply get published; how much a lack of prestige or connections with the right people disadvantages one when trying to publish in a journal, versus a blind peer-review procedure which mitigates this problem at least to some extent; how many women or minorities had problems with publication bias, versus under a blind peer-review procedure; etc.

Dunja30

Right, which is why it's important to distinguish between a mere hunch and a "warranted hunch", the latter being based on certain indicators of promise (e.g. the idea has the potential to explain novel phenomena, or to explain them better than the currently dominant theory; the inquiry is based on a feasible methodology; etc.). These indicators of promise are in no way a guarantee that the idea will work out, but they allow us to distinguish between a sensible novel idea and junk science.

Dunja30

But to think that you cannot do better than chance at generating successful new hypotheses is obviously wrong.

It would be an uncharitable reading of Kuhn to interpret him in that way. He does speak of the performance of scientific theories in terms of different epistemic values, and already in SSR he speaks of a scientist having an initial hunch suggesting that a given idea is promising.

From merely observing science's success, we can conclude that there has to be some kind of skill (Yudkowsky's take on this is here and here, among other places) that good scientists employ to do better than chance at picking what to work on.

There is actually a whole part of philosophy of science that deals with this topic; it goes under names such as the preliminary evaluation of scientific theories, their pursuit-worthiness, endorsement, etc.

A good scientist looks where progress could be made within his scientific paradigm

his or her* :)

What Eliezer says about Phlogiston is wrong.

For an excellent recent historical and philosophical study of the Chemical Revolution I recommend Hasok Chang's book "Is Water H2O?", in which he argues that phlogistic chemistry was indeed worthy of pursuit at the time it was abandoned.

Dunja40
One thing that went too far into relativism was Kuhn's insistence that there is no way to tell in advance which paradigm is going to be successful. His description of this is that you pick "teams" initially for all kinds of not-truth-tracking reasons, and you only figure out many years later whether your new paradigm will be winning or not.

This is a good point, though it's important to distinguish between assessing whether a paradigm is going to be successful (which may be impossible to say at the beginning of research) and assessing whether it is worthy of pursuit. The latter only means that, for now, the paradigm seems promising, though of course the whole research program may flop at some point. While Kuhn didn't address these problems in great detail, I linked in my previous comment to some papers that discuss his work with regard to these questions.

the lecturer of the course, a Kuhn expert, seemed to only be asking the question "How does (human-)science proceed?", and never "How should science proceed?"

It's a pity this issue wasn't explicitly discussed in the course you mention, because it's actually really interesting. Some Kuhn scholars try to explain the relationship between the descriptive and the normative dimensions you mention by drawing an analogy with grammar: just as we formulate a grammar by looking at the descriptive facts of how a given language is used, that work also helps us formulate the normative aspects of how it should be used. Now, not everyone will agree about what this means when it comes to scientific inquiry, but I would defend the following claim: the normative has to be formulated within the boundaries of how science tends to evolve, where we may find issues that are problematic (for example, we may notice that scientists are insufficiently open-minded at times, or that they sometimes employ inadequate methods, etc.) and in view of which we may formulate some normative suggestions. In other words, the normative can't be formulated out of the blue, ignoring some important constraints which are hard to get rid of (e.g. the fact that different paradigms may come with different conceptual frameworks).

Dunja30

This is an interesting historical question, but I'd like to challenge your initial motivation ;) namely, the idea that science used to be pursued more effectively a century ago. Intuitively speaking, I don't see why this would be the case, so I'd first have to see some evidence for this claim (including the measure of effectiveness used). My impression is rather that, due to the immense fragmentation of today's science into sub-disciplines, there are more people working on particular problems who are effective in their own domains while remaining largely unknown to the wider audience.

In fact, I would link the lower degree of interaction in past science, compared to today's science (we now have a peer-review system, more conferences, easier access to publications, etc.), with a lower degree of effectiveness. But of course, how exactly interaction and effectiveness/efficiency are related is an empirical question, so I'm open to being surprised :)
