This would count toward my major, and if I weren't going to take it, the likely replacement would be a course in experimental/"folk" philosophy. But I'd also like to hear your thoughts on the virtues of academic rationality courses in general.

(The main counterargument, I'd imagine, is that the Sequences cover most of the same material in a more fluid and comprehensible fashion.)

Here is the syllabus: http://www.yale.edu/darwall/PHIL+333+Syllabus.pdf

Other information: I sampled one lecture of the course last year. It was a noncommittal discussion of Newcomb's problem, which I found somewhat interesting despite having read most of the LW material on the subject.

When I asked what Omega would do if we activated a random number generator with a 50.01% chance of one-boxing us, the professors didn't dismiss the question as irrelevant, but they also didn't offer any particular answer.
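(For concreteness, here is a rough sketch of the payoff question I had in mind, using the standard Newcomb amounts of $1,000 and $1,000,000 and two made-up policies for how Omega might treat a randomizer; neither policy is anything the professors endorsed, they're just assumptions for illustration.)

    # Hypothetical sketch: expected payoff of a "randomize at p = 0.5001" strategy
    # under two assumed Omega policies. $1,000 is always in box A; $1,000,000 is in
    # box B iff Omega "predicts" one-boxing. Purely illustrative, not canonical.

    P_ONE_BOX = 0.5001   # probability the randomizer tells you to one-box
    SMALL, BIG = 1_000, 1_000_000

    def expected_payoff(box_b_filled_prob):
        # Once Omega has acted, box B is filled with probability q, independent of the coin:
        # one-box (prob P_ONE_BOX): payoff = q * BIG
        # two-box (prob 1 - P_ONE_BOX): payoff = SMALL + q * BIG
        one_box = box_b_filled_prob * BIG
        two_box = SMALL + box_b_filled_prob * BIG
        return P_ONE_BOX * one_box + (1 - P_ONE_BOX) * two_box

    # Assumed policy 1: Omega fills B iff the strategy one-boxes with probability > 0.5
    print(expected_payoff(1.0))        # ~1,000,500
    # Assumed policy 2: Omega fills B with probability equal to the strategy's one-boxing probability
    print(expected_payoff(P_ONE_BOX))  # ~500,600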

I help run a rationality meetup at Yale, and this seems like a good place to meet interested students. On the other hand, I could just as easily leave flyers around before the class begins.

 

Related question: Could someone quickly sum up what might be meant by the "feminist critique" of rationality, as would be discussed in the course? I've read a few abstracts, but I'm still not sure I know the most important points of these critiques.


(Not an expert on academic feminism):

My understanding is that just as LW worries about "corrupted hardware", feminists worry about "corrupted social order." That is, if there are various systematic injustices and power disparities in the social order, and moreover these disparities are difficult for beneficiaries to see, then any product of such a social order, especially one that claims to be impartial, has to be viewed very skeptically indeed, because it likely contains biases inherent in the social order.

I don't think I'm at a position where I could give a statement of the feminist critique that a proponent of it would be happy to call their position, but my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another. That is, the social significance of a statement or concept is more important than whether or not it is concordant with reality.

Subjective perceptions and the relations between humans are also part of reality.

A more charitable phrasing: you view feminism as more concerned with instrumental rationality than with epistemic rationality.

Subjective perceptions and the relations between humans are also part of reality.

Of course.

A more charitable phrasing: you view feminism as more concerned with instrumental rationality than with epistemic rationality.

I don't think this is correct, though. My experience has been that in discussions with feminists who critique rationality (FWCR for short),* we have deep disagreements not on the importance of epistemology, but on its process and goals. If something is correct but hurtful, for example, I might call it true because it is correct, while a FWCR would call it false because it is hurtful. (One can find ample examples of this in the arguments for egalitarianism in the measurement of socially relevant variables.)

One could argue that they're making the instrumentally rational decision to spread a lie in order to accomplish some goal, or that it's instrumentally rational to engage in coalition politics which involves truth-bending, but this isn't a patrissimo saying "you guys should go out and accomplish things," but a "truth wasn't important anyway."

*I am trying to avoid painting feminism with a broad brush, as not all feminists critique rationality, and it is the anti-rational critics in particular on whom I want to focus.

I've never seen this sort of claim, and thought you were talking about, for example, discouraging research on sex differences because people are likely to overinterpret the observations and cause harm as a result. Can you link to an example of the sort of argument you are discussing?

thought you were talking about, for example, discouraging research on sex differences because people are likely to overinterpret the observations and cause harm as a result.

I did have this sort of thing in mind. My claim was that I think it also goes deeper. This article (PM me your email address if you don't have access to the PDF) splits the criticism into three primary schools, the first of which begins with the content of scientific theories (i.e. racism, sexism, class bias) and from that concludes that rationality is wrong. An excerpt:

If logic, rationality and objectivity produce such theories, then logic, rationality and objectivity must be at fault and women must search for alternative ways of knowing nature. Such arguments often end up privileging subjectivity, intuition, or a feminine way of knowing characterized by interaction with or identification with, rather than distance from, the object of knowledge.

If I'm reading that paragraph right, that's attributed to Luce Irigaray's 1987 paper.

The second school criticizes the methodology and philosophy of science, and then the third criticizes the funding sources (and the implied methodology) of modern science. The author argues that each has serious weaknesses, and that we need to build a better science to incorporate the critiques (with a handful of practical suggestions along those lines) but that the fundamental project of science as a communal endeavor is sound. Since I think the author of that paper is close to my camp, it may be prudent to follow her references and ensure her interpretation of them is fair.

my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.

The subjectivity of our perceptions and how we relate to one another are themselves parts of objective reality.

To steelman the position you're attributing, if philosophy and rationality have been paying too little attention to those parts of objective reality, then they need to focus on those as well as, not instead of, the rest of reality. Or to put that in terms of a concrete example alluded to elsewhere in the thread, nuclear power plants must be designed to be safely operable by real fallible humans.

But they do attend to these things already. Bayesian methods provide objective reasoning about subjective belief. Psychology, not all of which is bunk, deals with (among other things) how we relate to one another. Engineering already deals with human factors.
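As a minimal illustration of that first point, here is what a single Bayesian update looks like; the numbers are arbitrary, chosen only to show the mechanics of revising a subjective belief by a fixed, objective rule.

    # Minimal Bayesian update: a subjective prior revised by evidence via Bayes' rule.
    # All numbers are made up for illustration.
    prior = 0.30                   # subjective degree of belief in hypothesis H
    p_evidence_given_h = 0.80      # P(E | H)
    p_evidence_given_not_h = 0.20  # P(E | not H)

    p_evidence = prior * p_evidence_given_h + (1 - prior) * p_evidence_given_not_h
    posterior = prior * p_evidence_given_h / p_evidence
    print(round(posterior, 3))     # 0.632 -- the same rule applies no matter whose belief it is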

my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.

I'd go even further than that, and state that the very notion of an objective reality onto which we can project our "rational" action without regard for social or moral/ethical factors is somewhat peculiar. It seems to be very much a product of the overall notion of λόγος - variously given the meaning of "argument", "opinion", "reason", "number", "rationality" and even "God" (as in the general idea of a "God's Eye View") - that seems to permeate Western culture.

Needless to say, such "logocentrism" is nowadays viewed quite critically and even ridiculed by postmodernists and feminists, as well as by others who point out that non-Western philosophies often held quite different points of view, even within supposedly "rational" schools of thought. For instance, the Chinese Confucianists and Mohists advocated a "Rectification [i.e. proper use] of Names" as the proper foundation of all rational inquiry, which many in the Western tradition would find quite hard to understand (with some well-deserved exceptions, of course).

I don't see why this post is downvoted. When someone asks for an expression of postmodern thought and someone writes a reply to explain it, you shouldn't vote it down because you don't like postmodernism.

The idea that clarity about language is important is very familiar indeed in the Western philosophical tradition. ("It all depends what you mean by ..." is pretty much a paradigmatic, or even caricatural, philosopher's utterance.) It sounds as if the Confucian notion has a rather different spin on it -- focusing on terminology related to social relationships, with the idea that fixing the terminology will lead to fixing the relationships -- and a bunch of related assumptions not highly favoured among Western analytic philosophers -- but I can't help thinking there's maybe a core of shared ideas there.

It is very possible that I'm overoptimistically reading too much into the terminology, though. Would any Confucian experts like to comment?

The Chinese Confucianists and Mohists, for instance, advocated a "Rectification [i.e. proper use] of Names" as the proper foundation of all rational inquiry

My understanding of this is that it's basically map/territory convergence, with an especial emphasis on social reality: let "the ruler" be the ruler!

overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.

I hope these people are kept far far away from nuclear plants. And regular factories. And machinery. Actually, far away from any sharp objects would be the best...

Yes, of course. And this is especially concerning because 'rationality', 'winning' and the like are quite clearly not ideologically neutral concepts. They are very much the product of a dominator culture as opposed to being more focused on, say, care and nurturing - be it of fellow human beings or our natural environment, a real-life symbiote without which our communities cannot possibly thrive or be sustainable.

LessWrong folks like to talk about their pursuit of a "Friendly AI" as a possible escape from this dilemma. But it's not clear at all just how 'friendly' an AI could be to, say, indigenous peoples whose way of life and culture does not contemplate Western technology. As a general rule of thumb, our developments in so-called "rationality" have not been kind to such groups.

They are very much the product of a dominator culture as opposed to being more focused on, say, care and nurturing - be it of fellow human beings or our natural environment

For someone with a strong interest in or preference towards caring and nurturing, rationality is still very useful. It helps you learn how to best care for as many people as possible or to nurture as many pandas (or whatever). Caring and nurturing still have win-states; they're just cooperative instead of competitive.

It helps you learn how to best care for as many people as possible or to nurture as many pandas (or whatever).

What evidence do you have for that claim? Would that pass objective tests for good evidence?

They are very much the product of a dominator culture as opposed to being more focused on, say, care and nurturing - be it of fellow human beings or our natural environment, a real-life symbiote without which our communities cannot possibly thrive or be sustainable.

"Winning" means maximizing your utility function. If you think that "care and nurturing" are important, and yet you failed to include them in your utility function, the fault lies with you, not rationality. Complaining about rationality not taking into account care and nurturing is like complaining about your car not taking into account red lights.

LessWrong folks like to talk about their pursuit of a "Friendly AI" as a possible escape from this dilemma.

What dilemma?

But it's not clear at all just how 'friendly' an AI could be to, say, indigenous peoples whose way of life and culture does not contemplate Western technology.

An AI friendly to Western values would be a tool through which Western civilization could enforce its values. If you don't like Western values, then your objection is to Western values, not to the tool used to facilitate them.

As a general rule of thumb, our developments in so-called "rationality" have not been kind to such groups.

I don't find that to be clear. The mistreatment of non-Western people can arguably be attributed to anti-rational positions, and by most measures, most people are better off today than the average person was a thousand years ago.

Generally, I focus on these four reasons to take classes:

  1. It is required for a degree you want.
  2. You want to interact with the professor.
  3. You want to interact with the other students.
  4. You want to have external pressure to complete some tasks.

Some people take classes because they want to learn the subject the class is on, but unless that unpacks into the latter three reasons, there's probably a better way to accomplish it.

As mentioned by others, it looks to me like this class does well on all of those reasons (but I'm going off of your one-lecture impression of the professors). This is probably the best place at Yale to meet interested students for your rationality meetup, and the professors are probably good network hubs for this.

As for feminist critiques of rationality, the syllabus lists the reading right there! This is week 1, and this is week 2. (The first one has limited pages in the preview, and I doubt you'll be able to read all 51 pages of the second chapter, but you should be able to find it in the library.)


(The main counterargument, I'd imagine, is that the Sequences cover most of the same material in a more fluid and comprehensible fashion.)

So the course would be an easy boost to your GPA? What's the argument against going then?

Presumably you've already paid for this course; presumably the expected value of the alternative isn't high, given that it's something you feel the need to put in scare quotes; presumably you'd do well in the class and it would boost valuable metrics.

Given that, I would default to taking the course that's a known possibly-interesting probable-benefit, and only switch if there was a very good argument to take something else.


More interesting students = more chance your claims will be challenged, and where you are mistaken you will have the chance to become less wrong. This is the most value that comes from college during college (the diploma, thus job, thus pay, is the best thing that comes after college). The chance you will meet good challenges during a class that you cannot predict and do not control is higher than with your own flyers for your own group. The latter will be self-selecting for agreement from the start.

I help run a rationality meetup at Yale, and this seems like a good place to meet interested students. On the other hand, I could just as easily leave flyers around before the class begins.

Speaking to people in person makes it easier to recruit them to come to your meetup. Having a good relationship with the professor who teaches the course could also come in handy.