I haven't read the comments yet, so apologies if this has already been said or addressed:

If I am watching others debate, and my attention is restricted to the arguments the opponents are presenting, then my using the "one strong argument" approach may not be a bad thing.

I'm assuming that weak arguments are easy to come by and can be constructed for any position, but strong arguments are rare.

In this situation I would expect anybody who has a strong argument to use it, to the exclusion of weaker ones: if A and B both have access to 50 weak arguments, and A also has access to 1 strong argument, then I would expect the debate to come out looking like (50 weak) vs. (1 strong) - even though the underlying balance would be more like (50 weak) vs. (50 weak + 1 strong).

(By "having access to" an argument, I mean to include both someone's knowing an argument, and someone's having the potential to construct or come across an argument with relatively little effort.)

Tentatively:

If it's accepted that GREEN and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves *anything*, beyond structure, which needs explaining?

I think this is the gist of Dennett's dissolution attempts. Once you've explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, and on up to a meta-reflection-about-believing-there-is-red functional process, etc., why think there's anything else?

Here's how I got rid of my gut feeling that qualia are both real and ineffable.

First, phrasing the problem:

Even David Chalmers thinks there are some things about qualia that are effable. Some of the structural properties of experience - for example, why colour qualia can be represented in a 3-dimensional space (hue, saturation, and brightness) - might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation.
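As a minimal illustration of what a purely structural, third-person description of a colour looks like (a sketch using Python's standard colorsys module, with an RGB triple standing in for the physical stimulus):

```python
import colorsys

# A colour described purely structurally: three coordinates in [0, 1],
# with no reference to what seeing it is like.
r, g, b = 1.0, 0.0, 0.0                    # a firetruck-ish red, as RGB
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"hue={h:.2f}, saturation={s:.2f}, brightness={v:.2f}")
# -> hue=0.00, saturation=1.00, brightness=1.00
```

Everything in that description is effable; the question is whether anything further is left out.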

What he would call ineffable is the intrinsic properties of experience. With regard to colour-space, think of spectrum inversion. When we look at a firetruck, the quale I see is the one you would call "green" if you could access it, but since I learned my colour words by looking at firetrucks, I still call it "red".

If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the "atoms" of experience additionally have intrinsic natures (I'll call these, e.g., RED and GREEN) which are non-causal and cannot be objectively discovered.

You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren't real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.

An attempt at a solution:

Take another experiential "spectrum": pleasure vs. displeasure. Spectrum inversion is harder, I'd say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really "ultimately" being UNPLEASANT for her.

Anyway, if pleasure-displeasure can't be noncausally inverted, then neither can colour-qualia. The three colour-space dimensions aren't really all you need to represent colour experience. Colour experience doesn't, and can't, ever occur isolated from other cognition.

For example: seeing a lot of red puts monkeys on edge. So imagine putting a spectrum-inverted monkey in a (to us) red room, and another in a (to us) green room.

If the monkey in the green (to it, RED') room gets antsy, or the monkey in the red (to it, GREEN') room doesn't, then the spectrum inversion made a causal difference - so what was inverted wasn't non-causal, and ineffable qualia of that sort don't exist.

But if the monkey in the green room doesn't get antsy, or the monkey in the red room does, then it hasn't been a full spectrum inversion. RED' without antsiness is not the same quale as RED with antsiness. If all the other experiential spectra remain uninverted, it might even look surprisingly like GREEN. But to make the inversion successfully, you'd have to flip all the other experiential spectra that connect with colour, including antsiness vs. serenity, and through that, pleasure vs. displeasure.

This isn't knockdown, but it convinced me.

Not about intelligence specifically, but I believe this was the first (well-known) paper making the claim: http://www.philbio.org/wp-content/uploads/2010/11/Lewontin-The-Apportionment-of-Human-Diversity.pdf

The point is that even if the heritable component of (say) intelligence among white people formed a bell curve, and the heritable component of intelligence among black people formed a bell curve, a priori you'd expect the two curves to be pretty much the same.

(Lewontin's other conclusion, that "race" is "biologically meaningless", is separate and doesn't work, because the small racial differences that do exist are statistically clustered: http://onlinelibrary.wiley.com/doi/10.1002/bies.10315/abstract;jsessionid=831B49767DB713DADCD9A1199D7ADC49.d02t02)

I wouldn't recommend agreeing with him about a lot of things, but he's definitely worth paying attention to.

The gist of "The Mind Doesn't Work That Way," from what I can tell so far:

Partly sparked by Fodor's own work, modularity became an important idea in cognitive science: not all parts of your mind do the same jobs, or have access to the same information. For example, knowing that the Müller-Lyer illusion is an illusion doesn't ruin the effect.

Some cognitive scientists of an evolutionary bent saw functional modularity, with the functions defined by the adaptive problems they were designed to solve, as the key to predicting and understanding the mind's entire functional architecture. If the modules are informationally encapsulated, then massive modularity also offers a solution to the frame problem. A computational version of this is the picture Pinker presents in How the Mind Works.

Fodor's position seems to be something like this: there are modules; computation is a good way of thinking about modules; but modules seem to be restricted to input (e.g. perception) and output (e.g. maintaining balance) processes, both in the sense of having clear functional success-criteria and in the sense of being informationally encapsulated. The things cognitive scientists are most interested in - and have had the least success in studying - seem to be nonmodular; when you "believe a belief" or "think a thought", you seem to have at least potential access to most of the information you've ever had access to before. If belief, thought, and the other things he calls "global processes" are nonmodular, then computation may not be the right way to think about them, despite being the best hypothesis we've had so far.

Things your list reminds me of:

Some other favourites of mine:

I stopped being afraid because I read the truth. And that's the scientifical truth which is much better. You shouldn't let poets lie to you.

-- Bjork

Going by such experience, this might be a fruitful approach to trying to shift the gender imbalance in the community. It's unfortunate that describing oneself as a rationalist can come across as having a superiority complex, and doubly unfortunate how commonplace the meme is that rationality is a "men's" thing (all women are slaves to their bleeding vaginas, amirite?).

A possible consequence of this is that, when it's phrased explicitly as such, the idea of a "rationality community" conjures up images of boorish men talking about how "you're irrational if you get offended when I say women are sluts who always cheat! IT'S SCIENCE!", which is not at all an inviting atmosphere.

Stuff like that does itself need to be combated, but in the meantime, books like GEB perfectly illustrate how whimsical and fun - AND WELCOMING - real rationality can be, introducing important concepts and making the reader feel brilliant and excited to learn, without triggering the defensiveness that comes with the implication that, if I want to teach you rationality, I must think you're "not good enough" the way you are.

I do it in a roundabout way. I've gotten far fewer people started off by trying to teach them about biases directly than by saying "dude check out this fanfic it's awesome HARRY'S A SCIENTIST," or by showing them the Crab Canon from Gödel, Escher, Bach and telling them the rest of the book's just as much fun, or by showing them Conway's Game of Life. Once they're hooked on these marvellous new ways of thinking, I show them the blog.
