Comment author: thomblake 03 September 2009 01:38:44PM 0 points [-]

By Wittgenstein's time, there were already plenty of philosophers who thought definitions aren't quite captured by necessary and sufficient conditions.

Comment author: Annoyance 04 September 2009 07:57:15PM -2 points [-]

And the recognition that the process that ordinary people went through had pretty much NOTHING in common with "necessary and sufficient conditions" was not made by philosophers.

Ordinary people struggle to decide whether dolphins are fish or penguins are birds. And they often get it wrong if they haven't been explicitly taught otherwise; even then, some still screw up their answers.

Comment author: RichardKennaway 03 September 2009 09:47:03AM *  5 points [-]

Does this sound familiar?

Yes. I see nothing here not already covered by this, this, and this.

Your final conclusion is like saying that the computation done by computers doesn't involve arithmetic. It's *flow of electric charge*. The charge flows around, then settles down in some stable point in the sea of possible distributions. ETA: On that point, see also this.

Comment author: Annoyance 04 September 2009 07:54:54PM -1 points [-]

Your final conclusion is like saying that [blah blah blah]

No, it's not. Associational processing can emulate logical thinking, but it's not restricted to it and will not normally produce it. Restrictions have to be added for logic to arise out of the sea of associations.

The Featherless Biped

1 Annoyance 02 September 2009 05:47PM

The classical understanding of categories centers on necessary and sufficient properties.  If a thing has X, Y, and Z, we say that it belongs to class A; if it lacks them, we say that it does not.  This is the model of how humans construct and recognize categories that philosophers have held since the days of Aristotle.

Cognitive scientists found that the reality isn't that simple.
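The contrast the post draws can be sketched in code. This is a hypothetical illustration (the animals, features, and function names are my own, not from the post): a classical definition gives an all-or-nothing verdict, while a prototype model gives graded typicality, which better matches how people actually judge, say, robins versus penguins.

```python
# Classical model: membership is all-or-nothing, set by
# candidate necessary-and-sufficient conditions.
def is_bird_classical(animal):
    return animal["has_feathers"] and animal["has_wings"] and animal["lays_eggs"]

# Prototype model: membership is graded similarity to a typical exemplar.
BIRD_PROTOTYPE = {"has_feathers": True, "has_wings": True, "lays_eggs": True,
                  "can_fly": True, "sings": True}

def bird_typicality(animal):
    # Fraction of prototype features the animal shares.
    shared = sum(animal.get(k) == v for k, v in BIRD_PROTOTYPE.items())
    return shared / len(BIRD_PROTOTYPE)

robin   = {"has_feathers": True, "has_wings": True, "lays_eggs": True,
           "can_fly": True, "sings": True}
penguin = {"has_feathers": True, "has_wings": True, "lays_eggs": True,
           "can_fly": False, "sings": False}

# Both count as birds under the classical rule, but only the prototype
# model captures why a robin feels like a "better" bird than a penguin.
print(is_bird_classical(robin), is_bird_classical(penguin))
print(bird_typicality(robin), bird_typicality(penguin))
```

Under the classical rule both animals are simply "bird"; under the prototype model the robin scores 1.0 and the penguin 0.6, which is the graded-membership effect the cognitive-science literature reports.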

Comment author: SforSingularity 13 August 2009 07:11:15PM 2 points [-]

Physicists asked to evaluate paranormal claims do very poorly, yet they are clearly very brainy.

Reference, please. I defy the implied claim that "Physicists asked to evaluate paranormal claims do worse than the average person". I bet 6:1 against this.

If I had a dollar for every brainy person who'd been gulled because they thought they were "too smart" to require being skeptical...

and if I had a dollar for every average idiot who sleepwalked straight into an obvious scam I would make a lot more money.

Comment author: Annoyance 01 September 2009 01:48:48PM 0 points [-]

If I had a dollar for every brainy person who'd been gulled because they thought they were "too smart" to require being skeptical...

and if I had a dollar for every average idiot who sleepwalked straight into an obvious scam I would make a lot more money.

Those sets are not disjoint.

Comment author: SforSingularity 11 August 2009 10:16:30PM *  1 point [-]

Can anyone think of good ways to notice when outright deception is being used? How could a rationalist practice her skills at a magic show?

Most "rationalists" are quite smart people, so tricks that are designed by a trickster to fool the masses rarely work on us. For example, I doubt that many on this site would invest heavily in a pyramid scheme or get fooled by a used car salesman. This is because these tricks are targeted at the average idiot.

However, I have recently noticed that each of us has a stalker who, at every turn, attempts to deceive us, and who is just as smart as we are. That stalker/trickster is your own set of cognitive biases, and it inflicts by far the greatest material losses on you. This is certainly true in my case.

I cannot even remember the last time I was fooled by someone else, but now that I am working on reducing my losses due to self deception, I realize that basically every day I engage in successful self-deception: I get into some emotional state, myopic, irrational algorithms take over, and I make up little excuses to myself for why they reached the right conclusion.

The real enemy is already inside your head.

Comment author: Annoyance 13 August 2009 03:36:08PM *  3 points [-]

Most "rationalists" are quite smart people, so tricks that are designed by a trickster to fool the masses rarely work on us.

Wrong. Tricksters rely on people making stupid assumptions and failing to check assertions. People with a lot of brainpower can do those things just as easily as people without.

Physicists asked to evaluate paranormal claims do very poorly, yet they are clearly very brainy. It takes more than just brains to be intelligent - you have to use the brains properly.

If I had a dollar for every brainy person who'd been gulled because they thought they were "too smart" to require being skeptical...

Comment author: Psychohistorian 10 August 2009 05:10:53PM 0 points [-]

"He sharply stubbed his toe on a large rock and proclaimed, 'Thus, I refute this!'"

Comment author: Annoyance 13 August 2009 03:32:05PM -1 points [-]

That traditional anecdote (and its modified forms) only illustrates how little the pro-qualia advocates understand the arguments against the idea.

Dismissing 'qualia' does not, as many people frequently imply, require dismissing the idea that sensory stimuli can be distinguished and grouped into categories. That would be utterly absurd - it would render the senses useless, and such a system would never have evolved.

All that's needed is to reject the idea that there are some mysterious properties of sensation which somehow violate basic logic and the principles of information theory.

In response to comment by Jonii on The Second Best
Comment author: Psychohistorian 28 July 2009 06:22:50PM 0 points [-]

The most charitable thing that categorical imperatives can be called is arational. The most accurate thing they can be called is unintelligible. The statement "You should do X" is meaningless without an "if you want to accomplish Y," because otherwise it can't answer the question, "Why?" More importantly, there is no way to determine which of two contradictory CIs should be followed.

No moral rule can be derived via any rational decision-making process alone. Morality requires arational axioms or values. The litany of things you "should" have done if you were individually rational does not actually follow. "Rational" gets used to mean "strictly selfish utility maximizer" a bit more often than it should be, which is never. There may be people who are indeed individually irrational not to do those things, but as we all have different values, that does not mean we all are.

-I'm using categorical imperative as distinct from hypothetical imperative - "Don't lie" vs. "Don't lie if you want people to trust you." There can be some confusion over what people mean by CI, from what I've seen written on this site.

Comment author: Annoyance 28 July 2009 06:29:23PM 2 points [-]

Categorical imperatives that result in persistence will accumulate.

Why should any lifeform preserve its own existence? There's no reason. But those that do eventually dominate existence. Those that do not, are not.

In response to Are you crazy?
Comment author: Annoyance 20 July 2009 07:22:32PM 6 points [-]

"Sanity" is not well-defined, here.

There are plenty of people just as sociopathic as John, and just as dangerous or more so, who would not be considered insane or perceived as dangerous by society at large.

Most people in positions of power have strong sociopathic tendencies. It's just that many of them conform sufficiently well with society's expectations that they're not recognized as threats.

Comment author: MendelSchmiedekamp 17 July 2009 07:16:12PM *  4 points [-]

I suppose it was only a matter of time before Less Wrong found the "fnords". Although at the moment we seem to be obsessed with the silly or superficial ones. There is an art, parallel to the core art of rationality, in learning to see the assumptions and deceptions we build up in order to function: the accretion of simplifications and half-answers which become unchallenged beliefs so basic that we forget we even believe them.

And as the saying goes once you see the fnords, you see them everywhere.

And there are so many to see, under a thin coat of fear or denial, an idea or a correction lies ready to be revealed. These are cheap, although filled with the thrill of danger, because they frighten and inspire in equal measure.

But deep underneath all that are the big ones, looming, twisting things which have tunneled their way through our knowledge and practices. These are the ones you need to ignore, because they don't just frighten or require accepting what others deny; they mean functionally shifting your entire reference frame. It is as though language itself conspires to make these deeply embedded assumptions and delusions into something inexpressible, weird at best, madness at worst.

If you search patiently and carefully enough you can start to find them. You can even catalog or map them, seeing how they unfold into each other. But that doesn't mean you've figured out how to express them. How do you show them in a way that provides nearly as much engagement as a stream of rationalizations? That's something I'm still working on.

Comment author: Annoyance 20 July 2009 07:18:39PM -2 points [-]

There are limits to the degree to which fnords can be discussed with others. Without doing the hard work necessary to perceive them, others cannot receive benefit from having them pointed out to them - and that can even be harmful, as our mental immune systems will construct defensive rationalizations to protect fnords brought to our attention that we're not strong enough to abolish.

Comment author: thomblake 20 July 2009 04:37:00PM 3 points [-]

I'm merely against the idea that anyone has a "right" over the thoughts of others, or a right to not have their feelings hurt

I agree with the first part - that's pretty obvious from any common conception of rights - it's very hard to support rights about thoughts simultaneously with any right to liberty.

Of course, it wasn't central to the complaint. The main issue is that we might be driving people away, and there are at least a few people for whom it is true.

Here, we should be talking about how to disconnect our buttons, rather than how to insist that other people stop pushing them.

I disagree. If you take away the 'buttons', there isn't much left. While of course rationality for AIs is relevant here, most of the discussion should be about rationality implemented in humans. And while there are some who think rationality requires denying what it is to be human, I would not be among them.

Comment author: Annoyance 20 July 2009 07:10:21PM 6 points [-]

The main issue is that we might be driving people away, and there are at least a few people for whom it is true.

Whether this is a problem depends on the people being driven away, and why.
