This article is a deliberate meta-troll. To be successful I need your trolling cooperation. Now hear me out.
In The Strangest Thing An AI Could Tell You, Eliezer talks about anosognosics, who have one of their arms paralyzed and - most interestingly - are in absolute denial of this: in spite of overwhelming evidence that the arm is paralyzed, they keep coming up with ever new rationalizations proving it's not.
Doesn't that sound like someone else we know? Yes, religious people! In spite of heaps of empirical evidence against the existence of their particular flavour of the supernatural, the internal inconsistency of their beliefs, and perfectly plausible alternative explanations being well known, something between 90% and 98% of humans believe in a supernatural world, and are in a state of absolute denial not too dissimilar to that of anosognosics. Perhaps billions of people in history have even been willing to die for their absurd beliefs.
We are mostly atheists here - we happen not to share this particular delusion. But please take the outside view for a moment: how likely is it that, unlike almost everyone else, we hold no other such delusions, of whose truth we're in absolute denial in spite of mounting heaps of evidence?
If the delusion is of the kind that all of us share, we won't be able to find it without building an AI. And we might well have some of those - it's not too unlikely, as we're a small and self-selected group.
What I want you to do is try to trigger the absolute denial macro in your fellow rationalists! Is there anything you consider proven beyond any possibility of doubt by both empirical evidence and pure logic, yet which, when stated, triggers an automatic stream of rationalizations in other people? Yes, I'm pretty much asking you to troll, but it's a good kind of trolling, and I cannot think of any other way to find our delusions.
Assuming the moderation of "beyond any possibility of doubt" I suggested in an earlier comment, I've already seen an example on this forum. The claim I make is:
"Achieving an intended result is not a task that necessitates either having a model or making predictions. In some cases, neither having a model nor attempting predictions are of any practical use at all."
(NB. I have not reread my earlier post in composing the above; searching out minor differences to seize on would be to the point only in exemplifying another type of rationalisation to add to those listed below.)
One strong thread running through the responses was to interpret the word "model" so as to make the claim false by definition - a redefinition blatantly at variance with all previous uses of the word on this very forum and its parent, OB. Responses of that form stopped the moment I pointed out that previous record of use.
Another thread was to change the above claim to something stronger and argue against that instead: the claim that models and prediction are never useful.
A third was to point to models elsewhere than in the examples of systems achieving purposes without models.
These reactions are invariable. I was not surprised to encounter them here.
A fourth reaction I've encountered (I'm not going to reexamine the comments to see if anyone here committed this) is to claim that it works, so there must be a model. Yet when pressed, they cannot point to it, cannot even say what claim they are making about the system. It's like hearing a Christian say "even if you're an atheist, if you did something good it must have been by receiving the grace of God".
The example that comes to mind here is run-and-tumble chemotaxis.
For those not familiar with it, it's how E. coli (and many other bacteria) get to places where the chemical environment favours them. From an algorithmic perspective, the bacterium senses the current pleasantness of its chemical environment (more food, less poison) as a scalar, compares that pleasantness to its general happiness level (also a scalar), is more likely to keep going straight if the former is higher and more likely to tumble if the latter is, and updates its happiness in the direction of the pleasantness.
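The scheme just described can be sketched in a few lines of code. This is my own toy rendering, not a biophysical model: the function name, the logistic choice rule, and the `adaptation_rate` parameter are all assumptions for illustration. The point it makes is the one at issue - the loop contains two scalars and a coin flip, and nowhere does anything resembling a model of the environment appear.

```python
import math
import random

def chemotaxis_step(pleasantness, happiness, adaptation_rate=0.1):
    """One step of the run-and-tumble scheme described above (a toy sketch).

    pleasantness: scalar sensed quality of the current environment
    happiness: internal reference level (the bacterium's only state)
    Returns ("run" or "tumble", updated happiness).
    """
    # More likely to keep going straight when the environment is more
    # pleasant than the internal reference; more likely to tumble otherwise.
    # (The logistic form here is my assumption, chosen for simplicity.)
    p_run = 1.0 / (1.0 + math.exp(-(pleasantness - happiness)))
    action = "run" if random.random() < p_run else "tumble"

    # Happiness drifts toward the current pleasantness, so in the long run
    # only *changes* in the environment influence behaviour.
    happiness += adaptation_rate * (pleasantness - happiness)
    return action, happiness
```

Repeatedly calling this while moving up or down a chemical gradient biases the random walk toward better regions, which is all the "purpose-achieving" there is.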