So you wrote an article that starts with a false premise, namely the implicit claim that the primary cause of radicalization is western police presence. It then proceeds to use numbers you appear to have taken from thin air in an argument whose only purpose appears to be signalling "rationality" and diverting attention from said false premise. It finally reaches a conclusion that's almost certainly false. This is supposed to promote rationality how?
Original thread here.
I believe the problem people have with this is that it isn't actually helpful at all. It's just a list of outgroups for people to laugh at, without any analysis of why those groups believe these things or what can be done to avoid falling into the same traps. Obviously a simple chart can't really encompass that level of explanation, so its actual value and meaningful content are limited.
Thinking about it some more, I think it could. The problem with the chart is that the categories are based on which outgroup the belief comes from. For a more rational version of the diagram, one could start by sorting the beliefs based on the type and strength of the evidence that convinced one the belief was "absurd".
Thus, one could have categories like:
no causal mechanism consistent with modern physics
the evidence that caused this a priori low probability hypothesis to be picked out from the set of all hypotheses has turned out to be faulty (possibly with reference to debunking)
this hypothesis has been scientifically investigated and found to be false (reference to studies, ideally also reference to replications of said studies)
Once one starts doing this, one would probably find that a number of the "irrational" beliefs are actually plausible, with little significant evidence either way.
Original thread here.
However "alternative" medicine cannot be established using the scientific method,
Care to explain what you mean by that assertion? You might want to start by defining what you mean by "alternative medicine".
The scientific method is reliable -> very_controversial_thing
And hardcoded:
P(very_controversial_thing)=0
Then the conclusion is that the scientific method isn't reliable.
The point I am trying to make is that if an AI axiomatically believes something which is actually false, this is likely to result in weird behavior.
I suspect it would react by adjusting its definitions so that very_controversial_thing doesn't mean what the designers think it means.
This can lead to very bad outcomes. For example, if the AI is hard-coded with P("there are differences between human groups in intelligence")=0, it might conclude that some or all of the groups aren't in fact "human". Consider the results if it is also programmed to care about "human" preferences.
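The problem with a hard-coded probability of zero is structural: under Bayes' rule, a zero prior is a fixed point, so no amount of evidence can ever move it, and the contradiction has to surface somewhere else (such as in redefined terms). A minimal sketch, assuming a simple binary-hypothesis Bayesian update (the function and numbers are illustrative, not from the original discussion):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(h | evidence) for a binary hypothesis h."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    # If the denominator is zero the evidence was "impossible";
    # return the prior unchanged rather than divide by zero.
    return numerator / denominator if denominator else prior

# An agent with a small but nonzero prior converges on h
# as evidence favoring h (likelihood ratio 9:1) accumulates.
p = 0.01
for _ in range(10):
    p = bayes_update(p, 0.9, 0.1)
# p is now very close to 1.

# An agent hard-coded with P(h) = 0 never moves, whatever it observes.
q = 0.0
for _ in range(10):
    q = bayes_update(q, 0.9, 0.1)
# q is still exactly 0.0.
```

The same evidence stream pushes the first agent's credence arbitrarily close to 1 while leaving the second's frozen at zero, which is why a hard-coded certainty forces the inconsistency into the AI's other beliefs or definitions rather than resolving it.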
Probably; that seems to be their analogue of concluding Tay is a "Nazi".
Actually, I am a bit surprised the post got two downvotes already. I was under the impression that LW would appreciate it, given that it's a site about rationality and all. I've been reading LW for quite some time, but I hadn't actually posted before. Did I do something horribly wrong?
This list falls into a common failure mode among "skeptics" attempting to compile a collection of "irrational nonsense": having no theory of what it means for something to be "irrational nonsense", and so falling back on a combination of the absurdity heuristic and the belief's social status.
It doesn't help that many of your labels for the "irrational nonsense" are vague enough that they could cover a number of ideas many of which are in fact correct.
Edit: In some cases I suspect you yourself don't know what they're supposed to mean. For example, you list "alternative medicine". What do you mean by this? The most literal interpretation is that all medical theories other than the current "consensus of the medical community" (if such a thing exists) are "irrational nonsense". Obviously you don't believe the current medical consensus is 100% correct. You probably mean something closer to "the irrational parts of alternative medicine are irrational"; this is tautologically true and useless. Incidentally, it is also true (and useless) that the irrational parts of the current "medical consensus" are irrational.
Original thread here.
And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as African ethnicity humans or vice versa)
The fact that this kind of mistake is considered more "unethical" than other types of mistakes tells us more about the quirks of the early-21st-century Americans doing the considering than about AI safety.
Probably; they said something about that in the Wired article. One can still get an idea of its level of intelligence.
Also, two of your recommendations are.
Of course, this is what western leaders have been doing for the past 15 years, and it doesn't seem to be working. It turns out Muslims are more inclined to get their theology from their own imams than from western politicians, and reaching out to "moderate" Muslim leaders results in Muslim leaders who are moderate in English but radical in Arabic.
Original thread here.