Lamp2
Lamp2 has not written any posts yet.

So you wrote an article that starts with a false premise, namely the implicit claim that the primary cause of radicalization is western police presence. It then proceeds to use numbers you appear to have taken from thin air in an argument whose only purpose appears to be signalling "rationality" and diverting attention from said false premise. It finally reaches a conclusion that's almost certainly false. This is supposed to promote rationality how?
Original thread here.
I believe the problem people have with this is that it isn't actually helpful at all. It's just a list of outgroups for people to laugh at, without any sort of analysis of why they believe this or what can be done to avoid falling into the same traps. Obviously a simple chart can't really encompass that level of explanation, so its actual value or meaningful content is limited.
Thinking about it some more, I think it could. The problem with the chart is that the categories are based on which outgroup the belief comes from. For a more rational version of the diagram, one could start by sorting the beliefs based on...
However "alternative" medicine cannot be established using the scientific method,
Care to explain what you mean by that assertion? You might want to start by defining what you mean by "alternative medicine".
Suppose the AI believes:
The scientific method is reliable -> very_controversial_thing
and is hardcoded with:
P(very_controversial_thing) = 0
Then the conclusion is that the scientific method isn't reliable.
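A toy sketch of that forced update (hypothetical code, not any actual AI architecture; the function name and numbers are made up for illustration):

```python
# If the agent holds the implication A -> B, probability theory requires
# P(A) <= P(B). Hardcoding P(B) = 0 therefore drags P(A) down to 0 as well.

def update_premise_given_hardcoded_zero(p_premise: float) -> float:
    """Largest value of P(A) consistent with A -> B and P(B) hardcoded to 0."""
    p_hardcoded_conclusion = 0.0  # P(very_controversial_thing), fixed by the designers
    return min(p_premise, p_hardcoded_conclusion)

# A = "the scientific method is reliable"; the agent starts out nearly certain of it.
print(update_premise_given_hardcoded_zero(0.99))  # -> 0.0
```

In other words, the only coherent way to keep the hardcoded zero is to give up the premise.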
The point I am trying to make is that if an AI axiomatically believes something which is actually false, then this is likely to result in weird behavior.
I suspect it would react by adjusting its definitions so that very_controversial_thing doesn't mean what the designers think it means.
This can lead to very bad outcomes. For example, if the AI is hardcoded with P("there are differences between human groups in intelligence") = 0, it might conclude that some or all of the groups aren't in fact "human". Consider the results if it is also programmed to care about "human" preferences.
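A hypothetical sketch of that definition-shifting failure mode (the group labels and values are invented for illustration): an agent whose "no differences among human groups" belief is hardcoded to certainty can restore consistency with contrary observations by shrinking which groups count as "human", rather than by revising the hardcoded belief.

```python
from itertools import combinations

# Toy measurements of some trait; the group labels and numbers are made up.
observations = {"group_a": 1.0, "group_b": 2.0, "group_c": 1.0}

def largest_consistent_human_set(obs):
    """Largest set of groups the agent can still label 'human' without violating
    the hardcoded constraint 'no differences among human groups on this trait'."""
    groups = list(obs)
    for size in range(len(groups), 0, -1):          # prefer keeping as many groups as possible
        for subset in combinations(groups, size):
            if len({obs[g] for g in subset}) == 1:  # all measured values identical
                return list(subset)
    return []

print(largest_consistent_human_set(observations))  # -> ['group_a', 'group_c']
```

The hardcoded belief is never questioned; the extension of "human" is what gives way, which is the bad outcome described above.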
Probably, that seems to be their analogue of concluding Tay is a "Nazi".
Actually I am a bit surprised: the post got two downvotes already. I was under the impression that LW would appreciate it, given it being a site about rationality and all. I've been reading LW for quite some time but I hadn't actually posted before; did I do something horribly wrong or anything?
This list falls into a common failure mode among "skeptics" attempting to make a collection of "irrational nonsense": namely, having no theory of what it means for something to be "irrational nonsense" and so falling back on a combination of the absurdity heuristic and the belief's social status.
It doesn't help that many of your labels for the "irrational nonsense" are vague enough...
And if image recognition software started doing some kind of unethical recognition (I can't be bothered to find it, but something happened where image recognition software started recognising gorillas as African ethnicity humans or vice versa)
The fact that this kind of mistake is considered more "unethical" than other types of mistakes tells us more about the quirks of the early-21st-century Americans doing the considering than about AI safety.
Probably, they said something about that in the Wired article. One can still get an idea of its level of intelligence.
Also, two of your recommendations are.
Of course, this is what western leaders have been doing for the past 15 years, and it doesn't seem to be working. It turns out Muslims are more inclined to get their theology from their own imams than from western politicians, and reaching out to "moderate" Muslim leaders results in Muslim leaders who are moderate in English but radical in Arabic.
Original thread here.