This is the third time something related to this has been posted now (one was deleted), but you're the first to think that there's some possibility LWers might actually be involved (unless I've missed something). I'm surprised that you would think this, my estimate for p(Terrorism|LWer) is extraordinarily low.
http://lesswrong.com/r/discussion/lw/7mu/link_terrorists_target_ai_researchers/
Edit: apparently the other one wasn't deleted; I simply failed to notice it. My bad.
It was deleted - I posted it, then deleted it. I think you can still find deleted posts you've commented on (Dorikka was the only commenter on my post).
The term "LWer" implies agreement with the general positions taken by this community. P4wnc6's concern seems to be that, though this group disagrees with us, they may nonetheless read the opinions of the people they believe to be their enemies.
I think that even assuming that this group does actively read the publications of its opponents, it is rather unlikely that they would read to the depth of every discussion post on this one particular site (especially since there are more specifically transhumanist forums).
That's certainly fair. But I think I was somewhat assuming your second point, namely that they'd probably never read this even if they did cursorily follow LW.
But as has been pointed out by others, we're not really "opponents" of this group, ideologically. We agree with them on how dangerous UFAI is, which is specifically why we're interested in rationality and FAI.
I was actually going to come back to revise this. They mentioned the SIAI in passing in their manifesto. Given that, I'd say it's quite likely that they follow LW cursorily, even if they don't read everything here.
Doesn't the terrorist group operate primarily within Mexico? I'm not sure whether we have any Mexican Less Wrong members, but even if we do, I don't think any Less Wrong member would be likely to choose the targets they've attacked so far as significant sources of danger, even assuming they thought sending people bombs in the mail was a good idea.
This article depicts a recent attack on a computer scientist. If anyone who reads LessWrong happens to be involved with this, please stop. Sending letter bombs and hurting people is not the answer.
Quoting this post: " And it is triple ultra forbidden to respond to criticism with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever. "
This applies equally well to research that you don't approve of.
Note: I don't mean to imply that average LessWrong readers would be involved in this or sympathize with it... however, their arguments about A.I. and nanotech seem specific enough that it's not unreasonable to believe they might read this post.