To break up the awkward silence at the start of a recent Overcoming Bias meetup, I asked everyone present to tell their rationalist origin story - a key event or fact that played a role in how they first began to aspire to rationality. This worked surprisingly well (and I would recommend it for future meetups).
I think I've already told enough of my own origin story on Overcoming Bias: how I was digging in my parents' yard as a kid and found a tarnished silver amulet inscribed with Bayes's Theorem, and how I wore it to bed that night and dreamed of a woman in white, holding an ancient leather-bound book called Judgment Under Uncertainty: Heuristics and Biases (eds. D. Kahneman, P. Slovic, and A. Tversky, 1982)... but there's no need to go into that again.
So, seriously... how did you originally go down that road?
Added: For some odd reason, many of the commenters here seem to have had a single experience in common - namely, at some point, encountering Overcoming Bias... But I'm especially interested in what it takes to get the transition started - crossing the first divide. This would be very valuable knowledge if it can be generalized. If that did happen at OB, please try to specify what the crucial "Aha!" insight was (down to the specific post, if possible).
I'm probably also an ex-rationalist. Simply looking at the list of biases that I should really be correcting for in making a decision under uncertainty is rather intimidating. I'd like to be right - but do I really want to be right that much?
Frankly, the fact that I still maintain a cryonics membership is really status quo bias - I set that up before:

- Reading The Crack of a Future Dawn - downgrade the odds by 2X if uploads/ems dominate and are impoverished to the point of being on the edge of survivable subsistence.
- Watching the repugnant Leon Kass lead a cheerleading section for the grim reaper from the chairmanship of W's bioethics council. Extending human lifespans is a hard enough technical problem - but I hadn't imagined that there was going to be a whole faction on the side of death. Downgrade the odds by another 2X if there is a faction actively trying to keep cryonicists dead.
- Watching Watson perform impressively in an open problem domain. The traditional weakness of classical AI has been brittleness, breaking spectacularly when moved outside a very narrow domain. That firewall against ufAI has now been breached. Yet another 2X downgrade for this hazard gaining strength...
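Taken together - assuming, as a rough sketch, that the three 2X downgrades are independent and compound multiplicatively - the combined effect on the odds is:

\[
2 \times 2 \times 2 = 8\text{X}
\]

In other words, whatever odds I originally assigned to cryonics working out get cut to roughly an eighth of their former value.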