Rationalism before the Sequences
I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to Eliezer Yudkowsky's, with important connections to how his ideas developed. My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique.

My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching this essay. In 2005 he had even sent me a book manuscript to review that covered some of the topics of the Sequences.

My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well.

Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism.

Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright
Of course the word "might" is doing a lot of work here! Because there is no guaranteed happy solution, the best we can do is steer away from futures we absolutely know we do not want to be in, like a grinding totalitarianism rationalized by "We're saving you from the looming threat of killer AIs!"
" At least with the current system, corporations are able to test models before release". The history of proprietary software does not inspire any confidence at all that this will be done adequately, or even at all; in a fight between time-to-market and software quality, getting their firstest almost always wins. It's not reasonable to expect this to change simply because some people have strong opinions about AI risk.