From my previous reading, it seems there's a non-negligible proportion of the population on each side: people who can read much more easily with light-on-dark or with dark-on-light, and who have trouble with the respective reverse.
Personally, if the room I'm in is very brightly lit, I tend to prefer dark-on-light, but otherwise, under most normal lighting conditions or dim light (like in my apartment), I prefer light-on-dark. In both cases, the preference comes down to two things: eye strain during prolonged reading, and how quickly I can find what I'm looking for when "seeking" (i.e. locating the spot where I paused reading, or a specific thing in a piece of code, or something similar).
(Also, an anecdotal quip re the above link: that thing about the refresh rate isn't just random for me. If the refresh rate of a traditional monitor drops anywhere below 50 Hz, I will reliably get a harsh migraine within an hour, and if I'm also reading dark-on-light text on it, that drops to within ten minutes. LCD/LED displays tend to be less punishing; I've never had this problem with them even as low as 30 Hz.)
Summary: Intelligence Explosion Microeconomics (pdf) is a 40,000-word paper taking some initial steps toward tackling the key quantitative issue in the intelligence explosion, "reinvestable returns on cognitive investments": what kind of returns can you get from an investment in cognition, can you reinvest those returns to make yourself even smarter, and does this process die out or blow up? This can be thought of as the compact and hopefully more coherent successor to the AI Foom Debate of a few years back.
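The "die out or blow up" question can be illustrated with a deliberately toy recursion. This is my own sketch, not a model from the paper; the function, the returns exponent `p`, and all numbers here are hypothetical. Treat capability as a quantity reinvested into its own growth, and vary how steeply returns scale:

```python
# Toy illustration (not from the paper) of reinvestable returns on cognition.
# Each step, capability I is reinvested: I <- I + k * I**p.
# Whether the process levels off or runs away hinges on the exponent p.

def reinvest(p, k=0.1, steps=60, i0=1.0, cap=1e12):
    """Iterate the toy reinvestment rule; cap marks 'blown up'."""
    i = i0
    for _ in range(steps):
        i += k * i ** p
        if i > cap:
            return cap  # runaway growth: report the cap
    return i

sublinear = reinvest(p=0.5)    # diminishing returns: growth stays modest
superlinear = reinvest(p=1.5)  # compounding returns: hits the cap
```

With `p=0.5` the same 60 steps leave capability in the double digits, while `p=1.5` shoots past any finite cap; the interesting empirical question the paper poses is which regime actual cognitive reinvestment resembles.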
(Sample idea you haven't heard before: The increase in hominid brain size over evolutionary time should be interpreted as evidence about increasing marginal fitness returns on brain size, presumably due to improved brain wiring algorithms; not as direct evidence about an intelligence scaling factor from brain size.)
I hope that the open problems posed therein inspire further work by economists or economically literate modelers, interested specifically in the intelligence explosion qua cognitive intelligence rather than non-cognitive 'technological acceleration'. MIRI has an intended-to-be-small-and-technical mailing list for such discussion. In case it's not clear from context, I (Yudkowsky) am the author of the paper.
This topic was originally intended to be a sequence in Open Problems in Friendly AI, but further work produced something compacted beyond where it could be easily broken up into subposts.
Outline of contents:
1: Introduces the basic questions and the key quantitative issue of sustained reinvestable returns on cognitive investments.
2: Discusses the basic language for talking about the intelligence explosion, and argues that we should pursue this project by looking for underlying microfoundations, not by pursuing analogies to allegedly similar historical events.
3: Goes into detail on what I see as the main arguments for a fast intelligence explosion; this section and its subsections constitute the bulk of the paper.
4: A tentative methodology for formalizing theories of the intelligence explosion - a project of formalizing possible microfoundations and explicitly stating their alleged relation to historical experience, such that some possibilities can allegedly be falsified.
5: Which open sub-questions seem both high-value and possibly answerable.
6: Formally poses the Open Problem and mentions what it would take for MIRI itself to directly fund further work in this field.