The Singularity Institute desperately needs someone who is not me who can write cognitive-science-based material. Someone smart, energetic, able to speak to popular audiences, and with an excellent command of the science. If you’ve been reading Less Wrong for the last few months, you probably just thought the same thing I did: “SIAI should hire Lukeprog!” To support Luke Muehlhauser becoming a full-time Singularity Institute employee, please donate and mention Luke (e.g. “Yay for Luke!”) in the check memo or the comment field of your donation. If you donate by a method that doesn’t allow you to leave a comment, tell Louie Helm (louie@intelligence.org) that your donation was meant to help fund Luke.
Note that the Summer Challenge, which doubles all donations, runs until August 31st. (We're currently at $31,000 of $125,000.)
During his stint as a Singularity Institute Visiting Fellow, Luke has already:
- Co-organized and taught sessions for a well-received one-week Rationality Minicamp, and taught sessions for the nine-week Rationality Boot Camp.
- Written many helpful and well-researched articles for Less Wrong on metaethics, rationality theory, and rationality practice, including the 20-page tutorial A Crash Course in the Neuroscience of Human Motivation.
- Written a new Singularity FAQ.
- Published an intelligence explosion website for academics.
- ...and completed many smaller projects.
As a full-time Singularity Institute employee, Luke could:
- Author and co-author research papers and outreach papers, including:
  - A chapter already accepted to Springer’s The Singularity Hypothesis volume (co-authored with Louie Helm).
  - A paper on existential risk and optimal philanthropy, co-authored with a Columbia University researcher.
- Continue to write articles for Less Wrong on the theory and practice of rationality.
- Write a report that summarizes unsolved problems related to Friendly AI.
- Continue to develop his metaethics sequence, the conclusion of which will be a sort of Polymath Project for collaboratively solving open problems in metaethics relevant to FAI development.
- Teach courses on rationality and social effectiveness, as he has been doing for the Singularity Institute’s Rationality Minicamp and Rationality Boot Camp.
- Produce introductory materials to help bridge inferential gaps, as he did with the Singularity FAQ.
- Raise awareness of AI risk and the uses of rationality by giving talks at universities and technology companies, as he recently did at Halcyon Molecular.
If you’d like to help us fund Luke Muehlhauser to do all that and probably more, please donate now and include the word “Luke” in the comment field. And if you donate before August 31st, your donation will be doubled as part of the 2011 Summer Singularity Challenge.
Because "signing" comments is not customary here, doing so signals a certain aloofness or distance from the community, and thus can easily be interpreted as a passive-aggressive assertion of high status. (Especially coming from Luke, who I find emits such signals rather often -- he may want to be aware of this in case it's not his intention.)
I interpret Silas's "Not necessary" as roughly "Excuse me, but you're not on Mount Olympus writing an epistle to the unwashed masses on LW down below".
I saw lukeprog's signing messages as minor noise and possibly a finger macro developed long ago, so I stopped seeing the signature.
I'd be dubious about assuming one can be certain (where's the Bayesianism?) about what someone else is intending to signal, especially considering that it's doctrine here that one can't be certain of even one's own motivations. How much less certain should one be about other people's?
I would add still further uncertainty when one feels very sure about the motivation driving a behavior one finds annoying.
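To make the "where's the Bayesianism?" point concrete, here is a minimal sketch of a Bayesian update over two possible motivations for the signature. All of the priors and likelihoods below are made-up illustrative numbers, not anything estimated from this thread; the only point is that even evidence favoring one hypothesis should leave real probability on the alternative.

```python
# Toy Bayes-rule update with made-up numbers: how confident should an observer
# be that a comment signature is a deliberate status signal rather than an old
# habit? Both the priors and the likelihoods are illustrative assumptions only.

priors = {
    "deliberate_status_signal": 0.3,  # assumed prior
    "habitual_signature": 0.7,        # assumed prior
}

# Assumed probability of observing "signs every comment" under each hypothesis.
likelihoods = {
    "deliberate_status_signal": 0.8,
    "habitual_signature": 0.6,
}

# P(evidence) = sum over hypotheses of P(hypothesis) * P(evidence | hypothesis)
evidence = sum(priors[h] * likelihoods[h] for h in priors)

# Posterior for each hypothesis via Bayes' rule.
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for hypothesis, p in posteriors.items():
    print(f"P({hypothesis} | signs every comment) = {p:.2f}")

# With these numbers the posterior on "deliberate_status_signal" is only about
# 0.36 -- evidence nudges the estimate, but nowhere near certainty about
# someone else's motivation.
```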