The Singularity Institute desperately needs someone other than me who can write cognitive-science-based material. Someone smart, energetic, able to speak to popular audiences, and with an excellent command of the science. If you’ve been reading Less Wrong for the last few months, you probably just thought the same thing I did: “SIAI should hire Lukeprog!” To support Luke Muehlhauser becoming a full-time Singularity Institute employee, please donate and mention Luke (e.g. “Yay for Luke!”) in the check memo or the comment field of your donation - or, if you donate by a method that doesn’t allow you to leave a comment, tell Louie Helm (louie@intelligence.org) that your donation was meant to help fund Luke.
Note that the Summer Challenge, which doubles all donations, runs until August 31st. (We're currently at $31,000 of $125,000.)
During his stint as a Singularity Institute Visiting Fellow, Luke has already:
- Co-organized a well-received one-week Rationality Minicamp and taught sessions for both it and the nine-week Rationality Boot Camp.
- Written many helpful and well-researched articles for Less Wrong on metaethics, rationality theory, and rationality practice, including the 20-page tutorial A Crash Course in the Neuroscience of Human Motivation.
- Written a new Singularity FAQ.
- Published an intelligence explosion website for academics.
- ...and completed many smaller projects.
As a full-time Singularity Institute employee, Luke could:
- Author and co-author research and outreach papers, including:
  - A chapter already accepted to Springer’s The Singularity Hypothesis volume (co-authored with Louie Helm).
  - A paper on existential risk and optimal philanthropy, co-authored with a Columbia University researcher.
- Continue to write articles for Less Wrong on the theory and practice of rationality.
- Write a report that summarizes unsolved problems related to Friendly AI.
- Continue to develop his metaethics sequence, the conclusion of which will be a sort of Polymath Project for collaboratively solving open problems in metaethics relevant to FAI development.
- Teach courses on rationality and social effectiveness, as he has been doing for the Singularity Institute’s Rationality Minicamp and Rationality Boot Camp.
- Produce introductory materials to help bridge inferential gaps, as he did with the Singularity FAQ.
- Raise awareness of AI risk and the uses of rationality by giving talks at universities and technology companies, as he recently did at Halcyon Molecular.
If you’d like to help us fund Luke Muehlhauser to do all that and probably more, please donate now and include the word “Luke” in the comment field. And if you donate before August 31st, your donation will be doubled as part of the 2011 Summer Singularity Challenge.
I actually have no interest in supporting your research. Every time I ask a clarifying question about any of your claims, you get extremely defensive and fail to answer it, which suggests a poor understanding of what you're trying to present results on. Also, every piece of advice I've followed falls woefully short of what you claim it does, and I don't seem to be alone here (on either point). I think your contributions are overrated.
(This is a large part of why I put such a low prior on claims of the minicamp's phenomenal success and was so skeptical of the report.)
I don't claim to have contributed more research to LW, of course, but when I do present research I make sure I understand it.
In fairness, your more recent work doesn't seem to be subject to any of this, so I could very well change my opinion on this.
It may be OK in poker to try calling someone's bluff with a bluff of your own, but it's pretty rude in real life.