lukeprog comments on Reply to Holden on The Singularity Institute - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (213)
How do I know that supporting SI doesn't end up merely funding a bunch of movement-building leading to no real progress?
It seems to me that the premise of funding SI is that people smarter (or more appropriately specialized) than you will then be able to make discoveries that otherwise would be underfunded or wrongly-purposed.
I think the (friendly or not) AI problem is hard. So it seems natural for people to settle for movement-building or other support when they get stuck.
That said, some of the collateral output to date has been enjoyable.
Movement-building is progress, but...
I hear ya. If I'm your audience, you're preaching to the choir. Open Problems in Friendly AI — more in line with what you'd probably call "real progress" — is something I've been lobbying for since I was hired as a researcher in September 2011, and I'm excited that Eliezer plans to begin writing it in mid-August, after SPARC.
Such as?
The philosophy and fiction have been fun (though they hardly pay my bills).
I've profited from reading well-researched posts on the state of evidence-based (social) psychology, nutrition, motivation, and drugs, mostly from you, Yvain, Anna, gwern, and EY (and probably a dozen others whose names don't come to mind).
The bias/rationality stuff was fun to think about, but "ugh fields", for me at least, turned out to be the only thing that mattered. I imagine that's different for other types of people, though.
Additionally, the whole project seems to have connected people who previously didn't belong to any meaningful communities (I'm thinking of the various regional meetup clusters).