Karmakaiser comments on The Singularity Institute's Arrogance Problem - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I appreciate the tone and content of your comment. Responding to a few specific points...
There are many things we aren't (yet) good at. There are too many things for us to check the science on, test, and update. That said, our ability to collaborate successfully with volunteers has greatly improved in the last month, in part because we implemented some advice from the GWWC gang, who are very good at working with volunteers.
Eliezer strikes me as an easy candidate for having awesome powers. CFAI, while confusingly written, was way ahead of its time, and what Eliezer figured out in the early 2000s is slowly becoming a mainstream position accepted by, e.g., Google's AGI team. The Sequences are simply awesome. And he did manage to write the most popular Harry Potter fanfic of all time.
Finally, I suspect many people's doubts about SIAI's horsepower could best be addressed by arranging a single 2-hour conversation between them and Carl Shulman. But you'd have to visit the Bay Area, and we can't afford to have him do nothing but hold conversations, anyway. If you want a taste, you can read his comment history, which consists of him writing exactly the right thing to say in almost every comment he's made for the past several years.
Aaaaaaaaaand now Carl will slap me for setting expectations too high. But I don't think I'm exaggerating that much. Maybe I'll get by with another winky-face.
;)
I wasn't aware of Google's AGI team accepting CFAI. Is there a list of organizations that consider the Friendly AI issue important?
I wasn't even aware of "Google's AGI team"...
Update: please see here.