Polymeron comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's a fine line though, isn't it? Saying "huh, looks like we have much to learn, here's what we're already doing about it" is honest and constructive, but sends a signal of weakness and defensiveness to people not bent on a zealous quest for truth and self-improvement. Saying "meh, that guy doesn't know what he's talking about" would send the stronger social signal, but would not be constructive to the community actually improving as a result of the criticism.
Personally, I prefer plunging ahead with the first approach, both in the abstract (for reasons I won't elaborate on) and especially in this particular case. SI is not in a position where its every word is scrutinized; it would actually be a huge win if it got there. And if/when it does, there's a heck of a lot more damning stuff that can be used against it than an admission of past incompetence.
I do not see why this should be a motivating factor for SI; to my knowledge, they advertise primarily to people who would endorse a zealous quest for truth and self-improvement.
That subset of humanity holds considerably less power, influence, and visibility than its counterpart, which commands resources that could be directed to AI research and for the most part aren't. Or in three words: other people matter. Assuming otherwise would be a huge mistake.
I took Wei_Dai's remarks to mean that Luke's response is public, and so can reach the broader public sooner or later, and that, examined in a broader context, it gives off the wrong signal. My response was that this is largely irrelevant, not because other people don't matter, but because other factors outweigh this one.