Jonathan_Graehl comments on Reply to Holden on The Singularity Institute - Less Wrong

46 Post author: lukeprog 10 July 2012 11:20PM


Comment author: JaneQ 12 July 2012 07:18:33AM 4 points

It seems to me that the premise of funding SI is that people smarter (or more appropriately specialized) than you will then be able to make discoveries that otherwise would be underfunded or wrongly-purposed.

But then SI would have to have a dramatically better idea of what research should be funded to protect mankind than every other group capable of either performing such research or employing people to perform it.

Muehlhauser has stated that SI should be compared to alternative organizations working on AI risk mitigation, but that seems like an overly narrow comparison, reliant on the presumption that not working on AI risk mitigation now is not itself an alternative.

For example, 100 years ago it would seem to have been too early to fund work on AI risk mitigation; that may still be the case. As time goes on, one would naturally expect opinions to form a distribution, with the first organizations offering AI risk mitigation popping up earlier than the time at which such work becomes effective. When we look into the past through the goggles of notoriety, we don't see all the failed early starts.

Comment author: Jonathan_Graehl 12 July 2012 11:49:51PM 1 point

100 years ago it would seem to have been too early to fund work on AI risk mitigation

Hilarious, and an unfairly effective argument. I'd like to know such people, who can entertain an idea that will still be tantalizing yet unresolved a century out.

that seems like an overly narrow comparison, reliant on the presumption that not working on AI risk mitigation now is not itself an alternative.

Yes. I agree with everything else, too, with the caveat that SI is not the first organization to draw attention to AI risk - not that you said so.