Vladimir_Nesov comments on What big goals do we have? - Less Wrong

10 Post author: cousin_it 19 January 2010 04:35PM




Comment author: cousin_it 19 January 2010 08:14:40PM *  6 points

A quote explaining why I don't do that either:

The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack.

-- Richard Hamming, "You and Your Research"

Comment author: Vladimir_Nesov 20 January 2010 12:18:08AM 5 points

For now, a valid "attack" on Friendly AI is to actually research the question, given that it wasn't seriously thought about before. For time travel or antigravity, we don't just lack an attack; we have a pretty good idea of why it won't be possible to implement them now or ever, and the world won't end if we don't develop them. For Friendly AI, there is no such clarity or security.