Warrigal comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.
I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mind that the advice to rush ahead and write code has told quite a lot of people that they don't in fact know which code to write, but has not actually produced anyone who does know which code to write. I know I can't sit down and write an FAI at this time; I don't need to spend five years writing code in order to collapse my pride.
The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.
No? I've been thinking of both problems as essentially problems of rationality. Once you have a sufficiently rational system, you have a Friendliness-capable, proto-intelligent system.
And it happens that I have a copy of "Do the Right Thing: Studies in Limited Rationality", but I'm not reading it, even though I feel like it will solve my entire problem perfectly. I wonder why this is.