AndrewKemendo comments on A Nightmare for Eliezer - Less Wrong

0 Post author: Madbadger 29 November 2009 12:50AM


Comment author: AndrewKemendo 29 November 2009 02:59:30AM 0 points [-]

I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.

So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.

Comment author: DanArmak 29 November 2009 05:42:58PM 3 points [-]

Intelligence isn't a magical single-dimensional quality. It may be generally smarter than EY, but not have the specific FAI theory that EY has developed.

Comment author: AndrewKemendo 30 November 2009 01:50:06AM *  0 points [-]

Any AGI will have all the dimensions required for human-level or greater intelligence. If it is indeed smarter, then it will be able to figure the theory out itself if the theory is obviously correct, or find a more efficient way to obtain it.

Comment author: DanArmak 30 November 2009 05:14:12AM 2 points [-]

Well, maybe the theory is non-obviously correct.

The AI called EY because it's stuck while trying to grow, so it hasn't achieved its full potential yet. It should be able to comprehend any theory a human EY can comprehend; but I don't see why we should expect it to be able to independently derive any theory a human could ever derive in their lifetimes, in (small) finite time, and without all the data available to that human.

Comment author: Johnicholas 29 November 2009 05:45:43PM 0 points [-]

Yay multidimensional theories of intelligence!

Comment author: Madbadger 29 November 2009 03:14:57AM 1 point [-]

It's a seed AGI in the process of growing. Whether "smarter than Yudkowsky" => "can resolve own problems" is still an open problem 8-).

Comment author: akshatrathi 29 November 2009 10:52:50PM 2 points [-]

"Uhh.."

"You might want to get some coffee."

I find this the most humorous bit in the post. Smarter than Yudkowsky? Maybe.