Qiaochu_Yuan comments on "Stupid" questions thread - Less Wrong Discussion

40 Post author: gothgirl420666 13 July 2013 02:42AM

Comments (850)

Comment author: Qiaochu_Yuan 13 July 2013 05:57:18AM *  15 points [-]

I can't tell whether you're complaining about the word as it applies to humans or as it applies to abstract agents. If the former, to a first-order approximation it cashes out to g factor and this is a perfectly well-defined concept in psychometrics. You can measure it, and it makes decent predictions. If the latter, I think it's an interesting and nontrivial question how to define the intelligence of an abstract agent; Eliezer's working definition, at least in 2008, was in terms of efficient cross-domain optimization, and I think other authors use this definition as well.

Comment author: Locaha 13 July 2013 09:44:48AM 0 points [-]

"Efficient cross-domain optimization" is just fancy words for "can be good at everything".

Comment author: [deleted] 13 July 2013 10:29:26AM 5 points [-]

Yes. And your point is?

Comment author: Locaha 13 July 2013 10:36:18AM 3 points [-]

This is the stupid questions thread.

Comment author: Viliam_Bur 14 July 2013 06:53:00PM 1 point [-]

That would be the inefficient cross-domain optimization thread.

Comment author: Locaha 15 July 2013 10:26:19AM 0 points [-]

Awesome. I need to use this as a swearword sometimes...

"You inefficient cross-domain optimizer, you!"

Comment author: RomeoStevens 13 July 2013 09:50:59AM 2 points [-]

achieves its value when presented with a wide array of environments.

Comment author: Locaha 13 July 2013 10:00:15AM 0 points [-]

This is again just different words for "can be good at everything". :-)

Comment author: RomeoStevens 13 July 2013 10:19:33AM 6 points [-]

When you ask someone to unpack a concept for you it is counter-productive to repack as you go. Fully unpacking the concept of "good" is basically the ultimate goal of MIRI.

Comment author: Locaha 13 July 2013 10:23:21AM 2 points [-]

I just showed that your redefinition does not actually unpack anything.

Comment author: RomeoStevens 13 July 2013 10:41:26AM *  8 points [-]

I feel that perhaps you are operating on a different definition of unpack than I am. For me, "can be good at everything" is less evocative than "achieves its value when presented with a wide array of environments", in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: imagine a set of many different non-trivial agents, all of whom are paperclip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
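The thought experiment above can be sketched in a few lines of code. This is only an illustrative toy, not anything from MIRI or the thread: the agents, environments, and scoring function are all made up for the example. Each agent is modeled as a function from an environment to a paperclip count, and "intelligence" is its average output across all environments:

```python
def evaluate_intelligence(agents, environments, copies=5):
    """Score each agent by its mean paperclip output across all
    environments (the cross-domain average from the comment above)."""
    scores = {}
    for name, agent in agents.items():
        total = 0
        for env in environments:
            for _ in range(copies):  # several copies per environment
                total += agent(env)  # paperclips produced in this run
        scores[name] = total / (len(environments) * copies)
    return scores

# Toy agents (hypothetical): one is excellent in a single domain,
# the other is merely decent everywhere.
agents = {
    "narrow":  lambda env: 10 if env == "factory" else 0,
    "general": lambda env: 4,
}
environments = ["factory", "desert", "ocean"]

scores = evaluate_intelligence(agents, environments)
```

On this measure the "general" agent wins (4 paperclips per run on average versus 10/3 for "narrow") despite its lower peak performance, which is the sense in which the definition rewards cross-domain rather than single-domain competence.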

Comment author: Lightwave 15 July 2013 09:54:06AM 0 points [-]

You can use the "can be good at everything" definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it's just using more technical terms to mean the same thing.