
Locaha comments on "Stupid" questions thread - Less Wrong Discussion

40 Post author: gothgirl420666 13 July 2013 02:42AM


Comments (850)


Comment author: Locaha 13 July 2013 05:44:56AM -1 points [-]

Why are we throwing the word "Intelligence" around like it actually means anything? The concept is so ill-defined it should be in the same set as "Love."

Comment author: Qiaochu_Yuan 13 July 2013 05:57:18AM *  15 points [-]

I can't tell whether you're complaining about the word as it applies to humans or as it applies to abstract agents. If the former, to a first-order approximation it cashes out to g factor and this is a perfectly well-defined concept in psychometrics. You can measure it, and it makes decent predictions. If the latter, I think it's an interesting and nontrivial question how to define the intelligence of an abstract agent; Eliezer's working definition, at least in 2008, was in terms of efficient cross-domain optimization, and I think other authors use this definition as well.

Comment author: Locaha 13 July 2013 09:44:48AM 0 points [-]

"Efficient cross-domain optimization" is just fancy words for "can be good at everything".

Comment author: [deleted] 13 July 2013 10:29:26AM 5 points [-]

Yes. And your point is?

Comment author: Locaha 13 July 2013 10:36:18AM 3 points [-]

This is the stupid questions thread.

Comment author: Viliam_Bur 14 July 2013 06:53:00PM 1 point [-]

That would be the inefficient cross-domain optimization thread.

Comment author: Locaha 15 July 2013 10:26:19AM 0 points [-]

Awesome. I need to use this as a swearword sometimes...

"You inefficient cross-domain optimizer, you!"

Comment author: RomeoStevens 13 July 2013 09:50:59AM 2 points [-]

achieves its value when presented with a wide array of environments.

Comment author: Locaha 13 July 2013 10:00:15AM 0 points [-]

This is again different words for "can be good at everything". :-)

Comment author: RomeoStevens 13 July 2013 10:19:33AM 6 points [-]

When you ask someone to unpack a concept for you it is counter-productive to repack as you go. Fully unpacking the concept of "good" is basically the ultimate goal of MIRI.

Comment author: Locaha 13 July 2013 10:23:21AM 2 points [-]

I just showed that your redefinition does not actually unpack anything.

Comment author: RomeoStevens 13 July 2013 10:41:26AM *  8 points [-]

I feel that perhaps you are operating on a different definition of unpack than I am. For me, "can be good at everything" is less evocative than "achieves its value when presented with a wide array of environments" in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: Imagine a set of many different non-trivial agents, all of whom are paper clip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
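A minimal sketch of that measurement procedure, with toy stand-ins for the agents and environments (the `run_episode` dynamics and the two example agents are invented for illustration, not any real proposal):

```python
import random

def run_episode(agent, env_seed, steps=100):
    """Run one agent in one simulated environment; return paperclips produced."""
    rng = random.Random(env_seed)
    clips = 0
    for _ in range(steps):
        # Trivially simple environment: the agent's policy maps an observed
        # state to an action, and an action yields a paperclip when it
        # matches this environment's (arbitrary) dynamics.
        state = rng.randrange(10)
        action = agent(state)
        if action == state % 3:  # this environment rewards matching state mod 3
            clips += 1
    return clips

def intelligence_score(agent, env_seeds):
    """Average paperclip yield across a variety of environments."""
    return sum(run_episode(agent, seed) for seed in env_seeds) / len(env_seeds)

# Two toy "agents": one adapted to the environments' structure, one not.
smart = lambda state: state % 3
dumb = lambda state: 0

seeds = range(20)
assert intelligence_score(smart, seeds) > intelligence_score(dumb, seeds)
```

The point is only that "achieves its value across environments" reduces to a number you can compute and compare, which "can be good at everything" does not immediately suggest.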

Comment author: Lightwave 15 July 2013 09:54:06AM 0 points [-]

You can use the "can be good at everything" definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it's just using more technical terms to mean the same thing.

Comment author: bogdanb 13 July 2013 06:31:08PM *  4 points [-]

Because it actually does mean something, even if we don’t really know exactly what, and the borders are fuzzy.

When you hear that X is more intelligent than Y, you learn some information, even though you didn’t find out exactly what X can do that Y can’t.

Note that we also use words like “mass” and “gravity” and “probability”; even though we know lots about each, it’s not at all clear what they are (or, like in the case of probability, there are conflicting opinions).

Comment author: ChristianKl 13 July 2013 12:17:32PM 4 points [-]

All language is vague. Sometimes vague language hinders us in understanding what another person is saying and sometimes it doesn't.

Comment author: Kaj_Sotala 13 July 2013 03:07:56PM 3 points [-]

Legg & Hutter have given a formal definition of machine intelligence. A number of authors have expanded on it and fixed some of its problems: see e.g. this comment as well as the parent post.
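For reference, the Legg–Hutter definition (as I understand it) scores an agent $\pi$ by its expected performance across all computable reward-bearing environments, weighted by each environment's simplicity:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected cumulative reward agent $\pi$ obtains in $\mu$. Simpler environments get more weight, so an agent can't game the measure by specializing in a few exotic worlds.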

Comment author: gothgirl420666 13 July 2013 06:25:35AM 3 points [-]

I'm not really sure why you use "love" as an example. I don't know that much about neurology, but my understanding is that the chemical makeup of love and its causes are pretty well understood. Certainly better understood than intelligence?

Comment author: Locaha 13 July 2013 06:49:47AM 2 points [-]

I think what you talk about here is certain aspects of sexual attraction. Which are, indeed, often lumped together into the concept of "Love". Just like a lot of different stuff is lumped together into the concept of "Intelligence".

Comment author: [deleted] 13 July 2013 10:34:33AM 5 points [-]

The fact that English uses the same word for several concepts (which had different names in, say, ancient Greek) doesn't necessarily mean that we're confused about neuropsychology.

Comment author: RomeoStevens 13 July 2013 09:54:54AM 6 points [-]

This seems like matching "chemistry" to "sexual" in order to maintain the sacredness of love rather than to actually get to beliefs that cash out in valid predictions. People can reliably be made to fall in love with each other given the ability to manipulate some key variables. This should not make you retch with horror any more than the Stanford prison experiment already did. Alternatively, update on being more horrified by the SPE than you were previously.

Comment author: NancyLebovitz 13 July 2013 11:27:31AM 1 point [-]

People can reliably be made to fall in love with each other given the ability to manipulate some key variables.

?

Comment author: RomeoStevens 13 July 2013 12:07:03PM 1 point [-]

Lots of eye contact is sufficient if the people are both single, of similar age, and with a person of their preferred gender. But even those conditions could be overcome given some chemicals to play with.

Comment author: drethelin 13 July 2013 01:16:33PM 13 points [-]

[citation needed]

Comment author: TimS 13 July 2013 05:59:28AM 3 points [-]

There seems to be a thing called "competence" for particular abstract tasks. Further, there are kinds of tasks where competence in one task generalizes to the whole class of tasks. One thing we try to measure by intelligence is an individual's level of generalized abstract competence.

I think part of the difficulties with measuring intelligence involve uncertainty about what tasks are within the generalization class.