Jack comments on Open Thread: February 2010, part 2 - Less Wrong

10 Post author: CronoDAS 16 February 2010 08:29AM


Comment author: Jack 16 February 2010 11:46:51PM 0 points [-]

They don't mean the same thing at all, and the Wikipedia entries seem to reflect that.

Comment author: timtyler 16 February 2010 11:51:55PM 2 points [-]

Compare with here, for example, though:

http://www.vetta.org/definitions-of-intelligence/

The "AI researcher" definitions in particular seem to be much the same as the definition of instrumental rationality.

Comment author: Jack 17 February 2010 12:29:40AM 0 points [-]

I had a pretty long comment about Jürgen Habermas, but instead I'll just say:

I'm not really sure the term means anything outside the assumptions and framework of Critical Theory, unless you're talking about a totally different thing. And given those assumptions and that framework, you can't possibly say instrumental rationality is the same thing as intelligence, since the whole coinage exists to distinguish it from communicative rationality. But the framework this community operates under is so far removed from Critical Theory that I don't even know how to talk about it here.

My guess is not many people here recognize any other kind of rationality and so your question just becomes: are rationality and intelligence the same thing?

Comment author: timtyler 17 February 2010 09:18:31PM *  0 points [-]

"Rationality" seems to be most frequently used here to mean "epistemic rationality", not "instrumental rationality". It seems to be one of this community's oddities. ...and yes, the "critical theory" term.

Comment author: SilasBarta 16 February 2010 11:56:23PM 0 points [-]

Semi-OT: The problem with the AI researchers' definitions of intelligence is that they are written as if there can be some kind of perfect intelligence, yet they end up in contradictions like, "I've developed the maximally intelligent being, but it's completely useless."

(Mr. Vetta (Shane Legg) and Marcus Hutter's AIXI, I'm looking in your general direction here.)

Comment author: timtyler 17 February 2010 09:31:43AM 2 points [-]

The idea of universal intelligence is not a bug, it is a feature. It is mainly due to Legg/Hutter that we have that concept in the first place - and it is a fine one.

Comment author: SilasBarta 17 February 2010 10:29:10PM 1 point [-]

Not really. If you claim that a) intelligence is useful, and b) a maximally intelligent being that you have invented is useless ... you made a mistake somewhere.

And their work is just a formalization of Solomonoff induction -- the difficulty is all in the derivation. People knew in advance that you can find the shortest theory fitting the data by fixing a language and then iterating up from the shortest expressible program until you find one that matches the data. It's just that this isn't computable, which, for now, means useless -- and the exponential-time approximation isn't much better.
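
(The enumeration scheme described above can be sketched in a few lines. This is a toy, not Solomonoff induction proper: the "interpreter" here -- a bitstring treated as a repeating pattern -- and the step limit are stand-in assumptions of mine, since with a real universal machine the search is uncomputable due to the halting problem.)

```python
from itertools import count, product

def toy_run(program, n_bits, step_limit=1000):
    """Toy 'interpreter': treat the program bitstring as a repeating
    pattern and emit its first n_bits bits. A stand-in for a real
    universal machine; any total toy semantics would do here."""
    if not program:
        return None
    out, steps, i = [], 0, 0
    while len(out) < n_bits:
        out.append(program[i % len(program)])
        i += 1
        steps += 1
        if steps > step_limit:
            return None
    return out

def shortest_program_matching(data):
    """Length-ordered search for the shortest toy program whose output
    matches `data`. The step limit in toy_run is what makes this toy
    version terminate; with a real universal machine it wouldn't."""
    for length in count(1):
        for bits in product([0, 1], repeat=length):
            if toy_run(list(bits), len(data)) == list(data):
                return list(bits)

# A periodic sequence is "explained" by its shortest repeating unit:
shortest_program_matching([1, 0, 1, 0, 1, 0])  # -> [1, 0]
```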

Can you identify any working, useful system based on AIXI?

Comment author: timtyler 17 February 2010 11:03:32PM 0 points [-]

I don't think you have a reference for b).

Solomonoff induction is concerned with sequence prediction - not decision theory. It is not a trivial extra step.

Comment author: SilasBarta 17 February 2010 11:06:35PM *  0 points [-]

I don't think you have a reference for b).

Okay, I don't have a reference for them admitting that AIXI is useless -- but they acknowledge it's uncomputable, and they don't have working code implementing it on an actual problem in a way that beats existing "not intelligent" methods.

Solomonoff induction is concerned with sequence prediction - not decision theory. It is not a trivial extra step.

AIXI is also primarily concerned with sequence prediction and not decision theory.

Comment author: timtyler 17 February 2010 11:14:17PM 0 points [-]

"AIXI is a universal theory of sequential decision making akin to Solomonoff's celebrated universal theory of induction. Solomonoff derived an optimal way of predicting future data, given previous observations, provided the data is sampled from a computable probability distribution. AIXI extends this approach to an optimal decision making agent embedded in an unknown environment."

Comment author: SilasBarta 17 February 2010 11:31:55PM 0 points [-]

Okay, you're right, my apologies. The point about uncomputability and uselessness of the decision theory still stands.

Comment author: timtyler 18 February 2010 09:34:08AM 1 point [-]

Right - but they know that. AIXI is a self-confessed abstract model.

IMO, AIXI does have some marketing issues. For instance:

"The book also presents a preliminary computable AI theory. We construct an algorithm AIXItl, which is superior to any other time t and space l bounded agent."

That seems to be an inaccurate description, to me.