[Link] Intelligence, a thermodynamic POV

-7 Post author: Thomas 21 July 2013 12:44PM

Comments (5)

Comment author: RichardKennaway 21 July 2013 04:49:11PM 6 points [-]

Previously on LessWrong, twice.

Comment author: Thomas 21 July 2013 05:20:45PM -2 points [-]

Yes, I see that now. Still, it is an important open question, and the whole raison d'être of MIRI, FAI and so on hangs on it.

If they (the authors) are basically right, then it's a game changer. I think they are.

Comment author: RichardKennaway 21 July 2013 10:48:35PM 1 point [-]

If they (the authors) are basically right, then it's a game changer.

This is true of all new ideas about A(G)I, including past ones that fizzled, which is all of them so far. One might conclude that this one is likely to fizzle, except that there are anthropic issues about alternate histories in which one of these advances foomed instead of fizzling. I am not sure how to handle that.

Is there any reason to think that this new idea has something that all previous ideas lacked?

Comment author: Thomas 22 July 2013 08:44:50AM *  -2 points [-]

I wouldn't say that all those ideas fizzled. They brought us some great results; let's not forget to give credit to those who deserve it.

But if you want to understand seeing, you have to understand optics. If you want to understand motion, you have to understand mechanics. You have to understand the underlying physics; biology, physiology or anthropology alone is not enough. The same goes for flying: it is aerodynamics that makes flight possible.

Animals were always just users of the underlying physics, and clumsy users in fact, not the inventors of breathing (oxidation), swimming (moving through liquids) and so on. Evolution carved animal shapes into the surrounding physics.

It is likely that one has to understand the physics behind thinking to really understand and replicate it.

I won't go into details here, such as whether being able to maximize entropy is really necessary and sufficient for a process to be intelligent. Perhaps it is enough to maintain some subset of possible futures rather than the whole set, or some other reconditioning of the idea.

But it very likely takes thermodynamics to really understand the matter. "Cognition" is not a very fruitful term, as many others are not; it is the wrong level at which to describe the problem, I think.
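The entropy-maximization idea being discussed can be illustrated with a toy sketch. Everything below (the one-dimensional world, the random rollout policy, the parameters) is a made-up minimal example for illustration, not the actual formalism of the linked work: an agent estimates, by Monte Carlo rollouts, the entropy of the states it could reach after each available action, and greedily picks the action that keeps its future most varied.

```python
import random
from collections import Counter
from math import log2

# Hypothetical toy world: a 1-D line of positions where position 0 is an
# absorbing trap. Once trapped, all possible futures collapse to one.
ACTIONS = (-1, +1)

def step(state, action):
    """Advance the toy world by one step."""
    if state == 0:
        return 0  # absorbing trap: every future from here is identical
    return max(0, state + action)

def future_state_entropy(state, action, n_rollouts=500, horizon=5):
    """Monte-Carlo estimate of the entropy (in bits) of the state reached
    after taking `action` and then acting randomly for `horizon - 1` steps."""
    outcomes = Counter()
    for _ in range(n_rollouts):
        s = step(state, action)
        for _ in range(horizon - 1):
            s = step(s, random.choice(ACTIONS))
        outcomes[s] += 1
    return -sum((c / n_rollouts) * log2(c / n_rollouts)
                for c in outcomes.values())

def pick_action(state):
    """Greedy entropy-maximizing agent: choose the action whose
    resulting distribution over future states has the most entropy."""
    return max(ACTIONS, key=lambda a: future_state_entropy(state, a))
```

From state 1 such an agent steps away from the trap, since stepping into it would reduce the entropy of its reachable futures to exactly zero; this is the sense in which "keeping options open" falls out of entropy maximization.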

Comment author: Adele_L 21 July 2013 01:07:30PM 0 points [-]

Seems like this could be another basic AI drive, but would still be orthogonal to most of human value.