
brazil84 comments on Open thread, 11-17 March 2014 - Less Wrong Discussion

3 Post author: David_Gerard 11 March 2014 10:45PM




Comment author: brazil84 16 March 2014 06:35:44PM 1 point [-]

Is this guy a crank? He seems to be claiming that he has found the E=mc^2 for intelligence, artificial or otherwise.

http://www.exponentialtimes.net/videos/equation-intelligence-alex-wissner-gross-tedxbeaconstreet

My alarm bells are going off, but I am interested to hear people's thoughts.

Comment author: Douglas_Knight 17 March 2014 12:10:44AM *  4 points [-]

Previous discussion also. He has been mentioned several other times without much discussion.

Comment author: RichardKennaway 17 March 2014 02:40:19PM *  1 point [-]

I'm sure he's not a crank. Which leaves the important question: is he right? I don't know, but if he is, it's highly relevant to the question of FAI, and suggests that the MIRI approach of considering an AI as a logical system to be designed to be safe may be barking up the wrong tree. From an interview with Wissner-Gross:

“The conventional storyline [of SF about AI],” he says, “has been that we would first build a really intelligent machine, and then it would spontaneously decide to take over the world.”

But one of the key implications of Wissner-Gross’s paper is that this long-held assumption may be completely backwards — that the process of trying to take over the world may actually be a more fundamental precursor to intelligence, and not vice versa.

...

“Our causal entropy maximization theory predicts that AIs may be fundamentally antithetical to being boxed,” he says. “If intelligence is a phenomenon that spontaneously emerges through causal entropy maximization, then it might mean that you could effectively reframe the entire definition of Artificial General Intelligence to be a physical effect resulting from a process that tries to avoid being boxed.”

But as I said on a previous occasion when this came up, the outside view here is that so far it's just a big idea and toy demos.
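To make the "big idea" concrete, here is a toy sketch of what an agent driven by something like causal entropy maximization might look like. This is illustrative only, not Wissner-Gross's actual algorithm: it uses a 1-D gridworld and a crude proxy (counting distinct reachable states within a horizon) in place of the path-integral entropy in his paper.

```python
# Toy sketch (NOT the actual causal entropic forces algorithm):
# an agent in a bounded 1-D grid greedily picks the action that
# leaves the largest number of distinct future states reachable
# within a fixed horizon.

ACTIONS = (-1, 0, +1)  # step left, stay, step right

def step(state, action, lo=0, hi=10):
    """Move within the walls of a bounded 1-D grid."""
    return min(hi, max(lo, state + action))

def reachable(state, horizon):
    """All states reachable from `state` within `horizon` steps."""
    frontier = {state}
    seen = set(frontier)
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in ACTIONS}
        seen |= frontier
    return seen

def entropic_action(state, horizon=3):
    """Pick the action that maximizes the count of reachable futures."""
    return max(ACTIONS, key=lambda a: len(reachable(step(state, a), horizon)))

# An agent pressed against a wall is pushed toward the open interior,
# where more futures remain accessible:
print(entropic_action(0))  # prints 1 (step away from the left wall)
```

The "anti-boxing" intuition from the quote falls out directly: states near a wall (a "box") have fewer reachable futures, so the maximizer moves away from them.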

Comment author: brazil84 20 March 2014 07:11:40PM 0 points [-]

Thank you for your response. Having thought about it for a while, I think he is wrong. (Whether he is a crank is a different issue, and probably not worth worrying about.)

I think it can be illustrated with the following example:

Suppose you are writing a computer program to find the fastest route between two cities, and the program must select between two possibilities: take the express highway or take local roads. A naive interpretation of Wissner-Gross's approach would be to take the local roads, because that gives you more options. However, this would not seem to be the more intelligent choice in general. So a naive interpretation of the Wissner-Gross approach appears to be basically a heuristic -- useful in some situations but not others.
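The route example can be sketched in a few lines. The numbers here are made up for illustration: the point is only that a heuristic which counts immediate options picks the opposite route from one that minimizes travel time.

```python
# Hypothetical route data illustrating the comment's point: a naive
# "maximize options" heuristic prefers the slow local roads because
# they branch more, while minimizing time prefers the highway.

routes = {
    "highway":     {"time_hours": 1.0, "exits": 2},   # few choices, fast
    "local_roads": {"time_hours": 3.0, "exits": 40},  # many choices, slow
}

def naive_option_maximizer(routes):
    """Pick the route with the most immediate options (exits)."""
    return max(routes, key=lambda r: routes[r]["exits"])

def time_minimizer(routes):
    """Pick the route that actually gets there fastest."""
    return min(routes, key=lambda r: routes[r]["time_hours"])

print(naive_option_maximizer(routes))  # prints local_roads
print(time_minimizer(routes))          # prints highway
```

Whether the two choices agree depends entirely on how far ahead the option-counting looks, which is the circularity worry raised below.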

But is this interpretation of Wissner-Gross's approach correct? I expect he would say "no," that taking the express highway actually entails more options because you get to your destination quicker, resulting in extra time which can be used to pursue other activities. Which is fine, but it seems to me that this is circular reasoning. Of course the more intelligent choice will result in more time, money, energy, health, or whatever, and these things give you more options. But this observation tells us nothing about how to actually achieve intelligence. It's like the investment guru who tells us to "buy low sell high." He's stating the obvious without imparting anything of substance.

I admit it's possible I have misunderstood Wissner-Gross's claims. Is he saying anything deeper than what I have pointed out?

Comment author: Manfred 17 March 2014 12:28:28AM *  0 points [-]

My thoughts: Yeah, he's wrong. And he got a paper on this junk published in PRL? Sheesh.

He demos a program maximizing some entropy function and claims intelligent behavior. Well, he could just as easily have made the program try to move everything to the left, and claimed intelligent behavior from that, too. The apparent intelligence came not from what he maximized, but from a complex set of behaviors he paid someone to program into the agent and then glossed over.
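The objective-swapping point can be shown in miniature. This is a toy sketch, not the actual PRL demo: a greedy optimizer of any scoring function produces directed, purposeful-looking behavior, so the objective alone doesn't demonstrate intelligence.

```python
# Sketch of the objective-swapping critique: a greedy optimizer of
# ANY scoring function produces directed behavior. Here the objective
# is literally "move everything to the left". (Toy code, not the demo.)

def greedy_agent(state, objective, steps=5):
    """Repeatedly take whichever neighboring move scores best."""
    for _ in range(steps):
        state = max((state - 1, state, state + 1), key=objective)
    return state

# With "leftness" as the objective, the agent marches steadily left:
print(greedy_agent(10, objective=lambda s: -s))  # prints 5
```

Any machinery that makes the entropy-maximizing demo look clever (simulation of futures, action search) would make this leftward agent look equally deliberate.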