_rpd comments on The case for value learning - Less Wrong

Post author: leplen 27 January 2016 08:57PM


Comments (7)


Comment author: TheAncientGeek 30 January 2016 04:07:40PM 0 points

Part of it seems to be inherent in the idea of AGI, or an artificial general intelligence. There seems to be the belief that once an AI crosses a certain threshold of smarts, it will be capable of understanding literally everything.

The MIRI/LessWrong sphere is very enamoured of "universal" problem solvers like AIXI. The main pertinent fact about these is that they can't be built out of atoms in our universe. Nonetheless, MIRI think it is possible to get useful architecture-independent generalisations out of AIXI-style systems.

"Anyway that sounds great right? Universal prior. Right. What's it look like? Way oversimplifying, it rates hypotheses' likelihood by their compressibility, or algorithmic complexity. For example, say our perfect AI is trying to figure out gravity. It's going to treat the hypothesis that gravity is inverse-square as more likely than a capricious intelligent faller. It's a formalization of Occam's razor based on real, if obscure, notions of universal complexity in computability theory.

But, problem. It's uncomputable. You can't compute the universal complexity of any string, let alone all possible strings. You can approximate it, but there's no efficient way to do so (AIXItl is apparently exponential, which is computer science talk for "you don't need this before civilization collapses, right?").

So the mathematical theory is perfect, except in that it's impossible to implement, and serious optimization of it is unrealistic. Kind of sums up my view of how well LW is doing with AI, personally, despite this not being LW. Worry about these contrived Platonic theories while having little interest in how the only intelligent beings we're aware of actually function."
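The Occam-razor weighting the quoted comment describes can be made concrete with a toy sketch. Since true algorithmic (Kolmogorov) complexity is uncomputable, this illustration substitutes zlib compressed length as a crude, computable upper bound; the specific data and function names are invented for illustration, not anything from AIXI itself:

```python
import random
import zlib

def complexity_proxy(data: bytes) -> int:
    """Crude stand-in for algorithmic complexity: the length of the
    zlib-compressed encoding. Real Kolmogorov complexity is
    uncomputable; compression only gives an upper bound."""
    return len(zlib.compress(data, 9))

def prior_weight(data: bytes) -> float:
    """Occam-style universal-prior weighting: 2^-complexity, so more
    compressible (simpler) hypotheses get exponentially more mass."""
    return 2.0 ** -complexity_proxy(data)

# A "lawful" observation sequence: an inverse-square-like pattern...
regular = "".join(f"{1 / (n * n):.4f}," for n in range(1, 200)).encode()

# ...versus a capricious (random) sequence of the same length.
random.seed(0)
capricious = bytes(random.randrange(256) for _ in range(len(regular)))

# The structured data compresses far better, so it is judged "simpler"
# and receives a much larger prior weight.
assert complexity_proxy(regular) < complexity_proxy(capricious)
assert prior_weight(regular) > prior_weight(capricious)
```

This also makes the comment's complaint tangible: the proxy is easy to compute, but it is only a loose bound, and tightening it toward the true universal prior is where the intractability lives.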

Comment author: _rpd 03 February 2016 07:31:11AM 0 points

I think your criticism is a little harsh. Turing machines are impossible to implement as well, but they are still a useful theoretical concept.

Comment author: TheAncientGeek 06 February 2016 03:37:14PM 1 point

Theoretical systems are useful so long as you keep track of where they depart from reality.

Consider the following exchange:

Engineer: The programme is acquiring more memory than it is releasing, so it will eventually fill the memory and crash.

Computer Scientist: No it won't, the memory is infinite.

Do the MIRI crowd make similar errors? Sure, consider Bostrom's response to Oracle AI. He assumes that an Oracle can only be a general intelligence coupled to a utility function that makes it want to answer questions and do nothing else.

Comment author: _rpd 06 February 2016 09:29:17PM 0 points

I take your point that theorists can appear to be concerned with problems that have very little impact. On the other hand, there are some great theoretical results and concepts that can prevent us from futilely wasting our time and guide us to areas where success is more likely.

I think you're being ungenerous to Bostrom. His paper on the possibility of Oracle-type AIs is quite nuanced, and discusses many difficulties that would have to be overcome ...

http://www.nickbostrom.com/papers/oracle.pdf

Comment author: TheAncientGeek 08 February 2016 09:29:10AM 1 point

To be fair to Bostrom, he doesn't go all the way down the rabbit hole of arguing that oracles aren't any different to agentive AGIs.