Comment author: username2 06 October 2016 09:06:18PM *  2 points

I agree with Robin Hanson that we are maybe 5% of the way to general AI.

On what basis do you say that?

Comment author: entirelyuseless 07 October 2016 01:42:25AM 0 points

On the basis of thinking long and hard about it.

Some people think that intelligence should be defined as optimization power. But suppose you had a magic wand that could convert anything it touched into gold. Whenever you touch any solid object with it, it immediately turns to gold. That happens in every environment with every kind of object, and it happens no matter what impediments you try to set up to prevent it. You cannot stop it from happening.

In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments.

But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.

I would propose an alternative definition. Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.

The most intelligent AI we have is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data. Many of Eliezer's philosophical mistakes concerning AI arise from this fact. He assumes that the AI we have is close to being intelligent, and therefore concludes that intelligent behavior is similar to the behavior of such programs. One example was AlphaGo, which Eliezer called "superintelligent with bugs," rather than admitting the obvious fact that it was better than Lee Sedol, but not much better, and only at Go, and that it generally played badly when it was in bad positions.

The orthogonality thesis is a mistake of the same kind: something that is limited to seeking a narrow goal like "maximize paperclips" cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.

But in relation to your original question, the point is that the most intelligent AI we have is incredibly stupid. Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines. And there is no such magical point, as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.

Comment author: DittoDevolved 06 October 2016 04:29:56PM 0 points

In the UK it's tax free, anyway.

In response to comment by DittoDevolved on Burch's Law
Comment author: entirelyuseless 07 October 2016 01:30:50AM 0 points

Ok. Not in the USA.

Comment author: rhaps0dy 06 October 2016 09:50:25AM 0 points

I don't think we are that far away from AGI.

At the very least 20 years. And yes, Alphabet are the closest, but in 20 years a lot of things can change.

Comment author: entirelyuseless 06 October 2016 01:12:09PM 1 point

I doubt there is much motivation here for "at least 20 years" except the very fact that it is hard to tell what will happen in 20 years.

I agree with Robin Hanson that we are maybe 5% of the way to general AI. I think that 20 years from now it will be somewhat clearer how far from AI we were at this point (because we will be closer, but still very distant).

In response to comment by Caledonian2 on Burch's Law
Comment author: DittoDevolved 05 October 2016 09:40:15PM 0 points

Never a good idea. Unless you win. Ask the recipient of $100m tax-free whether or not it was a good idea to buy a ticket.

I don't buy lottery tickets, but even though the chance is so ridiculously small that you might as well burn the ticket as soon as you buy it, that doesn't stop people from winning.
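For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python, using purely made-up figures (a 1-in-300-million jackpot chance, a $100m prize, a $2 ticket) chosen only to illustrate the expected-value arithmetic, not any real lottery's odds:

```python
# Hypothetical figures, chosen only for illustration -- not real lottery odds.
jackpot_probability = 1 / 300_000_000   # assumed chance of hitting the jackpot
jackpot_prize = 100_000_000             # assumed prize, in dollars
ticket_price = 2                        # assumed ticket price, in dollars

expected_jackpot_return = jackpot_probability * jackpot_prize
print(f"Expected jackpot return per ticket: ${expected_jackpot_return:.2f}")
print(f"Expected net loss per ticket:       ${ticket_price - expected_jackpot_return:.2f}")
```

Under those assumed numbers the expected jackpot return is about $0.33 on a $2 ticket, which is the sense in which the ticket is almost, but not quite, worth burning.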

In response to comment by DittoDevolved on Burch's Law
Comment author: entirelyuseless 06 October 2016 01:54:18AM 1 point

Lottery income is most definitely taxed, although this likely makes little difference to your point.

Comment author: username2 03 October 2016 04:14:54AM *  -2 points

Sorry, no, you seem to have completely missed the minimax aspect of the problem -- an infinite integral with a weight that tends to zero can still be finitely bounded. But it is not worth my time to debate this. Good day, sir.

Comment author: entirelyuseless 03 October 2016 02:16:14PM *  0 points

I did not miss the fact that you are talking about an approximation. There is no guarantee that any particular approximation will result in intelligent behavior. Claiming that there is, is claiming to know more than all the AI experts in the world.

Also, at this point you are retracting your correction and reverting to your original absurd view, which is unfortunate.

Comment author: username2 02 October 2016 06:51:12AM 0 points

You started out by saying, in essence, that general AI is just a matter of having good enough hardware.

OK, this is where the misunderstanding happened. What I said was "if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions." Truly infinite compute resources will never exist. So that's not a claim about "we just need better hardware" but rather "if we had magic oracle pixie dust, it'd be easy."

The rest I am uninterested in debating further.

Comment author: entirelyuseless 02 October 2016 04:06:55PM 0 points

That's fine. As far as I can see you have corrected your mistaken view, even if you have the usual human desire not to admit that you have done so; such a correction is a good thing, not a bad thing.

Your statement would be true if by infinite resources you meant the ability to execute an infinite number of instructions and to complete that infinite process. In the same way it would then be true that we could solve the halting problem, and resolve the truth or falsehood of every mathematical claim. But in fact you meant unlimited resources in a more practical sense: unlimited memory and computing speed (it is evident that you meant this, since when I stipulated this you persisted in your mistaken assertion). And that is not enough without the software knowledge that we do not have.

Comment author: username2 01 October 2016 10:54:02AM 0 points

Solomonoff induction is not in fact infinite due to the Occam prior, because a minimax branch pruning algorithm eventually trims high-complexity possibilities.

Comment author: entirelyuseless 01 October 2016 04:01:58PM 1 point

Ok, let's go back and review this conversation.

You started out by saying, in essence, that general AI is just a matter of having good enough hardware.

You were wrong. Dead wrong. The opposite is true: it is above all a matter of software, plus sufficiently good hardware, and we have no idea how good the hardware needs to be. For all we know, a general AI could be programmed on the PC I am currently using. Since we simply do not know how to program an AI, we do not know whether it could run on this computer or not.

You supported your mistake with the false claim that AIXI and Solomonoff induction are computable, in the usual, technical sense. You spoke of this as though it were a simple fact that any well-educated person knows. The truth was the opposite: neither one is computable, in the usual, technical sense. And the usual technical sense of incomputable means that the thing remains incomputable even with no limit on memory or clock speed, so long as only a finite number of instructions may be executed, however instantaneously.

You respond now by saying, "Solomonoff induction is not in fact infinite..." In that case you are not talking about Solomonoff induction, but about some approximation of it. But then the conclusions that hold for Solomonoff induction in the technical sense no longer follow. So you have no reason to assume that some particular program will result in intelligent behavior, even with no limits on memory and clock speed. And until someone finds such a program, and proves that it will result in intelligent behavior, no one knows how to program general AI, even without hardware limitations. That is our present situation.
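To make concrete what "some approximation of it" looks like, here is a minimal Python sketch of a truncated, Solomonoff-style mixture. It weights hypotheses by 2^(-description length), but only over a tiny hand-picked hypothesis class (periodic bit patterns) with a made-up description-length convention; true Solomonoff induction would instead sum over every program of a universal machine, which is the infinite part no finite computation reaches. Every name here is invented purely for illustration.

```python
from itertools import product

def periodic_hypotheses(max_period):
    """Toy hypothesis class: 'the data repeats this block forever'.
    Description length is taken to be the block length in bits -- an
    assumption made purely for illustration, not a real universal code."""
    for period in range(1, max_period + 1):
        for block in product("01", repeat=period):
            yield "".join(block), period

def predicts(block, data):
    """True if repeating `block` forever reproduces the observed bits."""
    return all(data[i] == block[i % len(block)] for i in range(len(data)))

def truncated_mixture(data, max_period=8):
    """Total 2^-length weight of the toy hypotheses that fit the data.
    True Solomonoff induction would sum over *all* programs of a universal
    machine whose output starts with `data` -- an infinite enumeration."""
    return sum(2.0 ** -length
               for block, length in periodic_hypotheses(max_period)
               if predicts(block, data))

observed = "010101"
print(truncated_mixture(observed))                    # weight of the data so far
print(truncated_mixture(observed + "0")
      / truncated_mixture(observed))                  # toy probability the next bit is 0
```

The convergence guarantees proved for Solomonoff induction attach to the full, incomputable mixture, not to a truncation like this one.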

Comment author: username2 30 September 2016 03:30:22PM *  0 points

Taboo the word computable. (If that's not enough of a hint, notice that Solomonoff is "incomputable" only for finite computers, whereas this thread is assuming infinite computational resources.)

Comment author: entirelyuseless 01 October 2016 01:29:30AM *  1 point

Again, you are mistaken. I assumed that you could execute any finite number of instructions in an instant. Computing Solomonoff probabilities requires executing an infinite number of instructions, since it means assigning probabilities to every possible hypothesis that would produce the observed data.
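For reference, this is roughly how the Solomonoff prior of a string x is usually written, glossing over details such as the choice of universal monotone machine and the restriction to minimal programs:

$$ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} $$

Here U is a universal machine, \ell(p) is the length of program p, and the sum ranges over the programs whose output begins with x. There are infinitely many such programs, which is exactly why no finite number of executed instructions evaluates the sum.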

In other words, if you assume the ability to execute an infinite number of instructions (as opposed to simply the instantaneous execution of any finite number), you will indeed be able to "compute" the incomputable. But you will also be able to solve the halting problem, by running a program for an infinite number of steps and checking whether it halts during that process or not. As you said earlier, this is not what is typically meant by computable.

(If that is not clear enough for you, consider the fact that a Turing machine is allowed an infinite amount of "memory" by definition, and the amount of time it takes to execute a program is no part of the formalism. So "computable" and "incomputable" in standard terminology do indeed apply to computers with infinite resources in the sense that I specified.)

Comment author: username2 30 September 2016 02:23:29PM -1 points

Okay, random person on the internet.

Comment author: entirelyuseless 30 September 2016 02:39:36PM 2 points

If you can't use Google, see here. They even explain exactly why you are mistaken -- because Solomonoff induction is not computable in the first place, so nothing using it can be computable.

Comment author: username2 30 September 2016 05:35:06AM *  -1 points

This is incorrect. AIXI is "not computable" only in the sense that it will not halt, on the sorts of problems we care about, on a real computer of realistically finite capabilities in a finite amount of time. That's not what is generally meant by 'computable'. But in any case, if you assume these restrictions away, as you did (infinite clock speed, infinite memory), then it absolutely is computable in the sense that you can define a Turing machine to perform the computation, and the computation will terminate in a finite amount of time under the specified assumptions.

Simple reinforcement learning coupled with Solomonoff induction and an Occam prior (aka AIXI) results in intelligent behavior on arbitrary problem sets. It just also has computational requirements that are impossible to meet in practice. But that's very different from uncomputability.
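For reference, a sketch of AIXI's action-selection rule in roughly Hutter's notation (the exact indexing here should be treated as an approximation rather than an exact quotation): at time t, with horizon m, the agent chooses

$$ a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[\, r_t + \cdots + r_m \,\bigr] \sum_{q \,:\, U(q, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

where the innermost sum is the Occam-weighted mixture over all environment programs q for the universal machine U that are consistent with the interaction history.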

Comment author: entirelyuseless 30 September 2016 01:33:51PM 2 points

Sorry, you are simply mistaken here. Go and read more about it before you say anything else.
