Comment author: James_Miller 11 October 2016 04:10:40AM 1 point [-]

Get a job at Google or seek to influence the people developing the AI. If, say, you were a beautiful woman you could, probably successfully, start a relationship with one of Google's AI developers.

Comment author: username2 11 October 2016 07:24:07PM -1 points [-]

I am confused as to whether I should upvote for "get a job at Google" or downvote for "prostitute yourself".

Comment author: turchin 10 October 2016 11:13:53AM 5 points [-]

If we knew that AI would be created by Google, and that it would happen in the next 5 years, what should we do?

Comment author: username2 11 October 2016 07:20:11PM 0 points [-]

Rejoice because the end is near.

Maybe buy Google stock?

Comment author: ChristianKl 10 October 2016 09:46:15PM 1 point [-]

And incidentally, how many people can you fit in Australia? I know it's very big, but it also has a lot of desert.

You can fit many people in California despite much of it being desert.

Comment author: username2 11 October 2016 07:18:35PM 0 points [-]

*Southern California

Comment author: username2 10 October 2016 09:23:33AM 6 points [-]

Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist? I just read "Amy Schumer Offers You a Look Into Your Soul" and really liked it, but I don't have enough time to read every post on the blog.

Comment author: hairyfigment 07 October 2016 11:48:44PM 0 points [-]

You're assuming that "what humans mean" is well-defined. I've seen people criticize the example of an AI putting humans on a dopamine drip, on the grounds that "making people happy" clearly doesn't mean that. But if your boss tells you to 'make everyone happy,' you will probably get paid to make everyone stop complaining. Parents in the real world used to give their babies opium and cocaine; advertisers today have probably convinced themselves that the foods and drugs they push genuinely make people happy. There is no existing mind that is provably Friendly.

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

Comment author: username2 09 October 2016 09:00:43PM 0 points [-]

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

I don't disagree with the other things you said. But I interpreted the criticism as: an AI told to "do what humans mean, not what they say" will behave approximately the same as a perfectly rational human being given the same instruction. So in the same way that I can instruct people, with some success, to "do what I mean," the same will work for an AI too. It's just also true that this isn't a solution to FAI any more than it is with humans -- because morality is inconsistent, human beings are inherently unfriendly, etc.

Comment author: entirelyuseless 08 October 2016 02:40:19PM -1 points [-]

I am saying the opposite. Having a goal, in Eliezer's sense, is contrary to being intelligent. That is, doing everything you do for the sake of one thing and only one thing, and not being capable of doing anything else, is the behavior of an idiotic fanatic, not of an intelligent being.

I said that to be intelligent you need to understand the concept of a goal. That does not mean having one; in fact it means the ability to have many different goals, because your general understanding enables you to see that there is nothing forcing you to pursue one particular goal fanatically.

Comment author: username2 09 October 2016 02:28:48PM *  0 points [-]

Smells like a homunculus. What guides your reasoning about your goals?

Comment author: niceguyanon 07 October 2016 01:40:52PM *  2 points [-]

Why doesn't the U.S. government hire more tax auditors? If every additional auditor can either uncover tax evasion or deter it (through the threat of a possible audit), hiring more of them would pay for itself, create jobs, increase revenue, and punish those who cheat. The estimated cost of tax evasion to the federal government is $450B per year.

Incompetent-government tropes include agencies that hire too many people and become inappropriate profit centers. It would seem that the IRS should, at the very least, have been accidentally competent in this regard.
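The expected-value argument here can be sketched with a back-of-envelope calculation. All figures below are hypothetical placeholders (salary, recovery rate, and deterrence multiplier are assumptions for illustration, not actual IRS data):

```python
# Back-of-envelope ROI for hiring one marginal tax auditor.
# Every number here is a hypothetical placeholder.

def auditor_net_return(cost, direct_recoveries, deterrence_multiplier):
    """Net annual revenue from one additional auditor.

    cost: fully loaded salary and overhead.
    direct_recoveries: taxes recovered via audits the auditor performs.
    deterrence_multiplier: extra voluntary compliance induced per dollar
    of direct recovery (audits deter evasion beyond what they uncover).
    """
    total_revenue = direct_recoveries * (1 + deterrence_multiplier)
    return total_revenue - cost

# Hypothetical: $150k cost, $500k direct recoveries, and each
# recovered dollar deterring another $2 of evasion.
net = auditor_net_return(150_000, 500_000, 2.0)
print(net)  # 1350000.0
```

On these (made-up) numbers each marginal auditor nets over a million dollars a year, which is the puzzle the comment is pointing at; the reply below suggests the real recovery term may be much smaller once prosecution costs are included.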

Comment author: username2 08 October 2016 02:29:03PM 2 points [-]

I think that in many cases uncovering potential tax evasion might not be enough to actually collect the money; it might require prosecution and large-scale evidence collection. Maybe it's not worth it unless the amount of evaded taxes is large?

Comment author: WhySpace 05 October 2016 07:03:10PM 2 points [-]

Places like https://www.reddit.com/r/askscience/ might be a good spot, depending on the question. If it sounds crackpot, you might be able to precede it with a qualifier that you're probably wrong, just like you did here.

Comment author: username2 08 October 2016 02:21:45PM 1 point [-]

Also check out Physics Stack Exchange and PhysicsOverflow.

Comment author: entirelyuseless 07 October 2016 05:07:17AM 0 points [-]

"There are plenty of conceivable architectures for which this meta level thinking is incapable of happening, yet nevertheless are capable of producing arbitrarily complex intelligent behavior."

Maybe, but that's exactly like the orthogonality thesis. The fact that something is possible in principle doesn't mean there's any easy way to do it in practice. The easy way to produce arbitrarily complex intelligent behavior in practice is to produce something that can abstract to an arbitrary degree of generality, and that means recognizing abstractions like "goal", "good," and so on.

The reason why a human baby becomes intelligent over time is that right from the beginning it has the ability to generalize to pretty much any degree necessary. So I don't see how that argues against my position. I would expect AIs also to require a process of "growing up" although you might be able to speed that process up so that it takes months rather than years. That is still another reason why the orthogonality thesis is false in practice. AIs that grow up among human beings will grow up with relatively humanlike values (although not exactly human), and the fact that arbitrary values are possible in principle will not make them actual.

Comment author: username2 07 October 2016 07:46:01AM 0 points [-]

The fact that something is possible in principle doesn't mean there's any easy way to do it in practice. The easy way to produce arbitrarily complex intelligent behavior in practice is to produce something that can abstract to an arbitrary degree of generality, and that means recognizing abstractions like "goal", "good," and so on.

I actually had specific examples in mind, basically all GOFAI approaches to general AI. But in any case this logic doesn't seem to hold up. You could argue that something needs to HAVE goals in order to be intelligent -- I don't think so, at least not with the technical definition typically given to 'goals', but I will grant it for the purpose of discussion. It still doesn't follow that the thing has to be aware of those goals, or introspective about them. One can have goals without being aware that one has them, or without being able to represent them explicitly. Most human beings fall into this category most of the time, sad to say.

Comment author: entirelyuseless 07 October 2016 01:42:25AM 0 points [-]

On the basis of thinking long and hard about it.

Some people think that intelligence should be defined as optimization power. But suppose you had a magic wand that could convert anything it touched into gold. Whenever you touch any solid object with it, it immediately turns to gold. That happens in every environment with every kind of object, and it happens no matter what impediments you try to set up to prevent it. You cannot stop it from happening.

In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments.

But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.

I would propose an alternative definition. Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.

The most intelligent AI we have, is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data. Many of Eliezer's philosophical mistakes concerning AI arise from this fact. He assumes that the AI we have is close to being intelligent, and therefore concludes that intelligent behavior is similar to the behavior of such programs. One example of that was the case of AlphaGo, where Eliezer called it "superintelligent with bugs," rather than admitting the obvious fact that it was better than Lee Sedol, but not much better, and only at Go, and that it generally played badly when it was in bad positions.

The orthogonality thesis is a similar mistake of that kind; something that is limited to seeking a limited goal like "maximize paperclips" cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.

But in relation to your original question, the point is that the most intelligent AI we have is incredibly stupid. Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines. And there is no such magical point, as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.

Comment author: username2 07 October 2016 02:50:45AM 2 points [-]

Your example of a magic wand doesn't sound right to me. On what basis is a Midas touch "optimizing"? It is powerful, yes, but why "optimizing"? A supernova that vaporizes entire planets is powerful, but not optimizing. Seems like a strawman.

Defining intelligence as pattern recognition is not new. Ben Goertzel has espoused this view for some twenty years, and written a book on the subject, I believe. I'm not sure I buy the strong connection with "recognizing the abstract concept of a goal" and such, however. There are plenty of conceivable architectures in which this meta-level thinking is incapable of happening, yet which are nevertheless capable of producing arbitrarily complex intelligent behavior.

Regarding your last point, your terminology is unnecessarily obscure. There doesn't have to be a "magic point" -- it could simply be a matter of correct software but insufficient data or processing power. A human baby is a very stupid device, incapable of doing anything intelligent. But with experiential data and processing time it becomes a very powerful general intelligence over the course of 25 years, without any designer intervention. You bring up this very point yourself, which seems to undercut your claim.
