All of GregFish's Comments + Replies

Now, if said grad student did come to the thesis adviser, but their motivation was that they've been taught from a very young age that they should do math. Is there initiative?

Not sure. You could argue both points in this situation.

Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control?

Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.

So, what you've said is one evolved desire overriding another would still seem to be a bug.

I suppose it would.

0JoshuaZ
Ah. In that case, there's actually very minimal disagreement.

Oh fun, we're talking about my advisers' favorite topic! Yeah, strong natural language is a huge pain and if we had devices that understood human speech well, tech companies would jump on that ASAP.

But here's the thing. If you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? It's making AGI for something like that the equivalent of building a 747 to fly one person across a state? I can see various expert systems coming together as an AGI, but not starting out as such.

3TheOtherDave
It would surprise me if human-level natural-language processing were possible without sitting on top of a fairly sophisticated and robust world-model. I mean, just as an example, consider how much a system has to know about the world to realize that in your next-to-last sentence, "It's" is most likely a typo for "Isn't." Granted that one could manually construct and maintain such a model rather than build tools that maintain it automatically based on ongoing observations, but the latter seems like it would pay off over time.

Sounds like a logical conclusion to me...

I still have a lot of questions about detail but I'm starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.

... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise

Again, it really shouldn't be doing that. It should have the capacity to learn new skills and build new neural networks to do so. That doesn't require new code, it just requires a routine to initialize a new set of ANN objects at runtime.
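As a minimal sketch of what I mean (hypothetical class names, not a real ANN library): the "routine" is just ordinary object construction, so the program's executable code never changes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: hypothetical class names, not a real ANN library.
// "Learning a new skill" = constructing new network objects at runtime.
class NeuralNet {
    final int inputs;
    final int outputs;
    final double[] weights; // to be trained later, e.g. by backpropagation

    NeuralNet(int inputs, int outputs) {
        this.inputs = inputs;
        this.outputs = outputs;
        this.weights = new double[inputs * outputs];
    }
}

class Agent {
    private final List<NeuralNet> skills = new ArrayList<>();

    // No new executable code is generated here -- only new objects
    // and data, initialized by a fixed routine that already exists.
    NeuralNet learnNewSkill(int inputs, int outputs) {
        NeuralNet net = new NeuralNet(inputs, outputs);
        skills.add(net);
        return net;
    }

    int skillCount() { return skills.size(); }
}
```

The new skill lives entirely in data (the new net's weights), which is exactly why it doesn't require the program to rewrite itself.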

0TheOtherDave
If it somehow follows from that that there's an absolute blueprint in it for how every part of it will react to any stimuli in a way that is categorically different from how human genetics specify how humans will respond to any environment, then I don't follow the connection... sorry. I have only an interested layman's understanding of ANNs.

Just as my desktop computer no longer functions by the rules of a DRAM.

It never really did. DRAM is just a way to keep bits in memory for processing. What's going on under the hood of any computer hasn't changed at all. It's just grown vastly more complex and allowed us to do much more intricate and impressive things with the same basic ideas. The first computer ever built and today's machines function by the same rules, it's just that the latter is given the tools to do so much more with them.

And as JoshuaZ explains, it is something that does everyth

... (read more)
2Perplexed
Yes. As long as it does everything roughly as well as a human and some things much better.

Um... we already do all that to a pretty high extent and we don't need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that's all you need.

1JoshuaZ
There are a large number of tasks where the expertise level needed by current technology is woefully insufficient. Anything that has a strong natural language requirement for example.

the relevant dimension of intelligence is something like "ability to design and examine itself similarly to its human designers".

Ok, I'll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.

0jsalvatier
To clarify: I didn't mean that such a machine is necessarily "human level intelligent" in all respects, just that that is the characteristic relevant to the idea of an "intelligence explosion".

You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.

No, I just keep saying that we don't need to program them to "like rewards and fear punishments" and train them like we'd train dogs.

I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a

... (read more)
0TheOtherDave
(shrug) OK, fair enough. I agree with you that reward/punishment conditioning of software is a goofy idea. I was reading your comment here to indicate that we can constrain the behavior of human-level AGIs by just putting appropriate constraints in the code. ("You don't want the machine to do something? Put in a boundary. [..] with a machine, you can just tell it not to do that.") I think that idea is importantly wrong, which is why I was responding to it, but if you don't actually believe that then we apparently don't have a disagreement.

Re: source code... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise (which seems implicit in the idea of a human-level AGI, given that humans are capable of generating executable code), it isn't at all clear to me that its original source code comprises in any kind of useful way an absolute blueprint for how every part of it will react to any stimuli.

Again, sure, I'm not positing magic: whatever it does, it does because of the interaction between its source code and the environment in which it runs; there's no kind of magic third factor. So, sure, given the source code and an accurate specification of its environment (including its entire relevant history), I can in principle determine precisely what it will do. Absolutely agreed. (Of course, in practice that might be so complicated that I can't actually do it, but you aren't claiming otherwise.) If you don't think the same is true of humans, then we disagree about humans, but I think that's incidental.

Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative?

Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn't sound like it, so I'd say it's not. Initiative is doing something that's not required, but something you feel needs to be done or something you want to do.

Is "incorrectly" a normative or descriptive term?

Yes. When you need it to return "A" and it returns "Finland," it made a m... (read more)

0JoshuaZ
Ok. Now, if said grad student did come to the thesis adviser, but their motivation was that they've been taught from a very young age that they should do math. Is there initiative?

It seems that a large part of the disagreement is implicit premises here. You seem to be focused on very narrow AI, when the entire issue is what happens when one doesn't have narrow AI but has AI with most of the capabilities that humans have. Let's set aside whether or not we should build such AIs and whether or not they are possible. Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control?

Either there's a miscommunication here or there's a misunderstanding about how evolution works. An organism that puts its own survival over reproducing is an evolutionary dead end. Historically, lots of humans didn't want any children, but they didn't have effective birth control methods, so in the ancestral environment there was minimal evolutionary incentive to remove that preference. It has only been recently that there is widespread and effective birth control. So, what you've said is one evolved desire overriding another would still seem to be a bug.

Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net.

No, actually I think the tutorial was necessary, especially since what you're basically saying is that something like a large enough neural net will no longer function by the rules of an ANN. If it doesn't, how does it learn? It would simply spit out random outputs without having some sort of direct guidance.

More will go on in a future superhuman AI than goes on in any present-day toy AI.

And again I'm trying to f... (read more)

5Perplexed
Am I really being that unclear? Something containing so many and such large embedded neural nets so that the rest of its circuitry is small by comparison. But that extra circuitry does mean that the whole machine indeed no longer functions by the rules of an ANN. Just as my desktop computer no longer functions by the rules of a DRAM. And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better. Play chess, write poetry, learn to speak Chinese, design computers, prove Fermat's Last Theorem. The whole human repertoire. Sure, machines already do some of those things. Many people (I am not one of them) think that such an AI, doing every last one of those things at superhuman speed, would be transformative. It is at least conceivable that they are right.

... the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do.

But we already have things capable of doing everything a regular person can do. We call them regular people. Are we trying to build another person in digital format here, and if so, why? Just because we want to see if we can? Or because we have some big plans for it?

2JoshuaZ
Irrelevant to the question at hand, which is what would happen if a machine had such capabilities. But, if you insist on discussing this issue also, machines with human-like abilities could be very helpful. For example, one might be able to train one of them to do some task and then make multiple copies of it, which would be much more efficient than individually training lots of humans. Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question).

Hey, if people choose to downvote my replies, either because they disagree or just plain don't like me, that's their thing. I'm not all that easy to scare with a few downvotes... =)

I don't think this is a good argument. Just because you cannot define something doesn't mean it's not a real phenomenon or that you cannot reason about it at all.

If you have no working definition for what you're trying to discuss, you're more than likely to be barking up the wrong tree about it. We didn't understand fire completely, but we knew that it was hot, that you couldn't touch it, and that you made it by rubbing dry sticks together really, really fast, or by striking a spark with rocks and having it land on dry straw.

Also, where did I say that until I get a de... (read more)

0jsalvatier
I apologize; the intent of your question was not at all clear to me from your previous post. It sounded to me like you were using this as an argument that SIAI types were clearly wrong-headed. To answer your question then, the relevant dimension of intelligence is something like "ability to design and examine itself similarly to its human designers".
0[anonymous]
Interesting question, Wikipedia does list some requirements.
GregFish-40

Can you clarify how it's helpful to know that my machine only does what it's been told to do, if I can't know what I'm telling it to do or be certain what I have told it to do?

If you have no idea what you want your AI to do, why are you building it in the first place? I have never built an app that does, you know, anything and whatever. It'll just be a muddled mess that probably won't even compile.

we have programs embedded in DNA that manifest themselves in brains...

No we do not. This is not how biology works. Brains are self-organizing structures bui... (read more)

4TheOtherDave
I'm not sure how you got from my question to your answer. I'm not talking at all about programmers not having intentions, and I agree with you that in pretty much all cases they do have intentions. I'll assume that I wasn't clear, rather than that you're willing to ignore what's actually being said in favor of what lets you make a more compelling argument, and will attempt to be clearer.

You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do. At the same time, you admit that computer programs sometimes do things their programmers didn't intend for them to do. I might have written a stupid bug that causes the program to delete the contents of my hard drive, for example. I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a program I wrote.

But I don't see why that should be particularly reassuring. The fact remains that the contents of my hard drive are deleted, and I didn't want them to be. That I'm the one who told the program to delete them makes no difference I care about; far more salient to me is that I didn't intend for the program to delete them. And the more a program is designed to flexibly construct strategies for achieving particular goals in the face of unpredictable environments, the harder it is to predict what it is that I'm actually telling my program to do, regardless of what I intend for it to do. In other words: "I can't know what I'm telling it to do or be certain what I have told it to do." Sure, once it deletes the files, I can (in principle) look back over the source code and say "Oh, I see why that happened." But that doesn't get me my files back.

And yet, remarkably, brains don't "self-organize" in the absence of that regulation. Y

Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative?

Yes, but with a caveat. I could teach an ANN how to solve a problem but it would be more or less by random trial and error with a squashing function until each "neuron" has the right weight and activation function. So it will learn how to solve this generic problem, but it won't be because it traced its way along all the steps.

(Actually, I made a mistake in my previous reply: ANNs have no fitness function, that's a genetic algorit... (read more)

2JoshuaZ
Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative? Is a professional mathematician showing initiative? They keep thinking about math because that's what gives them positive feedback (e.g. salary, tenure, positive remarks from their peers).

Is "incorrectly" a normative or descriptive term? How is it different from "this program didn't do what I expected it to do" other than that you label it a bug when the program deviates more from what you wanted to accomplish? Keep in mind that what a human wants isn't a notion that cleaves reality at the joints.

Ok. So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don't want to ever have kids, is that a bug in your view?

Other people imagine something like a neural net containing more 'neurons' than the human brain - a device which is born with little more hardwired programming than the general guidance...

That's not what an artificial neural net actually is. When training your ANN, you give it an input and tell it what the output should be. Then, using a method called backpropagation, you tell it to adjust the weights and activation thresholds of each neuron object until it can match the output. So you're not just telling it to learn, you're telling it what the problem ... (read more)
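A stripped-down sketch of that training idea, using a single sigmoid neuron instead of a full network (illustrative code only, not from any particular library): you hand it an input and a desired output, and a gradient step, the core of backpropagation, nudges the weight and activation threshold until the output matches.

```java
// A single sigmoid neuron trained by gradient descent -- the core step
// of backpropagation, shown without the layer-to-layer bookkeeping.
class Neuron {
    double weight = 0.0;
    double bias = 0.0; // plays the role of the activation threshold

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    double output(double x) { return sigmoid(weight * x + bias); }

    // One training step for squared error 0.5 * (target - out)^2:
    // nudge weight and bias in the direction that reduces the error.
    void train(double x, double target, double learningRate) {
        double out = output(x);
        double delta = (target - out) * out * (1.0 - out); // error * sigmoid'(z)
        weight += learningRate * delta * x;
        bias += learningRate * delta;
    }
}
```

Run a few thousand train() calls on the pair (1.0, 0.8) and output(1.0) converges on 0.8: the net never "figures out" the problem, it just has its weights pushed toward the answer we supplied.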

4Perplexed
Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net. Clearly, a neural net by itself doesn't act autonomously - to get anything approaching 'intelligence' you will need to at least add some feedback loops beyond simple backpropagation. More will go on in a future superhuman AI than goes on in any present-day toy AI. Well, yes, those other people I mention do seem to think that. But they are not indulging in any kind of mysticism. Only in the kinds of conceptual extrapolation which took place, for example, in going from simple combinational logic circuitry to the instruction fetch-execute cycle of a von Neumann computer architecture.
GregFish-10

So in other words, you're more of a hit-and-run-out-of-context kind of guy than someone who prefers to actually go further than a derisive little put-down and show that he actually understands the topic in enough depth to argue it?

GregFish-10

... but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong.

Only if you choose to discard any thought to how machines are actually built. There's no magic going on in that blinking box, just circuits performing the functions they were designed to do in the order they're told.

Neural nets and genetic algorithms often don't do what they are told.

Actually, they do precisely what they're told because without a fitness function which determines what problem they are to solve in ... (read more)
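For illustration, here's about the smallest genetic-algorithm loop I can write (toy problem and names of my own choosing): the fitness function is what defines the problem. Take it away and "selection" has nothing to select on, so the population just drifts randomly.

```java
import java.util.Random;

// Toy genetic algorithm: evolve a number toward the maximum of
// f(x) = -(x - 3)^2. The fitness function tells selection which
// mutants to keep; it is what encodes the problem being solved.
class SimpleGA {
    static double fitness(double x) { return -(x - 3.0) * (x - 3.0); }

    static double evolve(int generations, Random rng) {
        double best = rng.nextDouble() * 10.0; // random starting "genome"
        for (int g = 0; g < generations; g++) {
            double mutant = best + rng.nextGaussian() * 0.1; // mutation
            if (fitness(mutant) > fitness(best)) {           // selection
                best = mutant;
            }
        }
        return best;
    }
}
```

The surprising outputs such a loop produces are still outputs of the search we told it to run; the fitness function bounds what counts as "better" at every step.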

9JoshuaZ
There's no magic going on inside the two pounds of fatty tissue inside my skull either. Magic is apparently not required for creativity or initiative (whatever those may be). I'm confused by what you mean by "thinking" and "initiative." Let's narrow the field slightly. Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative? Calling something a bug doesn't change the nature of what is happening. That's just a label. Humans are likely as smart as they are due to runaway sexual selection for intelligence. And then humans got really smart and realized that they could have all the pleasure of sex while avoiding the hassle of reproduction. Is the use of birth-control an example of human initiative or a bug? Does it make a difference?
8TheOtherDave
Can you clarify how it's helpful to know that my machine only does what it's been told to do, if I can't know what I'm telling it to do or be certain what I have told it to do? I mean, there's a sense in which humans only do "what they've been told to do", also... we have programs embedded in DNA that manifest themselves in brains that construct minds from experience in constrained ways. (Unless you believe in some kind of magic free will in human minds, in which case this line of reasoning won't seem sensible to you.) But so what? Knowing that doesn't make humans harmless.

It centers around what happens once machines have human level intelligence.

As defined by... what, exactly? We have problems measuring our own intelligence, or even defining it, so we're giving computers a very wide sliding scale of intelligence based on personal opinions and ideas more than a rigorous examination. A computer today could ace just about any general knowledge test we give it if we tell it how to search for an answer or compute a problem. Does that make it as intelligent as a really academically adept human? Oh, and it can do it in a tiny fraction of the time it would take us. Does that make it superhuman?

5JoshuaZ
It may be a red herring to focus on the definition of "intelligence" in this context. If you prefer, taboo the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do. The issue is what happens after one has a machine that reaches that point.
3jsalvatier
I don't think this is a good argument. Just because you cannot define something doesn't mean it's not a real phenomenon or that you cannot reason about it at all. Before we understood fire completely, it was still real and we could reason about it somewhat (fire consumes some things, fire is hot, etc.). Similarly, intelligence is a real phenomenon that we don't completely understand, and we can still do some reasoning about it. It is meaningful to talk about a computer having "human-level" (I think "human-like" might be more descriptive) intelligence.

Fish seemed to be implying that it wasn't.

Absolutely not. If you take another look, I argue that it's unnecessary. You don't want the machine to do something? Put in a boundary. You don't have the option to just turn off a lab rat's desire to search a particular corner of its cage with a press of a button, so all you can do is put in some deterrent. But with a machine, you can just tell it not to do that. For example, this code in Java would mean not to add two even numbers if the method receives them:

public int add(int a, int b) { if ((a % 2) != 0 &... (read more)
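The snippet above is cut off, so here is one hedged guess at how such a boundary might look in full (the exact condition and the choice to throw an exception are my reconstruction, not the original code): the method performs the addition only when at least one operand is odd.

```java
// Hypothetical reconstruction of the truncated example: refuse to add
// two even numbers by checking the operands before doing the work.
class BoundedAdder {
    public int add(int a, int b) {
        if ((a % 2) != 0 || (b % 2) != 0) {
            return a + b; // at least one operand is odd: allowed
        }
        throw new IllegalArgumentException("won't add two even numbers");
    }
}
```

Either way, the point stands: the boundary is an ordinary conditional in the code, not a trained deterrent.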

Perplexed110

Part of the disagreement here seems to arise from disjoint models of what a powerful AI would consist of.

You seem to imagine something like an ordinary computer, which receives its instructions in some high-level imperative language, and then carries them out, making use of a huge library of provably correct algorithms.

Other people imagine something like a neural net containing more 'neurons' than the human brain - a device which is born with little more hardwired programming than the general guidance that 'learning is good' and 'hurting people is bad' tog... (read more)

GregFish-40

My intention for linking to it was not that I thought it featured good arguments...

Gee, thanks. So you basically linked and replied as a form of damage control? And by the way, the "outsiders' perception" isn't helped when the "insiders'" arguments seem to be based not on what computers actually do, but what they're made to do in comic books.

9JoshuaZ
XiXi is actually one of the people here who is more critical of the SI and the notion of run-away superintelligence. XiXi can correct me if I'm wrong here, but I suspect that XiXi's intention in this particular instance was to do just what he said: to give an example of an outsider's perspective on the SI of exactly the type of outsider who the SI should be trying to convince, and should be able to convince if their arguments have much validity.

Ok. This is the sort of remark that gets the SI people correctly annoyed. Generalizations from fictional evidence are bad. But, at the same time, that something happens to have occurred in fictional settings isn't in general a reason to assign it lower probability than you would if one weren't aware of such fiction. (To use a silly example, there's fiction set after the sun has become a red giant. The fact that there's such fiction isn't relevant to evaluating whether or not the sun will enter such a phase.)

It also misses one of the fundamental points that the SI people have made repeatedly: computers as they exist today are very weak entities. The SI's argument doesn't have to do with computers in general. It centers around what happens once machines have human level intelligence. So, ask yourself: how likely do you think it is that we'll ever have general AI, and if we do have general AI, what buggy failure modes seem most likely?

Well, argue the points then. Anyone can make a pithy "oh, he doesn't know what he's talking about" and leave it at that. Go ahead, show your expertise on the subject. Of course you'd be showing it on a single out-of-context quote here...

-4timtyler
You've laid out some of your positions on these topics in your blog. Alas, after reading them, I am not positively inclined towards engaging with you. I cited one for the purpose of illustrating your perspective to other readers.

I think the author is asserting that it seems to them that some of the stuff put out by the website shows the general trends one would expect if someone has learned about some idea from popularizations rather than the technical literature.

Yes that is exactly what I meant. That might sound a little harsh, but that was my impression.

Wow. If that's all you got from a post trying to explain the very real difference between acing an intelligence test by figuring things out on your own versus having a machine do the same after you give it all the answers, and how the suggested equations only measure how many answers were right, not how that feat was accomplished, then I don't even know how to properly respond...

Oh, and by the way, in the comments I suggest to Dr. Legg how to keep track of the machine doing some learning and figuring things out, so there's another thing to consider. And yes, I've had the formal instruction in discrete math to do so.

4JoshuaZ
It is possible that I didn't explain my point well. The problem I am referring to is your apparent insistence that there are things that machines can't do that people can and that this is insurmountable. Most of your subclaims are completely reasonable, but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong. Even today, that's not true by most definitions of those terms. Neural nets and genetic algorithms often don't do what they are told.