
Questions about AGI's Importance

0 Post author: curi 31 October 2017 08:50PM

Why expect AGIs to be better at thinking than human beings? Is there some argument that human thinking problems are primarily due to hardware constraints? Has anyone here put much thought into parenting/educating AGIs?

Comments (117)

Comment author: korin43 31 October 2017 10:15:25PM 1 point [-]

I suspect this has been answered on here before in a lot more detail, but:

  • Evolution isn't necessarily trying to make us smart; it's just trying to make us survive and reproduce
  • Evolution tends to find local optima (see: obviously stupid designs like how the optic nerve works)
  • We seem to be pretty good at making things that are better than what evolution comes up with (see: no birds on the moon, no predators with natural machine guns, etc.)

Also, specifically in AI, there is precedent for only a few years passing between "researchers get AI to do something at all" and "this AI is better at its task than any human who has ever lived". Chess did it a while ago. It just happened with Go. I suspect we're crossing that point with image recognition now.

Comment author: curi 31 October 2017 10:29:20PM *  0 points [-]

Do you expect AGI to be qualitatively or quantitatively better at thinking than humans?

Do you think there are different types of intelligence? If so, what types? And would AGI be the same type as humans?

EDIT: By "intelligence" I mean general intelligence.

Comment author: curi 10 November 2017 06:47:37PM 0 points [-]

I'm getting an error trying to load Lumifer's comment in the highly nested discussion, but I can see it in my inbox, so I'll try replying here without the nesting. For this comment, I will quote everything I reply to so it stands alone better.

Isn't it convenient that I don't have to care about these infinitely many theories?

why not?

Why not what?

Why don't you have to care about the infinity of theories?

you can criticize categories, e.g. all ideas with feature X

How can you know that every single theory in that infinity has feature X? or belongs to the same category?

It depends which infinity we're talking about. Suppose the problem is persuading LW ppl about Paths Forward and you say "Use a shovel". That refers to infinitely many different potential solutions. However, they can be criticized as a group by pointing out that a shovel won't help solve the problem. What does a shovel have to do with it? Irrelevant!

This criticism only applies to the infinite category of ideas about shovels, not everything. I'm able to criticize that whole infinite group as a unit because it was brought up as a unit, and defined according to having a particular feature for all the theories in the group (that they involve trying to solve the problem specifically with a shovel.)

The criticism is also contextual. It relates to using shovels for this particular problem. But shovels still help with some other problems. The context the criticism works in is broader than the single problem about paths forward persuasion of LW ppl – e.g. it also applies to anti-induction persuasion of Objectivists. This is typical – the point has some applicability to multiple contexts, but not universal applicability.

If you instead said "Do something" then you'd be bringing up a different infinity with more stuff in it, and I'd have a different reply: "Do what? That isn't helpful because you're pointing me to a large number of non-solutions without pointing out any solution. I agree there is a solution contained in there, somewhere, but I don't know what it is, and you don't seem to either, so I can't use it currently. So I'm stuck with the regular options like doing a solution I do know of or spending more time looking for solutions."

I will admit that there may be a solution with a shovel that actually would work (one way to get this is to take some great solution and then tack on a shovel, which is not optimal but may still be way better than anything we currently know of). So my criticism doesn't 100% rule shovels out. However, it rules shovels out for the time being, as far as is known, pending a new idea about how to make a shovel work. We can only act on solutions we know of, and I have a criticism of the shovel category of ideas as we currently understand it. Our current understanding is that shovels help us dig, and can be used as weapons, and can be salvaged for resources like wood and metal, and can be sold, but that just vaguely saying "use a shovel somehow" does not help me solve a problem of intellectually persuading people.

you can't observe entities

My nervous system makes perfectly good entities out of my sensory stream. Moreover, a rat's nervous system also makes perfectly good entities out of its sensory stream, regardless of the fact that the rat has never heard of epistemology and is not very philosophically literate.

I don't think humans think like rats, and I propose we don't debate animal "intelligence" at this time. I'll try to speak to the issue in a different way.

I think humans have sufficient control over their observing that they don't get stuck and unable to make progress due to built-in biases and errors. For example, people can consciously think "that looked like a dog at first glance, but actually isn't a dog". So you can put thought into what the entities are. To the extent you have a default, you can partly change what that default is, and partly reinterpret it after doing the observation. And you're capable of observing in a sufficiently non-lossy way to get whatever information you need (at least with tools like microscopes for some cases). You aren't just inherently, permanently blind to some ways of dividing up the world into entities, or some observable things.

And whatever default your genes gave you about entities is not super reliable. It may be pretty good, but it's very much capable of errors. So I'll make a weaker claim: you can't infallibly observe entities. You need to put some actual thought into what the entities are and aren't, and the inductivist perspective doesn't address this well. (As to rats, they actually start making gross errors in some situations, due to their inability to think like a human when dealing with situations they weren't evolved for.)

you have to interpret what entities there are (or not – as you advocated by saying only prediction matters)

or not

Or not? Prediction matters, but entities are an awfully convenient way to make predictions.

but when two ways of thinking about entities (or, a third option, not thinking about entities at all) give identical predictions, then you said it doesn't matter which you do? one entity (or none) is as good as another as long as the predictions come out the same?

but i don't think all ways of looking at the world in terms of entities are equally convenient for aiding us in making predictions (or for some other important things like coming up with new hypotheses!)

Comment author: Lumifer 10 November 2017 07:52:33PM 0 points [-]

Huh, that shaft ended in a loud screech and a clang... Let's drop another shaft!

Why don't you have to care about the infinity of theories?

I don't have to care about the infinity of theories because if they all make exactly the same predictions, I don't care that they are different.

This is highly convenient because I am, to quote an Agent, "only human" and humans are not well set up to deal with infinities.

they can be criticized as a group by pointing out that a shovel won't help solve the problem

How do you know that without examining the specific theories?

We can only act on solutions we know of, and I have a criticism of the shovel category of ideas as we currently understand it.

Right, but the point is that you do not have a solution at the moment and there is an infinity of theories which propose potential shovel-ready solutions. You have no basis for rejecting them because "I don't know of a solution with a shovel" -- they are new-to-you solutions, that's the whole point.

To the extent you have a default, you can partly change what that default is, and partly reinterpret it after doing the observation.

Yes, of course, but you were claiming there are no such things as observations at all, merely some photons and such flying around. Being prone to errors is an entirely different question.

one entity (or none) is as good as another as long as the predictions come out the same?

Predictions do not come out of nowhere. They are made by models (= imperfect representations of reality) and "entity" is just a different word for a "model". If you don't have any entities, what exactly generates your predictions?

Comment author: curi 10 November 2017 08:33:02PM *  0 points [-]

I don't find these replies very responsive. Are you trying to understand what I'm getting at, or just writing local replies to a selection of my points? This is not the first time I've tried to write some substantial explanation and gotten not much engagement from you (IMO).

Comment author: Lumifer 10 November 2017 09:08:07PM 0 points [-]

Oh, I understand what you are getting at. I just think that you're wrong.

I'm writing local replies because fisking walls of text gets tedious very very quickly. There is no point in debating secondary effects when it's pretty clear that the source disagreement is deeper.

Comment author: curi 10 November 2017 09:14:43PM *  0 points [-]

I'm going to end the discussion now, unless you object. I'm willing to consider objections.

I'm stopping for a variety of reasons, some of which I talked about previously, such as your discussion limitations regarding references. I think you don't understand and aren't willing to do what it takes to understand.

If we stop and you later want to get these issues addressed, you would be welcome to post to the FI forum: http://fallibleideas.com/discussion-info

Comment author: Lumifer 10 November 2017 09:20:59PM 0 points [-]

I think you don't understand and aren't willing to do what it takes to understand.

s/understand/be convinced/g and I'll agree :-)

Was a fun ride!

Comment author: ImmortalRationalist 02 November 2017 06:00:08PM 0 points [-]

Here is a somewhat relevant video.

Comment author: whpearson 01 November 2017 12:21:21PM 0 points [-]

Has anyone here put much thought into parenting/educating AGIs?

I'm interested in General Intelligence Augmentation: what it would be like to try to build/train an artificial brain lobe and make it part of a normal human intelligence.

I wrote a bit on my current thoughts on how I expect to align it using training/education here, but watching this presentation is necessary for context.

Comment author: siIver 01 November 2017 09:32:32AM 0 points [-]

Because

"[the brain] is sending signals at a millionth the speed of light, firing at 100 Hz, and even in heat dissipation [...] 50000 times the thermodynamic minimum energy expenditure per binary swtich operation"

https://www.youtube.com/watch?v=EUjc1WuyPT8&t=3320s

AI will be quantitatively smarter because it'll be able to think over 10000 times faster (arbitrary conservative lower bound), and it will be qualitatively smarter because its software will be built by an algorithm far better than evolution.
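
As a rough sanity check on that figure, here is a minimal back-of-envelope sketch in Python; the hardware numbers are illustrative assumptions loosely matching the quoted talk, not measurements:

    # Where an "over 10000 times faster" figure could come from. The numbers
    # below are illustrative assumptions, not measurements.
    neuron_firing_rate = 100.0      # Hz, the sustained firing rate cited in the quote
    electronic_clock_rate = 1.0e9   # Hz, a deliberately modest 1 GHz clock

    speedup = electronic_clock_rate / neuron_firing_rate
    print(speedup)  # 10,000,000 -- so a 10,000x claim leaves a very large margin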

Comment author: Lumifer 01 November 2017 03:57:45PM 0 points [-]

AI will be quantitatively smarter because it'll be able to think over 10000 times faster

My calculator can add large numbers much, much faster than I. That doesn't make it "quantitatively smarter".

an algoirthm far better than evolution

Given that no one has any idea about what that algorithm might look like, statements like this seem a bit premature.

Comment author: Tehuti 05 November 2017 08:26:07PM *  0 points [-]

My calculator can add large numbers much, much faster than I. That doesn't make it "quantitatively smarter".

Your brain actually performs much more analysis each second than any computer we have:

At the time of this writing, the fastest supercomputer in the world is the Tianhe-2 in Guangzhou, China, and has a maximum processing speed of 54.902 petaFLOPS. A petaFLOP is a quadrillion (one thousand trillion) floating point calculations per second. That’s a huge amount of calculations, and yet, that doesn’t even come close to the processing speed of the human brain. In contrast, our miraculous brains operate on the next order higher. Although it is impossible to precisely calculate, it is postulated that the human brain operates at 1 exaFLOP, which is equivalent to a billion billion calculations per second.

https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html

Of course this is structurally very different from a CPU or a GPU etc., but the overall processing power of the brain is still much greater.
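
Taking both quoted figures at face value, a quick sketch of the implied ratio (the numbers are simply the ones cited above):

    # Ratio implied by the quoted figures, both taken at face value.
    brain_flops = 1.0e18         # ~1 exaFLOP, the article's rough estimate for the brain
    tianhe2_flops = 54.902e15    # 54.902 petaFLOPS, Tianhe-2's peak speed

    print(brain_flops / tianhe2_flops)  # ~18.2x in the brain's favour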

Comment author: curi 01 November 2017 09:22:22PM *  0 points [-]

I think AGIs will be built by evolution, and use evolution for their own thinking, because I think human thinking uses evolution (replication with variation and selection of ideas). I don't think any other method of knowledge creation is known, other than evolution.

Comment author: Lumifer 02 November 2017 12:37:23AM 1 point [-]

I don't think any other method of knowledge creation is known, other than evolution.

The scientific method doesn't look much like evolution to me. At a simpler level, things like observation and experimentation don't look like it, either.

Comment author: username2 10 November 2017 01:23:40PM 0 points [-]

I went down the rabbit hole of your ensuing discussion and it seems to have broken LW, but it didn't look like you were very convinced yet. Thanks for taking one for the team.

Comment author: Lumifer 10 November 2017 03:39:26PM 0 points [-]

Too deep we delved there, and woke the nameless fear...

I suspect there is an implicit max thread depth and once it's reached, LW's gears and cranks (if only!) screech to a halt.

Comment author: curi 02 November 2017 12:48:54AM 0 points [-]

The scientific method involves guesses (called "hypotheses") and criticism (including by experimental tests). That follows the pattern of evolution (exactly, not by analogy): replication with variation (guessing), and selection (criticism).
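
To make the structural claim concrete, here is a minimal sketch of that pattern; the vary and criticize functions are placeholders, not anything specific from this discussion:

    # Minimal sketch of the abstract pattern being described: ideas are replicated
    # with variation (guessing) and filtered by selection (criticism).
    def evolve_ideas(initial_guesses, vary, criticize, rounds=10):
        pool = list(initial_guesses)
        for _ in range(rounds):
            pool = pool + [vary(idea) for idea in pool]             # replication with variation
            pool = [idea for idea in pool if not criticize(idea)]   # selection: drop refuted ideas
        return pool  # the guesses that have so far survived criticism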

Comment author: Lumifer 02 November 2017 04:10:00PM 1 point [-]

That follows the pattern of evolution (exactly, not by analogy)

Not at all. Hypothesis generation doesn't look like taking the current view and randomly changing one element in it. More importantly, science is mostly teleological and evolution is not.

But let's take a trivial example. Let's say I'm walking by a food place and I notice a new-to-me dish. I order it, eat it, and decide that it's tasty. I have acquired knowledge. How's that like evolution?

Comment author: curi 02 November 2017 05:59:39PM *  0 points [-]

the way you decide it's tasty is by guessing it's tasty, and guessing some other things, and criticizing those guesses, and "it's tasty" survives criticism while its rivals don't.

lots of this is done at an unconscious level.

it has to be this way b/c it's the only known way of creating knowledge that could actually work. if you find it awkward or burdensome, that doesn't make it impossible – which puts it ahead of its rivals.

Comment author: Lumifer 02 November 2017 06:23:20PM 0 points [-]

The word you're looking for is "testing". I test whether that thing is tasty.

Testing is not the same thing as evolution.

it has to be this way b/c it's the only known way of creating knowledge that could actually work

That's an entirely circular argument.

Comment author: curi 02 November 2017 06:30:30PM 0 points [-]

Evolution is an abstract pattern which makes progress via the correction of errors using selection. If something fits the pattern, then it's evolution.

Would you agree with something like: if induction doesn't work, and CR does, then it's a good idea to accept CR? Even if you find it counter-intuitive and awkward from your current perspective?

Comment author: Lumifer 02 November 2017 08:01:39PM *  0 points [-]

Evolution is an abstract pattern which makes progress via the correction of errors using selection

I think we might be having terminology problems -- in particular I feel that you stick the "evolution" label on vastly broader things.

First, the notion of progress. Evolution doesn't do progress, not being teleological. Evolution does adaptation to the current environment. A decrease in complexity is not an uncommon event in evolution, for example. A mass die-off is not an uncommon event, either.

Second, evolution doesn't correct "errors". Those are not errors, those are random exploratory steps. A random walk. And evolution does not correct them, it just kills off those who misstep (which is 99.99%+ of steps).

if induction doesn't work, and CR does, then it's a good idea to accept CR?

Sure. Please provide empirical evidence.

And I still don't understand what's wrong with plain-vanilla observation as a way to acquire knowledge.

Comment author: Elo 02 November 2017 04:44:47AM 0 points [-]

The scientific method

You read the same book as me! "Theory and Reality" by Peter Godfrey-Smith. I am surprised you say this.

What you describe is the hypothetico-deductive method (https://en.wikipedia.org/wiki/Scientific_Method; the method pictured there is the hypothetico-deductive method, so Wikipedia is wrong and disagrees with its own sources). The hypothetico-deductive method involves guesses, but the scientific method according to that book is about:

  1. observation
  2. measurement (and building models that can be predictive of that measurement)
  3. standing on the shoulders of the existing body of knowledge.
  4. ???
  5. Profit!

Edit: that wiki page has changed a lot over the last few months and now I am less sure about what it says.

Comment author: curi 02 November 2017 07:04:51AM 0 points [-]

I don't understand what reading a book has to do with it, or what you wish me to take from the wikipedia link. In my comment I stated the CR position on scientific method, which is my position. Do you have a criticism of it?

Comment author: curi 01 November 2017 09:38:23AM 0 points [-]

i think humans don't use their full computational capacity. why expect an AGI to?

in what way do you think AGI will have a better algorithm than humans? what sort of differences do you have in mind?

Comment author: siIver 01 November 2017 10:30:43AM *  0 points [-]

It doesn't really matter whether the AI uses its full computational capacity. If the AI has a 100000 times larger capacity (which is again a conservative lower bound) and it only uses 1% of it, it will still be 1000 times as smart as a human using their full capacity.

AGI's algorithm will be better, because it has instant access to more facts than any human has time to memorize, and it will not have all of the biases that humans have. The entire point of the sequences is to list dozens of ways that the human brain reliably fails.

Comment author: curi 01 November 2017 08:23:28PM 0 points [-]

If the advantage is speed, then in one year an AI that thinks 10,000x faster could be as productive as a person who lives for 10,000 years. Something like that. Or as productive as one year each from 10,000 people. But a person could live to 10,000 and not be very productive, ever. That's easy, right? Because they get stuck, unhappy, bored, superstitious ... all kinds of things can go wrong with their thinking. If AGI only has a speed advantage, that won't make it immune to dishonesty, wishful thinking, etc. Right?

Humans have fast access to facts via google, databases, and other tools, so memorizing isn't crucial.

The entire point of the sequences is to list dozens of ways that the human brain reliably fails.

I thought they talked about things like biases. Couldn't an AGI be biased, too?

Comment author: Lumifer 01 November 2017 08:26:18PM *  0 points [-]

For fun ways in which NN classifiers reliably fail, google up adversarial inputs :-)

Example

Comment author: Elo 01 November 2017 08:38:50PM 0 points [-]

Rubbish in, rubbish out - right?

Comment author: Lumifer 02 November 2017 12:33:31AM *  0 points [-]

No, not quite. It's more like "let us poke around this NN and we'll be able to craft inputs which look like one thing to a human and a completely different thing to the NN, and the NN is very sure of it".
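
For the curious, here is a minimal sketch of one standard way such inputs are crafted (the fast gradient sign method). It assumes PyTorch and torchvision are installed; the model choice and epsilon are illustrative, not taken from the linked example:

    # Minimal FGSM sketch: nudge each pixel slightly in the direction that
    # increases the classifier's loss. Assumes PyTorch/torchvision; the model
    # and epsilon are illustrative choices.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm_perturb(image, label, epsilon=0.03):
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), label)
        loss.backward()
        # The per-pixel change is imperceptible to a human, yet it can flip the
        # model's prediction, often with high reported confidence.
        return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    # Usage: x is a (1, 3, 224, 224) image tensor scaled to [0, 1], y its true class index.
    # x_adv = fgsm_perturb(x, y)
    # print(model(x_adv).argmax(dim=1))  # frequently differs from y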