tangerine


Thank you for the reply!

I’ve actually come to a remarkably similar conclusion to the one described in this post. We’re phrasing things differently (I called it the “myth of general intelligence”), but I think we’re getting at the same thing. The Secret of Our Success has been very influential on my thinking as well.

This is also my biggest point of contention with Yudkowsky’s views. He seems to suggest (for example, in this post) that capabilities are gained from being able to think well and think a lot. In my opinion he vastly underestimates the amount of data/experience required to make that possible in the first place, for any particular capability or domain. This speaks to the age-old debate between (classical) rationalism and empiricism, where Yudkowsky seems to sit on the rationalist side, whereas it seems you and I would lean more to the empiricist side.

Entities that reproduce with mutation will evolve under selection. I'm not so sure about the "natural" part. If AI takes over and starts breeding humans for long floppy ears, is that selection natural?

In some sense, all selection is natural, since everything is part of nature, but an AI that breeds humans for some trait can reasonably be called artificial selection (and mesa-optimization). If such a breeding program happens to help the system survive, nature selects for it; if not, it tautologically doesn’t. In any case, natural selection still applies.

But there won't necessarily be more than one AI, at least not in the sense of multiple entities that may be pursuing different goals or reproducing independently. And even if there are, they won't necessarily reproduce by copying with mutation, or at least not with mutation that's not totally under their control with all the implications understood in advance. They may very well be able to prevent evolution from taking hold among themselves. Evolution is optional for them. So you can't be sure that they'll expand to the limits of the available resources.

In a chaotic and unpredictable universe such as ours, survival is virtually impossible without differential adaptation and not guaranteed even with it. (See my reply to lukedrago below.)

I don't know exactly how selection pressures would take hold, but it seems to me that preventing them would require complete and indefinite control over the environment. That is not possible, because the universe is largely computationally irreducible and chaotic. Eventually, something surprising will occur that an existing system will not survive. Diverse ecosystems are robust to this to some extent, but that robustness requires competition, which in turn creates selection pressures.
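To make this concrete, here is a minimal toy simulation, a sketch of my own in Python; the population cap, mutation size, and generation count are arbitrary illustrative choices, not claims about any real system. Replicators that reproduce with mutation under a fixed resource cap come under selection pressure without anyone imposing it:

```python
import random

# Toy model: replicators with a heritable "reproduction rate" trait, mutating
# offspring, and a hard resource cap. All parameters are arbitrary choices
# made purely for illustration.

POP_CAP = 1000       # resource limit: at most this many survivors per generation
MUTATION_SD = 0.01   # standard deviation of the mutation added to the trait
GENERATIONS = 200

random.seed(0)

# Start uniform: every replicator expects exactly one offspring.
population = [1.0] * POP_CAP

for _ in range(GENERATIONS):
    offspring = []
    for trait in population:
        # Expected number of children equals the trait value.
        n_children = int(trait) + (random.random() < trait % 1.0)
        for _ in range(n_children):
            child = max(0.0, trait + random.gauss(0.0, MUTATION_SD))
            offspring.append(child)
    # The resource cap enforces competition: cull uniformly at random.
    if len(offspring) > POP_CAP:
        offspring = random.sample(offspring, POP_CAP)
    population = offspring
    if not population:
        break  # extinction is also a possible outcome

if population:
    print("mean reproduction rate after", GENERATIONS, "generations:",
          sum(population) / len(population))
else:
    print("population went extinct")
```

Note that the culling is uniformly random; selection emerges purely from differential reproduction plus the resource cap, and the mean reproduction rate drifts upward. Preventing this would require guaranteeing that resources never bind, which is exactly the kind of complete and indefinite control over the environment argued above to be impossible.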


humans are general because of the data, not the algorithm

Interesting statement. Could you expand a bit on what you mean by this?

You cannot in general pay a legislator $400 to kill a person who pays no taxes and doesn't vote.

Indeed not directly, but as the inferential distance increases, it quickly becomes more palatable. For example, most people would rather buy a $5 T-shirt made by a child for starvation wages on the other side of the world than a $100 T-shirt made locally by someone who can afford to buy a house on their salary. And many of those same T-shirt buyers would bury their heads in the sand when made aware of that fact.

If I can tell an AI to increase profits, incidentally causing the AI to ultimately kill a bunch of people, I can at least claim a clean conscience by saying that wasn't what I intended, even though it happened just the same.

In practice, legislators do this sort of thing routinely. They pass legislation that causes harm—sometimes a lot of harm—and sleep soundly.

Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won't be democracy.

Agreed. The rich and powerful could pick off more and more economically irrelevant classes while promising the remaining ones that the same won't happen to them, until eventually they can get everything they need from AI and live in enclaves protected by vast drone armies. Pretty bleak, but it seems like the default scenario given the current incentives.

It seems really hard to think of any examples of such tech.

I think you would effectively have to build extensions to people's neocortexes in such a way that those extensions cannot ever function on their own. Building AI agents is clearly not that.


Excellent post. This puts into words really well some thoughts that I have had.

I would also like to make an additional point: it seems to me that many people (perhaps fewer on LessWrong) hold the view that humanity has somehow “escaped” evolution by natural selection, since we can choose to do a variety of things that our genes do not “want”, such as having non-reproductive sex. This is wrong. Evolution by natural selection is inescapable. When resources are relatively abundant, as they currently are in many Western nations, it can seem escapable because selection pressures are low and we can afford to spend resources somewhat frivolously. But resources are not infinitely abundant, so those selection pressures will increase over time, and they will select out unproductive elements.

This means that even if we managed to get alignment right and form a utopia where everybody gets everything they need or more, people would eventually still be discarded once they cannot produce anything of economic value. In your post, capitalist incentives effectively play the role of natural selection, but even if we converted to a communist utopia, the result would ultimately be the same once selection pressures increase sufficiently, and they will.

Very interesting write-up! When you say that orcas could be more intelligent than humans, do you mean something similar to them having a higher IQ or g factor? I think this is quite plausible.

My thinking has been very much influenced by Joseph Henrich's The Secret of Our Success, which you mentioned. For example, looking at the behavior of feral (human) children, it now seems quite obvious to me that the things humans can do better than other animals are all things imitated from an existing cultural “reservoir”, so to speak; an individual human has virtually no hope of inventing them within a single lifetime. Examples include language, music, and engineering principles.

Gene-culture coevolution has resulted in a human culture and a human body that are adapted to each other. For example, the human digestive system is quite short because we have been cooking food for a long time; human muscles are very weak compared to those of our evolutionary cousins because we have learned to make do with tools (weapons) instead; and we have relatively protracted childhoods in order to absorb all the culture required to survive and reproduce. If we tried to “uplift” orcas, the fact that human culture has co-evolved with the human body and not with the orca body would likely be an obstacle to getting them to learn it (a bit like trying to run software built for x86 on an ARM processor). Still, I think progress in LLM scaling shows that neural networks (artificial or biological) can absorb a significant chunk of human culture, as long as you have the right training method. I've made a similar point here.

There is nothing in principle that stops a chimpanzee from being able to read and write English, for example. It’s just that we haven’t figured out the methods to configure their brains into that state, because chimpanzees lack the strong tendency to imitate that human children have, which is what makes training children so much easier.

I agree with this view. Deep neural nets trained with SGD can learn anything. (“The models just want to learn.”) Human brains are also not really different from the brains of other animals. I think the main struggles are (1) scaling up compute, which follows a fairly predictable pattern, and (2) figuring out what we actually want the models to learn, which is what I think we’re most confused about.
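As a toy illustration of that first point, and of “general because of the data, not the algorithm”: the sketch below (my own, in plain NumPy; the architecture, target function, and hyperparameters are all arbitrary choices) trains a generic two-layer network with plain gradient descent on a nonlinear target it was never designed for. The machinery is completely general; the data determines what gets learned.

```python
import numpy as np

# Toy illustration: a generic two-layer net, trained with plain (full-batch)
# gradient descent on mean squared error, fits a nonlinear target function.
# Architecture and hyperparameters are arbitrary choices for the sketch.

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(256, 1))
y = np.sin(X)  # the "data"; the network knows nothing about sine waves

HIDDEN, LR, STEPS = 32, 0.05, 2000
W1 = rng.normal(0.0, 1.0, (1, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 1.0, (HIDDEN, 1)) / np.sqrt(HIDDEN)
b2 = np.zeros(1)

for step in range(STEPS):
    # Forward pass with tanh hidden units.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass: gradients of MSE = mean((pred - y)**2).
    grad_pred = 2.0 * err / len(X)
    dW2 = h.T @ grad_pred
    db2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1.0 - h**2)  # backprop through tanh
    dW1 = X.T @ grad_h
    db1 = grad_h.sum(axis=0)

    # Plain gradient descent update (in place).
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= LR * grad

print("final MSE:", float((err**2).mean()))
```

Swap in a different y and the identical training loop learns that instead, which is the sense in which the algorithm is generic and the data does the work.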


My introduction to Dennett, half a lifetime ago, was a talk of his.

That was the start of his profound influence on my thinking. I especially appreciated his continuous and unapologetic defense of the meme as a useful concept, despite the many detractors of memetics.

Sad to know that we won't be hearing from him anymore.
