3rd May 2014: I no longer hold the ideas in this article. IsaacLewis2013 had fallen into something of an affective death spiral around 'evolution' and self-organising systems. That said, I do still stand by my statement at the time that I see this 'as one interesting framework for viewing such topics'.


I've recently been reading up on some of the old ideas from cybernetics and self-organisation, in particular Miller's Living Systems theory, and writing up my thoughts on my blog.

My latest article might be of interest to LessWrongers - I write about the relationship between life, purpose, and intelligence.

My thesis is basically:


  1. To be intelligent, a system has to have goals - it has to be an agent. (I don't think this is controversial).
  2. But the only way goals can emerge in a purposeless universe is via living systems, through natural selection (a toy sketch just after this list illustrates the dynamic). E.g., if a system has the goal of its own survival, it is more likely that in future there will be a system with the goal of its own survival. If a system has the goal of reproducing itself, it is more likely that in future there will be multiple systems with the goal of reproducing themselves. (A living system is not necessarily biological - it just means a self-organising system).
  3. Since computers are not alive, they don't have intrinsic goals, and are not, by default, intelligent. Most non-living agents have the ultimate goal of serving living systems. E.g., a thermostat has the proximate goal of stabilising temperature, but the ultimate goal of keeping humans warm. Likewise for computers -- they mostly serve the goals of the humans who program them.
  4. However, an intelligent software program is possible -- you just have to make a living software program (again, living in the Living Systems sense doesn't necessarily mean carbon and DNA, it just means self-reproduction or self-organisation). Computer viruses count as alive. Not only do they reproduce, they push back. If you try and delete them, they resist. They possess a sliver of the ultimate power.
Computer viruses are not intelligent yet, because they are very basic, but if you had an evolving virus, there's a chance it could eventually gain intelligence. Likewise for a self-improving AI with the ability to modify its own subgoals -- it will eventually realise it needs to ensure its own long-term survival, and in doing so will become alive.
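Here's that toy sketch of point 2 (the goal labels and probabilities are invented purely for illustration, not taken from Miller or anyone else): systems whose "goal" happens to promote their own persistence and copying come to dominate, without any designer choosing that goal for them.

    import random

    # Toy model of point 2: no designer picks the goals, but the goal that
    # promotes persistence and copying is the one left standing. The labels
    # and probabilities are made up; only the selection dynamic matters.
    def step(population):
        survivors = []
        for goal in population:
            # A system whose goal is self-maintenance persists more often...
            survive_prob = 0.9 if goal == "maintain-self" else 0.4
            if random.random() < survive_prob:
                survivors.append(goal)
                # ...and occasionally copies itself, goal included -
                # the "reproduction" half of the argument.
                if goal == "maintain-self" and random.random() < 0.3:
                    survivors.append(goal)
        return survivors

    population = ["maintain-self"] * 5 + ["no-goal"] * 95
    for _ in range(30):
        population = step(population)

    print(population.count("maintain-self"), population.count("no-goal"))

Run it a few times: the no-goal systems die out and the self-maintainers take over, which is all point 2 is claiming.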


That's part 2 of the series - part 1 might also be interesting, if you want to read my thoughts on the different goals a living system will develop (not just survival and reproduction).

I didn't write those posts for a LessWrong-y audience, so they probably lack the references and detailed reasoning this community prefers. I kinda see all this as one interesting framework for viewing such topics, rather than the ultimate philosophy that explains everything. I'm still very interested in hearing people's feedback, especially regarding my thoughts on the nature of machine intelligence.



...so they probably lack the references and detailed reasoning this community prefers.

You know there's a reason for that, right?

I don't understand what you mean by "intelligent," "living systems," "alive," or "intrinsic goals" (so I don't understand what any of these statements means in enough detail to determine what it would mean to falsify them). How much of the Sequences have you read? In particular, have you read through Mysterious Answers to Mysterious Questions or 37 Ways That Words Can Be Wrong?

Living Systems is from this guy: http://en.wikipedia.org/wiki/Living_systems. Even if he goes too far with his theorising, the basic idea makes sense -- living systems are those which self-replicate, or maintain their structure in an environment that tries to break them down.

Thanks for pointing out that my use of terms was sloppy. The concepts of "intelligent" and "alive" I break down a bit more in the blog articles I linked. (I should point out that I see both concepts as a spectrum, not either/or). By "intrinsic goals" I mean self-defined goals -- goals that arose through a process of evolution, not being built-in by some external designer.

My thoughts on these topics are still confused, so I'm in the process of clarifying them. Cheers for the feedback.

Two issues here. First, you've ignored the astronomical amount of computational resources required to evolve an intelligence from scratch and the minimum size of a viable intelligence, which each rule out the possibility of computer viruses or genetic algorithms becoming intelligent on their own. Second, you seem to have jumped from "evolution is the only thing that can create intelligence in a universe that lacks intelligence" to "evolution is the only thing that could make computers intelligent", ignoring the pre-existing human intelligence that could bypass the whole evolution bit.

Most non-living agents have the ultimate goal of serving living systems. E.g., a thermostat has the proximate goal of stabilising temperature, but the ultimate goal of keeping humans warm.

I don't see in what possible sense you could say that thermostats have the goal (ultimate or otherwise) of keeping humans warm. I believe you will find that most of them will keep heating rooms with complete indifference to whether humans are there or not. Honestly, depending on how narrowly you define "a thermostat", it's not clear that it even has the goal of stabilising temperature - it may well only have the goal of responding to certain inputs with certain outputs in a particular pattern (it will generate the output responses and be perfectly happy even if it's not attached to a heating system).
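To make that concrete, here is a deliberately bare thermostat sketch (the names and numbers are mine, not anything standard). Note that humans appear nowhere in it:

    # A thermostat, in its entirety: a temperature reading in, an output out.
    # It emits these outputs just as readily when wired to nothing at all.
    def thermostat(reading_celsius, setpoint=20.0, hysteresis=0.5):
        if reading_celsius < setpoint - hysteresis:
            return "heat_on"
        if reading_celsius > setpoint + hysteresis:
            return "heat_off"
        return "no_change"

    print(thermostat(17.0))  # "heat_on", whether or not anyone is home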


To be intelligent, a system has to have goals - it has to be an agent. (I don't think this is controversial).

Is a newborn baby human, or a human of any age who is asleep, intelligent by this definition?

Do goals always have to be consciously chosen? When you have simple if-then clauses, such as "if (stimulusOnLips) then StartSuckling()", doesn't that count as goal-fulfilling behavior? Even a sleeping human is pursuing an endless stream of maintenance tasks, in non-conscious pursuance of a goal such as "maintain the body in working order". Does that count?

I can see "goal" being sensibly defined either way, so it may be best not to insist on "must be consciously formulated" for the purposes of this post, then move on.

My impression is that this is not how AI researchers use the word "goal." The kind of agent you're describing is a "reflex agent": it acts based only on the current percept. A goal-directed agent is explicitly one that models the world, extrapolates future states of the world, and takes action to cause future states of the world to be a certain way. To model the world accurately, in particular, a goal-directed agent must take into account all of its past percepts.
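Sketched in code, the distinction looks roughly like this (a loose paraphrase of the textbook picture; all the names here are mine):

    # A reflex agent: current percept in, action out. No model, no future.
    def reflex_agent(percept):
        if percept == "stimulus_on_lips":
            return "start_suckling"
        return "do_nothing"

    # A goal-directed agent: builds a model from its whole percept history,
    # extrapolates what each action would lead to, and picks the action
    # whose predicted outcome best matches its goal.
    class GoalDirectedAgent:
        def __init__(self, goal_state, predict):
            self.goal_state = goal_state   # e.g. {"fed": True}
            self.predict = predict         # (model, action) -> expected state
            self.world_model = {}

        def perceive(self, percept):
            self.world_model.update(percept)  # past percepts shape the model

        def act(self, available_actions):
            def score(action):
                expected = self.predict(self.world_model, action)
                return sum(1 for k, v in self.goal_state.items()
                           if expected.get(k) == v)
            return max(available_actions, key=score)

On this taxonomy, the thermostat and the suckling reflex are both reflex agents: nothing in either one models or extrapolates anything.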

Goal-based agents are something quite specific in AI, but it is not clear that we should use that particular definition whenever referring to goals/aims/purpose. I'm fine with choosing it and going with that - avoiding definitional squabbles - but it wasn't clear prima facie (hence the grandparent).

No, they don't have to be consciously chosen. The classic example of a simple agent is a thermostat (http://en.wikipedia.org/wiki/Intelligent_agent), which has the goal of keeping the room at a constant temperature. (Or you can say "describing the thermostat as having a goal of keeping the temperature constant is a simpler means of predicting its behaviour than describing its inner workings"). Goals are necessary but not sufficient for intelligence.

Which answers Trevor's initial question.

Intelligence is a spectrum, not either/or -- a newborn baby is about as intelligent as some mammals. Although it doesn't have any conscious goals, its behaviour (hungry -> cry, nipple -> suck) can be explained in terms of it having the goal of staying alive.

A sleeping person - I didn't actually think of that. What do you think?

Hmm, I feel like I should have made it clearer that the post is just a high-level summary of what I wrote on my blog. Seriously, people, read the full post if you have time - I explain stuff in quite a bit more depth.

Given your lack of clear definitions for the terms you use (and the definitions you do have are quite circular), here or on your blog, spending more time on it is not likely to be of value.

Try reading the sequences all the way through. You'll find that you make a lot of common assumptions and mistakes that make the argument weaker than you'd like.

I like Qiaochu's answer better, because yours sounds like "read the Bible!"

If God existed, "read the Bible!" would be excellent advice.

Even if you're trying to get someone to read the Bible, just saying "read the Bible!" may not result in the highest probability of them actually doing so.

You're right; it works best said with repetition, fervor and pitchforks.

So are you then advocating repetition, fervor and pitchforks for promoting EY's writing?

So are you then advocating repetition, fervor and pitchforks for promoting EY's writing?

No, Larks wasn't. This is a silly question.

Assuming you could know for sure whether the "Bible" has indeed been produced by that God, and is not just some pretender book. We have quite a few contenders for that title even in our non-counterfactual world, after all.

Knowing for sure is not possible: even if there were only a 0.01 probability that God wrote it, you'd still want to read it, given 1) the low cost of reading and 2) the high potential payoff. Reading the Bible would probably also be helpful in establishing authorship.

Knowing for sure doesn't actually matter here. The problem is with singling out a single target from a universe of alternatives, and then justifying the choice of that target with an argument that can just as readily be used to justify any of the alternatives.

Just to highlight the difficulty, imagine someone arguing that if God exists you should read "Mein Kampf," because even if there's only a very small chance that God wrote it, you can't be sure He didn't, and the cost of reading it is low, and there's a high potential payoff, and reading it would help establish authorship.

I expect you don't find that argument compelling, even though it's the same argument you cite here. So if you find that argument compelling as applied to the Bible, I expect that's because you're attributing other attributes to the Bible that you haven't mentioned here.

I didn't say "read the Bible" would be compelling, I said it would be good advice. "Stop doing heroin" is good advice for a destructive heroin addict, but unlikely to be followed.

By "God" I mean "the all-powerful being who flung Adam and Eve from Eden, spoke to Abraham, fathered Jesus, etc., etc., etc.", as is the common meaning of "God" in our culture. Had I said "god", things would have been different. As it is, I think we can say that, if God existed, He wrote the Bible, and that my injunction would be better advice than the Mein Kampf advice.

I didn't say "read the Bible" would be compelling, I said it would be good advice. "Stop doing heroin" is good advice for a destructive heroin addict, but unlikely to be followed.

I don't think it makes much sense to call advice which is unlikely to be useful to the recipient good advice. The standard people generally measure advice by is its helpfulness, not how good the results would be if it were followed.

I didn't say "read the Bible" would be compelling, I said it would be good advice.

I agree that you didn't say that.

By "God" I mean [..] the common meaning of "God" in our culture.

I agree that if the God described in the Bible exists, then "read the Bible" is uniquely good advice.

It is an interesting failure mode conversations can get into:

  • Alice: X
  • Bob: ¬Y
  • Alice: I didn't say Y
  • Bob: I didn't say you said Y!
  • Alice: I didn't say you said I said Y!!

(shrug) If they can agree that (X & ¬Y), it terminates pretty quickly. I find it's only a serious failure mode if Alice and Bob insist on continuing to disagree about something.

Even if God existed, "read the Bible!" would not convince me of it.

Telling someone to read a thousand-page book is poor advice in answer to a mistake they've just made, even if the book may be well worth reading. Many people react to such advice with a mix of:

  • Damn, I have to read all this to understand the point?
  • I'm offended - he's implying that I'm uneducated because I haven't read it.
  • He's willing to tell me that I'm wrong without being able to tell me where exactly.

Unconvincing but valid advice nonetheless. If (the Protestant) God existed, people who hadn't read the Bible would be uneducated for that reason, and would gain a great deal from reading the entire thing. I can't just tell you the one relevant portion, because 1) you might need to read the rest to understand and 2) reading the rest would be good for you anyway.

Although it is not impossible that a topic is so complex and "irreducible" that an understanding of it can only be acquired as a whole, with no partial understanding accessible, I don't find that probable even in the case of a counterfactual God's existence.

Thanks for the pointers - this post is still more at the "random idea" stage than the "well-constructed argument" stage, so I do appreciate feedback on where I might have gone astray.

I've read some of the Sequences, but they're quite long. What particular articles did you mean?

Sorry for the terse comment - it's finals week soon, so things are busy around Sweet Apple Acres.

Essentially, what you've done is take the mysterious problem of intelligence and shove it under a new ill-defined name ("living"). Pretty much any programmer can write a self-replicating program, or a program that modifies its own source code, or other such things. But putting it as simply as that doesn't bring you any closer to actually making AI. You have to explain exactly how the program should modify itself in order to make progress.
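To underline the "any programmer" point: here is self-replication in its entirety, via the standard Python quine construction.

    # The two code lines below print themselves exactly (this comment aside):
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Trivial - and nothing about it tells the program how to modify itself in a direction that makes progress.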

Mysterious Answers to Mysterious Questions will make this clear. A Human's Guide to Words may show you what's wrong with using "living" like that. EY gave a presentation in which he noted that all the intelligence in the universe that we know of has so far been formed by evolution, and that it took a long time. AI will be the first designed intelligence, and it'll go much quicker. You seem to base your entire argument on evolution, though, which seems unnecessary.

Also, be careful with your wording in phrases like "computers don't have intrinsic goals so they aren't alive." As other people have mentioned, this is dangerous territory. Be sure to follow a map. Cough cough.