Comment author: Qiaochu_Yuan 14 March 2013 05:37:21PM *  5 points [-]

I don't understand what you mean by "intelligent," "living systems," "alive," or "intrinsic goals" (so I don't understand what any of these statements means in enough detail to determine what it would mean to falsify them). How much of the Sequences have you read? In particular, have you read through Mysterious Answers to Mysterious Questions or 37 Ways That Words Can Be Wrong?

Comment author: IsaacLewis 14 March 2013 06:24:28PM 1 point [-]

Living Systems is from this guy: http://en.wikipedia.org/wiki/Living_systems. Even if he goes too far with his theorising, the basic idea makes sense -- living systems are those which self-replicate, or maintain their structure in an environment that tries to break them down.

Thanks for pointing out that my use of terms was sloppy. The concepts of "intelligent" and "alive" I break down a bit more in the blog articles I linked. (I should point out that I see both concepts as a spectrum, not either/or). By "intrinsic goals" I mean self-defined goals -- goals that arose through a process of evolution, not being built-in by some external designer.

My thoughts on these topics are still confused, so I'm in the process of clarifying them. Cheers for the feedback.

Comment author: Kawoomba 14 March 2013 05:38:30PM 0 points [-]

Do goals always have to be consciously chosen? When you have simple if-then clauses, such as "if (stimulusOnLips) then StartSuckling()", doesn't that count as goal-fulfilling behavior? Even a sleeping human is pursuing an endless stream of maintenance tasks, in non-conscious pursuance of a goal such as "maintain the body in working order". Does that count?
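The if-then clause above can be sketched as a bare condition-action rule. This is a minimal illustration, not anyone's actual implementation, and all names in it are made up:

```python
from typing import Optional

# A minimal condition-action ("reflex") rule: no conscious deliberation,
# just an action that fires whenever its trigger condition holds.

def suckling_reflex(percept: dict) -> Optional[str]:
    """Return the suckling action when a stimulus is felt on the lips."""
    if percept.get("stimulus_on_lips"):
        return "start_suckling"
    return None  # condition not met, so no action

action = suckling_reflex({"stimulus_on_lips": True})
print(action)  # -> start_suckling
```

Whether firing such a rule counts as "pursuing a goal" is exactly the definitional question at issue: the rule fulfils a goal (feeding) without representing it anywhere.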

I can see "goal" being sensibly defined either way, so it may be best not to insist on "must be consciously formulated" for the purposes of this post, then move on.

Comment author: IsaacLewis 14 March 2013 06:14:28PM 0 points [-]

No, they don't have to be consciously chosen. The classic example of a simple agent is a thermostat (http://en.wikipedia.org/wiki/Intelligent_agent), which has the goal of keeping the room at a constant temperature. (Or you can say "describing the thermostat as having a goal of keeping the temperature constant is a simpler means of predicting its behaviour than describing its inner workings"). Goals are necessary but not sufficient for intelligence.
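The thermostat example can be made concrete with a short sketch. This is only an illustration of the "intentional stance" point above (predicting behaviour via the goal rather than the wiring); the function and parameter names are invented for the example:

```python
# A thermostat as the simplest goal-directed agent: describing it as
# "trying to hold the room at a setpoint" predicts its behaviour
# without knowing anything about its inner workings.

def thermostat(temperature: float, setpoint: float = 20.0,
               deadband: float = 0.5) -> str:
    """Return the heater command for the current room temperature."""
    if temperature < setpoint - deadband:
        return "heat_on"   # too cold: act to raise temperature
    if temperature > setpoint + deadband:
        return "heat_off"  # too warm: act to let it fall
    return "hold"          # within the deadband, do nothing

print(thermostat(18.0))  # -> heat_on
print(thermostat(22.0))  # -> heat_off
```

The goal description ("keep the room near 20 degrees") compresses the behaviour of all three branches into one sentence, which is the sense in which ascribing a goal is a simpler predictive model.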

Comment author: [deleted] 14 March 2013 05:32:19PM 0 points [-]

To be intelligent, a system has to have goals - it has to be an agent. (I don't think this is controversial).

Is a newborn baby human, or a human of any age who is asleep, intelligent by this definition?

Comment author: IsaacLewis 14 March 2013 06:03:47PM *  -1 points [-]

Intelligence is a spectrum, not either/or -- a newborn baby is about as intelligent as some mammals. Although it doesn't have any conscious goals, its behaviour (hungry -> cry, nipple -> suck) can be explained in terms of it having the goal of staying alive.

A sleeping person - I didn't actually think of that. What do you think?

Hmm, I feel like I should have made clearer that this post is just a high-level summary of what I wrote on my blog. Seriously people, read the full post if you have time -- I explain things in quite a bit more depth there.

Comment author: iDante 14 March 2013 05:35:54PM *  -2 points [-]

Try reading the sequences all the way through. You'll find that you make a lot of common assumptions and mistakes that make the argument weaker than you'd like.

Comment author: IsaacLewis 14 March 2013 05:56:18PM *  0 points [-]

Thanks for the pointers - this post is still at the "random idea" stage, not the "well-constructed argument" stage, so I do appreciate feedback on where I might have gone astray.

I've read some of the Sequences, but they're quite long. What particular articles did you mean?

Comment author: IsaacLewis 24 May 2011 01:16:25PM 0 points [-]

This post inspired me to work on my Mandarin study habits - I've been stuck in a low intermediate plateau for a while, and not sure how to advance. I just started to work on this mindmap, http://www.mindmeister.com/maps/show/98440507, based on the ideas in this article.

I've also recently started following GTD (the productivity system), which emphasises choosing specific actions to follow, rather than big and vague projects. I think this article's approach is similar.

Comment author: Yoreth 14 June 2010 08:10:24AM 5 points [-]

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

Firstly, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don’t even see that happening.

Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.

You can probably see where this is going. The information content of a mind cannot exceed the amount of information necessary to specify a representation of that same mind. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself because this would require it to understand in a deep, level-spanning way how it itself works. Of course, it could just add complexity and hope that it works, but that's just evolution, not intelligence explosion.

So: do you know any counterarguments or articles that address either of these points?

Comment author: IsaacLewis 14 June 2010 05:55:40PM 10 points [-]

Two counters to the majoritarian argument:

First, it is being mentioned in the mainstream - there was a New York Times article about it recently.

Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought - nuclear war. I've been reading Bertrand Russell's autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (the UK's upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons, until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.

Looking back even further, for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed amongst physicists.

I think your second point is stronger. However, I don't think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you've got something that's like a human brain, but faster. Let it replicate itself, and you've got the equivalent of a team of humans, but which have the advantages of shared memory and instantaneous communication.

Now, if humans can design an AI, surely a team of 1,000,000 human equivalents running 1000x faster can design an improved AI?