Comment author: Eliezer_Yudkowsky 01 March 2010 11:20:47PM 9 points [-]

And the Earth is slowly curving in its orbit, generating an apparent centrifugal force that decreases your weight at midnight and increases your weight at noon. Except for a very tiny tidal correction, these two forces exactly cancel, which is why the Earth stays in orbit in the first place. This argument would only be valid if the Earth were suspended motionless on two giant poles running through the axis or something.

Comment author: Karl_Smith 02 March 2010 12:59:41AM 0 points [-]

This was my original thought too, until I realized that of course it cancels, or else the Earth would crack into pieces.

Comment author: RichardKennaway 01 March 2010 10:27:10PM *  2 points [-]

You are talking about control systems.

A control system has two inputs (called its "perception" and "reference") and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism.

What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception.
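The loop described above can be sketched numerically. This is an illustrative toy, not anyone's actual model: a pure proportional controller whose output feeds back through a simple "environment" that also suffers a constant disturbance.

```python
# Minimal sketch of a control system: one perception, one reference,
# one output. All names and numbers here are illustrative.

def simulate(reference=10.0, gain=0.5, disturbance=-0.3, steps=50):
    perception = 0.0  # the environmental variable as sensed
    for _ in range(steps):
        error = reference - perception
        output = gain * error               # controller acts on the error
        perception += output + disturbance  # environment integrates output
                                            # plus an outside disturbance
    return perception

# The perception settles near the reference despite the disturbance
# (a pure proportional controller leaves a small steady-state offset:
# here it converges to reference + disturbance/gain = 9.4, not 10.0).
print(simulate())
```

The point of the sketch is the second defining property above: the feedback loop holds the perception close to the reference in spite of the outside influence, without the controller modeling that influence at all.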

The answers to your questions are:

  1. A "goal" is the reference input of a control system.

  2. An "obstacle" is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference.

  3. "Complicated" means "I don't (yet) understand this."

Suggestions for reading.

And a thought: "Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely."

-- William James, "The Principles of Psychology"

Comment author: Karl_Smith 02 March 2010 12:30:16AM *  0 points [-]

Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to IUCS, but perhaps not?

Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.

Comment author: MrHen 01 March 2010 07:07:05PM 1 point [-]

If I were standing there catching the pencil and directing it to the spot on the floor, you wouldn't consider the pencil intelligent. The observed behavior doesn't point to the pencil in particular being intelligent.

Just my two cents.

I don't know anything about the concept of defining intelligence as the ability to pursue goals through complicated obstacles. If I had to guess at the missing piece, it would probably be some form of self-referential goal-making. Namely, this takes the form of the word "want." I want to go to this spot on the floor. I can ignore a goal, but it is significantly harder to ignore a want.

At some point, my wants begin to dictate and create other wants. If I had to start pursuing a definition of intelligence, I would probably start here. But I don't know anything about the field, so this could have already been tried and failed.

Comment author: Karl_Smith 01 March 2010 08:15:06PM *  1 point [-]

Well, I would consider the Pencil-MrHen system to be intelligent. I think further investigation would be required to determine that the pencil is not intelligent when it is not connected to MrHen, but that MrHen is intelligent when not connected to the pencil. It then makes sense to say that the intelligence originates from MrHen.

The problem with the self-referential approach, from my perspective, is that it presumes a self.

It seems to me that ideas like "I" and "want" graft humanness onto other objects.

So, I want to see what happens if I try to divorce myself of all my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then, by a set of criteria, declare that thing to be intelligent.

Comment author: Eliezer_Yudkowsky 01 March 2010 07:35:31PM 3 points [-]

Quick look didn't find it, but I don't see why this follows (and at a wild guess, I'm guessing it doesn't). Can you link?

Comment author: Karl_Smith 01 March 2010 08:07:23PM *  4 points [-]

It doesn't. My thought process was too silly to even bother explaining.

Comment author: Karl_Smith 01 March 2010 06:17:03PM *  0 points [-]

Thoughts about intelligence.

My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.

I've been thinking about the problem of general intelligence. Before going too deeply, I wanted to see if I had a handle on what intelligence is, period.

It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?

Moving alone doesn't count. If I drop the pencil it will fall towards the table. You could say that I caused the pencil to move, but I am not sure this isn't begging the question.

Now suppose the first time I drop the pencil, it falls to the floor. I go to drop it a second time, but I do it over the table. However, the pencil flies around the table and hits the same spot on the floor.

Now it's got my attention. But maybe it's something about the table. So I drop the pencil but put my hand in the way. Still the pencil goes around my hand.

I put my foot over the spot on the floor and drop the pencil. It flies around my foot and then into the crevice between my foot and the floor and gets stuck. As soon as I lift my foot the pencil goes to the same spot.

I believe I should now conclude that my pencil is intelligent. This has something to do with the following facts.

1) The pencil kept going to the same spot, as if it had a "goal."

2) The pencil was able to respond to "obstacles" in ways not predicted by my original simple theory of pencil behavior.

I believe that I would say the pencil is more intelligent if it could pass through more "complicated" obstacles.

Here are some of my basic problems:

1) What is a "goal," beyond what my intuition says?

2) Similarly, what is an "obstacle"?

3) And what is "complicated"?

I have some sense that an "obstacle" is related to reducing the probability that the goal will be reached.

I have some sense that "complicated" has to do with the degree to which the probability is reduced.
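That probability framing can be made concrete with a toy Monte Carlo sketch. Everything here is illustrative (the function name, the idea of summarizing each obstacle by a single pass probability); it is just one way to operationalize "obstacle = something that lowers P(goal)."

```python
import random

def success_rate(pass_probs, trials=10_000, seed=0):
    # A fixed-path agent (like the iron filings) reaches the "goal"
    # only if it happens to pass every obstacle; each obstacle is
    # summarized by an independent pass probability (illustrative).
    rng = random.Random(seed)
    wins = sum(all(rng.random() < p for p in pass_probs)
               for _ in range(trials))
    return wins / trials

baseline = success_rate([])        # no obstacles: the goal is always reached
with_wall = success_rate([0.5])    # one obstacle roughly halves P(goal)
# "Complicated" would then be how far the obstacles push P(goal)
# below the baseline for an agent that cannot modify its path.
```

On this toy account, an intelligent agent is one whose success rate stays near the baseline even as obstacles are added, because it reroutes rather than following a fixed path.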

Thoughts? Suggestions for readings?

Comment author: Eliezer_Yudkowsky 01 March 2010 04:02:57PM 14 points [-]

I haven't taken this position just to be difficult. To look around, the world does appear to be flat, so I think it is incumbent on others to prove decisively that it isn't. And I don't think that burden of proof has been met yet.

-- Daniel Shenton, President of the Flat Earth Society as of 2010

Comment author: Karl_Smith 01 March 2010 05:57:44PM 1 point [-]

I just read their website.

It's embarrassing, but I have to say that honestly the centrifugal force argument never occurred to me before. Rough calculations seem to indicate that a large man (100 kg) should be almost half a pound heavier in the daytime than at night. Kinda cool.
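For what it's worth, the naive arithmetic can be sketched as follows, using standard orbital figures. With these numbers the non-cancelling noon-to-midnight swing comes out nearer a quarter-pound; and as Eliezer notes upthread, the effect is in fact cancelled by the Sun's gravity except for a tiny tidal term, so no home scale would see it.

```python
# Back-of-envelope: the (naive, non-cancelling) centrifugal effect of
# Earth's orbital motion on a 100 kg person. Illustrative only.

v = 2.978e4   # Earth's orbital speed, m/s
r = 1.496e11  # Earth's orbital radius, m
m = 100.0     # mass of the person, kg

a = v**2 / r                            # ~5.9e-3 m/s^2 centripetal accel.
force_newtons = m * a                   # ~0.59 N on a 100 kg person
swing_lbf = 2 * force_newtons / 4.448   # naive noon-to-midnight swing

print(f"a = {a:.2e} m/s^2, naive day-night swing ~ {swing_lbf:.2f} lbf")
```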

Now I am dying to get something big and stable enough to see if my home scale can pick it up.

Comment author: wedrifid 24 February 2010 04:31:26AM 0 points [-]

Does the term 'normative' work in that context?

Comment author: Karl_Smith 24 February 2010 05:25:50PM 1 point [-]

Yes,

I could try to say that my work focuses only on understanding how growth and development take place, for example, but in practice it doesn't work that way.

A conversation with students, policy makers, even fellow economists will not go more than 5-10 minutes without taking a normative tack. Virtually everyone is in favor of more growth, and so the question is invariably, "What should we DO to achieve it?"

Comment author: realitygrill 20 February 2010 04:43:39AM 0 points [-]

Awesome. I'd love to hang with you if I'm there next year; you don't have any connections to BIAC, do you? I just applied for a postbac fellowship there.

What's your specialty in econ?

Comment author: Karl_Smith 21 February 2010 09:45:58PM 0 points [-]

I don't have any connection to BIAC.

My specialty is human capital (education) and economic growth and development.

Comment author: Karl_Smith 19 February 2010 12:23:20AM *  6 points [-]

Name: Karl Smith

Location: Raleigh, North Carolina

Born: 1978

Education: PhD, Economics

Occupation: Professor - UNC Chapel Hill

I've always been interested in rationality and logic but was sidetracked for many (12+) years after becoming convinced that economics was the best way to improve the lives of ordinary humans.

I made it to Less Wrong completely by accident. I was into libertarianism, which led me to Bryan Caplan, which led me to Robin Hanson (just recently). Some of Robin's stuff convinced me that cryonics was a good idea. I searched for cryonics and found Less Wrong. I have been hooked ever since. About two weeks now, I think.

Also, skimming this I see there is a 14-year-old on this board. I cannot tell you how that makes me burn with jealousy. To have found something like this at 14! Soak it in, Ellen. Soak it in.

Comment author: Kevin 17 February 2010 08:18:50AM 3 points [-]
Comment author: Karl_Smith 19 February 2010 12:04:02AM 0 points [-]

I am nowhere near caught up on the FAI readings, but here is a humble thought.

What I have read so far seems to assume a single-jump FAI. That is, once the FAI is set, it must take us to where we ultimately want to go without further human input. Please correct me if I am wrong.

What about a multistage approach?

The problem that people might immediately bring up is that a multistage approach might lead to elevating subgoals to goals. We say, "Take us to mastery of nanotech," and the AI decides to rip us apart and organize all existing ribosomes under a coherent command.

However, perhaps what we need to do is verify that any intermediate goal state is better than the current state.

So what if we have the AI guess a goal state, then simulate that goal state and expose some subset of humans to the simulation? The AI then asks, "Proceed to this stage or not?" The humans answer.

Once in the next stage we can reassess.

To give a sense of motivation: it seems that verifying the goodness of a future state is easier than trying to construct the basic rules of good-statedness.
