Comment author: eirenicon 01 September 2009 03:48:59PM *  5 points [-]

Readers of Less Wrong may be interested in this New Scientist article by Noel Sharkey, titled Why AI is a dangerous dream, in which he attacks Kurzweil's and Moravec's "fairy tale" predictions and questions whether intelligence is computational ("[the mind] could be a physical system that cannot be recreated by a computer").

[edit] I thought this would go without saying, but I suspect the downvotes speak otherwise, so: I strongly disagree with the content of this article. I still consider it interesting because it is useful to be aware of differing and potentially popular perspectives on these subjects (and Sharkey is something of a "populist" scientist). I think the opinions it espouses are staggeringly ill-conceived, however.

Comment author: HalFinney 02 September 2009 11:50:33PM 3 points [-]

"[the mind] could be a physical system that cannot be recreated by a computer"

Let me quote an argument in favor of this, despite the apparently near universal consensus here that it is wrong.

There is a school of thought that says, OK, let's suppose the mind is a computation, but it is an unsolved problem in philosophy how to determine whether a given physical system implements a given computation. In fact there is even an argument that a clock implements every computation, and it has yet to be conclusively refuted.
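The clock argument (versions of this triviality worry are due to Putnam and Searle, with the clock variant discussed by Chalmers) can be made concrete: for any finite run of a computation, a system that merely ticks through distinct states can be paired with the computation's states by an arbitrary mapping, so a naive definition of "implements" is vacuous. A minimal sketch, with made-up state labels:

```python
# A "computation": a finite run of a state machine, e.g. a little counter.
computation_trace = ["s0", "s1", "s2", "s0"]

# A "physical system": a clock that just ticks through distinct states.
clock_trace = [("tick", t) for t in range(len(computation_trace))]

# The Putnam-style move: because the clock never repeats a state, we can
# freely define a mapping that pairs each clock state with whatever
# computational state occurs at the same step.
interpretation = dict(zip(clock_trace, computation_trace))

# Under this gerrymandered mapping, the clock "implements" the run exactly.
mapped = [interpretation[state] for state in clock_trace]
assert mapped == computation_trace
print("Clock 'implements' the computation:", mapped)
```

The standard responses (e.g. requiring that the mapping also support counterfactual state transitions, not just one actual run) try to rule out such gerrymandered interpretations; the open question raised here is whether any such criterion fully succeeds.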

If the connection between physical systems and computation is intrinsically uncertain, then we can never say with certainty that two physical systems implement the same computation. In particular, we can never know that a given computer program implements the same computation as a given brain.

Therefore we cannot, in principle, recreate a mind on a computer; at least, not reliably. We can guess that it seems pretty close, but we can never know.

If LessWrongers have solved the problem of determining what counts as instantiating a computation, I'd like to hear more.

Comment author: HalFinney 26 August 2009 10:03:09PM 2 points [-]

Two comments. First, your point about counterfactuals is very valid. Hofstadter wrote an essay about how we tend to automatically only consider certain counterfactuals, when an infinite variety are theoretically possible. There are many ways that the world might be changed so that Joe one-boxes. A crack in the earth might open and swallow one box, allowing Joe to take only the other. Someone might have offered Joe a billion dollars to take one box. Joe might aim to take two but suffer a neurological spasm which caused him to grasp only one box and then leave. And so on. Counterfactuals are a weak and uncertain tool.

My second point is with regard to determinism. What if the world in general, and Joe in particular, is nondeterministic? What if QM is true but the MWI is not, or some other form of nondeterminism prevails? Ideally, you should not base your analysis on the assumption of determinism.

In response to Suffering
Comment author: HalFinney 04 August 2009 06:18:15AM 2 points [-]

We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligence and consciousness, independent or intrinsically linked?

Acknowledging the limited value of introspection, I can nevertheless remember times when I was close to experiencing "pure consciousness", with no conscious problem-solving activity at all. Perhaps I was entranced by a beautiful sunset, or a haunting musical performance. My whole being seemed to be pure experience, pure consciousness, with no particular need for intelligence, Bayesian optimization, goal satisfaction, or any of the other paraphernalia we associate with intelligence. This suggests to me that it is at least plausible that consciousness does not require intelligence.

In the other direction, the idea of an intelligent problem solver devoid of consciousness is an element in many powerful fictional dystopias. Even Eliezer's paperclip maximizer partakes of this trope. It seems that we have little difficulty imagining intelligence without consciousness, without awareness, sentience, qualia, the ability to suffer.

If we provisionally assume that the two qualities are independent, it raises the question of how we might program consciousness (even if we only want to know how in order to avoid doing it accidentally). Is it possible that even relatively simple programs may be conscious, may be capable of feeling real pain and suffering, as well as pleasure and joy? Is there any kind of research program that could shed light on these questions?

In response to Suffering
Comment author: HalFinney 04 August 2009 06:06:58AM 5 points [-]

Reading the comments here, there seem to be two issues entangled. One is which organisms are capable of suffering (which is probably roughly the same set that is capable of experiencing qualia; we might call this the set of sentient beings). The other is which entities we would care about and perhaps try to help.

I don't think the second question is really relevant here. It is not the issue Tiiba is trying to raise. If you're a selfish bastard, or a saintly altruist, fine. That doesn't matter. What matters is what constitutes a sentient being which can experience suffering and similar sensations.

Let us try to devote our attention to this question, and not the issue of what our personal policies are towards helping other people.

Comment author: CronoDAS 01 August 2009 04:05:07AM *  5 points [-]

On the other hand, for all we know, since the laws of physics in this universe allow for magic, the spell might actually do what the Council thinks it does: summon a hero who brings along the proper kind of luck for getting through the current crisis.

"I summon Deus Ex Machina!"

I know what Eliezer intended the story to mean, but narrative causality seems like a more likely culprit than the anthropic principle for this particular world's survival. Considering this is a world in which the events of Lord of the Rings actually happened, if I were the hero, I'd be assuming that there's a writer of some kind involved.

Comment author: HalFinney 01 August 2009 04:32:50AM 4 points [-]

I thought maybe we were hearing about the LOTR story through something like the chronophone - the translation into English also translated the story into something analogous for us.

Comment author: HalFinney 31 July 2009 10:09:15PM 17 points [-]

I remember reading once about an experiment that was said to make rats superstitious.

These rats were used in learning experiments. They would be put into a special cage and they'd have to do something to get a treat. Maybe they'd have to push a lever, or go to a certain spot. But they were pretty good at learning whatever they had to do. They were smart rats. They knew the score, they knew what the cage was for.

So they did a new experiment, where they put the rats into the training cage as usual. But instead of the treat depending on what they did, a treat always arrived exactly 30 seconds after they entered the cage. This continued for a while, and what happened was that each rat learned an individual behavior to bring the treat. One would go to a corner, another would turn in circles, another would stand up on its hind feet. And sure enough, the treat came. Their trick worked.

I imagine the society in Eliezer's story had something similar happen. Given the anthropic effect we are postulating, they don't actually have to do anything - a certain fraction of the worlds will get lucky and survive. But after it happens a few times, the survivors may well assume that what they were doing at the time their "luck" arrived was causative. In this case, they had a hero who seemed to get lucky. Maybe several heroes. And then somewhere they got the idea of summoning them from other worlds. After all, if they need a lucky hero to save them, they should get the luckiest heroes they can find. (I wonder what this hero had done to earn their selection?)

But there would be just as many other worlds, even instances of the exact same world, which have developed their own superstitions about what defeats the evil. They each carry out their rituals, and in each case, it works - for the survivors. We just choose to eavesdrop on a world which had a particularly interesting and amusing superstition.
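The selection effect described above is easy to simulate. In the sketch below (the rituals, world count, and survival probability are all made up for illustration), every world adopts a random "ritual" and survives each crisis with the same fixed probability regardless of ritual, yet each surviving world has observed a perfect track record for its own ritual:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

RITUALS = ["summon hero", "corner dance", "spin in circles", "stand upright"]
N_WORLDS = 10_000
N_CRISES = 5
P_SURVIVE = 0.5  # survival is pure luck, independent of the ritual chosen

survivors = []
for _ in range(N_WORLDS):
    ritual = random.choice(RITUALS)
    # A world survives only if it gets lucky in every crisis.
    if all(random.random() < P_SURVIVE for _ in range(N_CRISES)):
        survivors.append(ritual)

# Every surviving world has seen its ritual "work" 5 times out of 5,
# so from the inside, the ritual looks perfectly reliable.
print(f"{len(survivors)} of {N_WORLDS} worlds survived all {N_CRISES} crises")
for r in RITUALS:
    print(f"  {r!r}: {survivors.count(r)} surviving worlds")
```

About 10,000 × 0.5⁵ ≈ 312 worlds survive, split roughly evenly across the rituals; no ritual is causally better than any other, but no surviving observer has ever seen theirs fail.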

Comment author: Wei_Dai 31 July 2009 06:03:36AM 2 points [-]

"Ah..." Ghufhus said, puzzled. "How do you... know all this? Is there a Counter-Force in your own world?"

Fool, Aerhien thought to herself. The hero had seemed puzzled by the idea, at first, and had needed to ask for examples. She decided then and there that Ghufhus would meet with an accident before the next council meeting; their world had no room for stupid Eerionnath.

That's a bit harsh, don't you think? Maybe the Counter-Force is known by a different name in the Hero's own world, and he asked for examples to make sure they're the same thing.

Actually, why doesn't the Hero's world have a Counter-Force? Shouldn't every world have something like it? How many times has our world escaped from the brink of nuclear annihilation, for example?

Comment author: HalFinney 31 July 2009 09:58:55PM 3 points [-]

Actually, why doesn't the Hero's world have a Counter-Force? Shouldn't every world have something like it? How many times has our world escaped from the brink of nuclear annihilation, for example?

Right, like the way the LHC keeps breaking before they can turn it on and have it destroy the universe. Sooner or later we'll figure out what's happening.
