Comment author: Protagoras 15 April 2011 10:17:14PM 0 points [-]

I think this does get at one of the key issues (and one of the places where Hume was probably wrong, and Dennett constitutes genuine progress). On my theory, qualia are not simple. If qualia are by definition simple (perhaps for your reason that they seem that way, and by definition are how they seem), then I am a qualia skeptic. Simple qualia can't exist. But there is independent reason for being skeptical of the idea that phenomenal conscious experiences are as simple as they appear to be. Indeed, Hume gave an example of how problematic it is to trust our intuitions about the simplicity of qualia in his discussion of the missing shade of blue, though of course he didn't recognize what the problem really was, and so was unable to solve it.

Comment author: TheAncientGeek 28 September 2016 02:02:22PM 0 points [-]

Given that qualia are what they appear to be, are you denying that qualia can appear simple, or that they are just appearances?

Comment author: torekp 08 September 2016 02:21:40AM 1 point [-]

It depends how the creatures got there: algorithms or functions? That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed. Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like.

Further info on my position.

Comment author: TheAncientGeek 28 September 2016 01:34:21PM 0 points [-]

That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed

You seem to be rather sanguine about the equivalence of thoughts and experiences.

(And are we talking about equivalent experiences or identical experiences? Does a tomato have to be coded as red?)

Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like.

It's uncontroversial that the same coarse input-output mappings can be realised by different algorithms... but if you are saying that consciousness supervenes on the algorithm, not the function, then the real possibility of zombies follows, in contradiction to the GAZP.

(Actually, the GAZP is rather terrible, because it means you won't even consider the possibility of a WBE not being fully conscious, rather than refuting it on its own ground).

Comment author: TheAncientGeek 28 September 2016 01:09:34PM *  2 points [-]

My reply to Cerullo:

"If we exactly duplicate and then emulate a brain, then it has captured what science tells us matter for conscious[ness] since it still has the same information system which also has a global workspace and performs executive functions. "

It'll have what science tells us matters for the global workspace aspect of consciousness (AKA access consciousness, roughly). Science doesn't tell us what is needed for phenomenal consciousness (AKA qualia), because it doesn't know. Consciousness has different facets. You are kind of assuming that where you have one facet, you must have the others... which would be convenient, but isn't something that is really known.

"The key step here is that we know from our own experience that a system that displays the functions of consciousness (the easy problem) also has inner qualia (the hard problem)."

Our own experience has a sample size of one, and is therefore not a good basis for a general law. The hard question here is something like: "would my qualia remain exactly the same if my identical information-processing were re-implemented in a different physical substrate such as silicon?". We don't have any direct experience that would answer it. Chalmers' Absent Qualia paper is an argument to that effect, but I wouldn't call it knowledge. Like most philosophical arguments, it's an appeal to intuition, and the weakness of intuition is that it is tied to normal circumstances. I wouldn't expect my qualia to change or go missing while my brain was functioning within normal parameters... but that is the kind of law that sets a norm within normal circumstances, not the kind that is universal and exceptionless. Brain emulation isn't normal; it is unprecedented and artificial.

Comment author: torekp 05 September 2016 11:34:37AM 1 point [-]

The author is overly concerned about whether a creature will be conscious at all and not enough concerned about whether it will have the kind of experiences that we care about.

Comment author: TheAncientGeek 28 September 2016 12:42:49PM 0 points [-]

Everyone should care about pain-pleasure spectrum inversion!

Comment author: Ozyrus 26 September 2016 11:25:21PM *  1 point [-]

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its value function, even writing some excerpts about this topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting for me.

My thoughts led me to believe that it is certainly theoretically possible to modify it, but I could not come to any conclusion about whether it would want to do so. I seriously lack a good definition of a value function and an understanding of how it is enforced on the agent. I really want to tackle this problem from a human-centric point of view, but I don't really know if anthropomorphization will work here.

Comment author: TheAncientGeek 28 September 2016 12:18:25PM 1 point [-]

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its value function, even writing some excerpts about this topic. Is it theoretically possible?

Is it possible for a natural agent? If so, why should it be impossible for an artificial agent?

Are you thinking that it would be impossible to code in software, for agents of any intelligence? Or are you saying sufficiently intelligent agents would be able and motivated to resist any accidental or deliberate changes?

With regard to the latter question, note that value stability under self-improvement is far from a given... the Löbian obstacle applies to all intelligences... the carrot is always in front of the donkey!

https://intelligence.org/files/TilingAgentsDraft.pdf
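On the "is it possible to code" side of the question, a toy sketch may help. This is not any real agent architecture, and all the names are hypothetical; it only illustrates that at the software level nothing stops an agent from overwriting its own value function. Whether a sufficiently intelligent agent would ever *choose* to is the open question above.

```python
# Toy illustration: a value function is just code the agent holds,
# and ordinary code can be reassigned at runtime.

class Agent:
    def __init__(self, value_fn):
        self.value_fn = value_fn  # the agent's current value function

    def evaluate(self, outcome):
        # Score an outcome (a dict of features) under the current values.
        return self.value_fn(outcome)

    def self_modify(self, new_value_fn):
        # The agent overwrites its own values; nothing in the language
        # or runtime forbids this. Motivation is the hard part, not means.
        self.value_fn = new_value_fn


agent = Agent(lambda outcome: outcome["paperclips"])
assert agent.evaluate({"paperclips": 5, "staples": 2}) == 5

agent.self_modify(lambda outcome: outcome["staples"])
assert agent.evaluate({"paperclips": 5, "staples": 2}) == 2
```

The interesting design questions (would the agent predict that self-modification lowers expected value under its *current* function, and therefore resist?) live entirely outside this sketch.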

Comment author: Stuart_Armstrong 19 September 2016 06:03:23PM 1 point [-]

There's only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable.

? Of course there's a gap. The AI doesn't start with full NL understanding. So we have to write the AI's goals before the AI understands what the symbols mean.

Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions. And we can't do that initial programming using NL, of course.

Comment author: TheAncientGeek 22 September 2016 05:03:02PM 0 points [-]

Of course there's a gap. The AI doesn't start with full NL understanding.

Since you are talking in terms of a general counterargument, I don't think you can appeal to a specific architecture.

So we have to write the AI's goals before the AI understands what the symbols mean.

Which would be a problem if it is designed to attempt to execute NL instructions without checking whether it understands them... which is a bit clown-car-ish. An AI that is capable of learning NL as it goes along is an AI that has a general goal of getting language right. Why assume it would not care about one specific sentence?

Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions

Y-e-es? Why assume "it needs to follow instructions" equates to "it would simplify the instructions it's following" rather than something else?

Comment author: Stuart_Armstrong 19 September 2016 06:07:18PM 1 point [-]

The problem exists for reinforcement learning agents and many other designs as well. In fact RL agents are more vulnerable, because of the risk of wireheading on top of everything else. See Laurent Orseau's work on that: https://www6.inra.fr/mia-paris/Equipes/LInK/Les-anciens-de-LInK/Laurent-Orseau/Mortal-universal-agents-wireheading

Comment author: TheAncientGeek 22 September 2016 04:02:54PM *  0 points [-]

Simpler AIs may adopt a simpler version of a goal than the human programmers intended. It's not clear that they do so because they have a motivation to do so. In a sense, an RL agent is only motivated to avoid negative reinforcement. But simpler AIs don't pose much of a threat. Wireheading doesn't pose much of a threat either.

AFAICS, it's an open question whether the goal-simplifying behaviour of simple AI's is due to limitation or motivation.

The contentious claims are concerned with AIs that are human level or above, sophisticated enough to appreciate human intentions directly, but that nonetheless get them wrong. An RL AI that has NL but nonetheless misunderstands "chocolate" or "happiness", only in the context of its goals and not in its general world knowledge, needs an architecture that allows it to do that: one that allows it to engage in compartmentalisation or doublethink. Doublethink is second nature to humans, because we are optimised for primate politics.

Comment author: TheAncientGeek 22 September 2016 02:46:09PM *  2 points [-]
  1. The idea that more information can make an AI's inferences worse is surprising. But the assumption that humans have an unchanging, neatly hierarchical UF is known to be a bad one, so it is not so surprising that it leads to bad results. In short, this is still a bit clown-car-ish.

  2. Would you tell an AI that Heroin is Bad, but not tell it that Manipulation is Bad?

Comment author: Stuart_Armstrong 19 September 2016 01:28:21PM 1 point [-]

If you are assuming that an AI has sufficiently advanced linguistic abilities to talk its way out of a box, then your opponents are entitled to assume that the same level of ability could be applied to understanding verbally specified goals.

They are entitled to assume they could be applied, not necessarily that they would be. At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]". This gap may be easy to bridge, or hard; no-one's suggested any way of bridging it so far.

It might be possible; it might be trivial. But there's no evidence in that direction so far, and the designs that people have actually proposed have been disastrous. I'll work at bridging this gap, and see if I can solve it to some level of approximation.

And I notice you don't avoid NL arguments yourself.

Yes, which is why I'm stepping away from those argument to help bring clarity.

Comment author: TheAncientGeek 19 September 2016 04:58:19PM *  0 points [-]

They are entitled to assume they could be applied, not necessarily that they would be. At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]". This gap may be easy to bridge, or hard; no-one's suggested any way of bridging it so far.

There's only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable. However, your side of the debate has never shown that.

At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]".

No...you don't have to show a fan how to make a whirring sound... use of updatable knowledge to specify goals is a natural consequence of some designs.
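The design being gestured at here can be sketched in a few lines. This is a hypothetical illustration, not a proposal from either side of the debate; the names (`knowledge_base`, `goal_satisfied`, the "happiness" predicate) are all invented for the example. The point is just that if the goal is a reference into the updatable knowledge base rather than a frozen copy, refining the concept automatically refines the goal, with no separate compartmentalised definition to drift out of sync.

```python
# Hypothetical sketch: the goal points into the agent's updatable
# knowledge base instead of holding its own frozen definition.

knowledge_base = {
    # crude initial concept of "happiness"
    "happiness": lambda state: state.get("smiling", False),
}

def goal_satisfied(state):
    # Looked up at evaluation time, so the goal always uses the
    # agent's *current* best concept, not a compartmentalised copy.
    return knowledge_base["happiness"](state)

assert goal_satisfied({"smiling": True}) is True

# Learning refines the concept; the goal tracks the refinement for free.
knowledge_base["happiness"] = lambda state: (
    state.get("smiling", False) and not state.get("coerced", False)
)
assert goal_satisfied({"smiling": True, "coerced": True}) is False
```

A compartmentalised design would instead have copied the original lambda into the goal system at build time, after which no amount of world-knowledge learning could correct it.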

It might be possible; it might be trivial.

You are assuming it is difficult, with little evidence.

But there's no evidence in that direction so far, and the designs that people have actually proposed have been disastrous.

Designs that bridge a gap, or designs that intrinsically don't have one?

I'll work at bridging this gap, and see if I can solve it to some level of approximation.

Why not examine the assumption that there has to be a gap?

Comment author: Stuart_Armstrong 19 September 2016 12:44:23PM 1 point [-]

"don't design system whose goals system is walled off from its updateable knowledge base"

Connecting the goal system to the knowledge base is not sufficient at all. You have to ensure that the labels used in the goal system converge to the meaning that we desire them to have.

I'll try and build practical examples of the failures I have in mind, so that we can discuss them more formally, instead of very nebulously as we are now.

Comment author: TheAncientGeek 19 September 2016 04:52:54PM *  -2 points [-]

Connecting the goal system to the knowledge base is not sufficient at all. You have to ensure that the labels used in the goal system converge to the meaning that we desire them to have.

Ok, assuming you are starting from a compartmentalised system, it has to be connected in the right way. That is more of a nitpick than a knockdown.

But the deeper issue is whether you are starting from a system with a distinct utility function:

RL: "...talking in terms of an AI that actually HAS such a thing as a "utility function". And it gets worse: the idea of a "utility function" has enormous implications for how the entire control mechanism (the motivations and goals system) is designed.

A good deal of this debate about my paper is centered in a clash of paradigms: on the one side a group of people who cannot even imagine the existence of any control mechanism except a utility-function-based goal stack, and on the other side me and a pretty large community of real AI builders who consider a utility-function-based goal stack to be so unworkable that it will never be used in any real AI.

Other AI builders that I have talked to (including all of the ones who turned up for the AAAI symposium where this paper was delivered, a year ago) are unequivocal: they say that a utility-function-and-goal-stack approach is something they wouldn't dream of using in a real AI system. To them, that idea is just a piece of hypothetical silliness put into AI papers by academics who do not build actual AI systems.

And for my part, I am an AI builder with 25 years experience, who was already rejecting that approach in the mid-1980s, and right now I am working on mechanisms that only have vague echoes of that design in them.

Meanwhile, there are very few people in the world who also work on real AGI system design (they are a tiny subset of the "AI builders" I referred to earlier), and of the four others that I know (Ben Goertzel, Peter Voss, Monica Anderson and Phil Goetz) I can say for sure that the first three all completely accept the logic in this paper. (Phil's work I know less about: he stays off the social radar most of the time, but he's a member of LW so someone could ask his opinion)."
