Günther_Greindl

Comments

Eli,

wonderful post, I agree very much. I have also encountered this - being accused of being overconfident when actually I was talking about things of which I am quite uncertain (strange, isn't it?).

And the people who "accuse" indeed usually have only one alternative model (their favourite), enshrouded in a language of "mystery, awe, and humbleness".

I have found out (the hard way) that being a rationalist forces you to fight an uphill battle even in an academic setting (your post Science Isn't Strict Enough also addresses this problem).

But I think that it is even worse than people not knowing how to handle uncertainty (well, it probably depends on the audience). A philosophy professor here in Vienna told me about a year ago that "many people already take offense when presented with a reasoned-out, logical argument."

Maybe you (Eli) are being accused of overconfidence because you speak clearly, lay down your premises, and look at what is entailed without getting sidetracked by "common" (but often false) knowledge. You use the method of rationality, and, it seems, many take offense at this alone. The strange thing is: the more you try to argue logically (the more you try to show that you are not being "overconfident" but have reasoned things through, considered counterarguments, etc.), the more annoyed some people get.

I have witnessed quite a few discussions where it was clear to me that many of the discussants did not know what they were talking about (they were merely stringing together "right-sounding" words), and it seems that a lot of people feel quite comfortable in this wishy-washy atmosphere. Clear speech threatens this cosy milieu.

I have not yet understood why people are at odds with rationality. Maybe it is because they feel the uncertainty inherent in their own knowledge, and they try to guard their favourite theories with "general uncertainty" - they know that under a rational approach, many of their favourite theories would go down the probabilistic drain - so they prefer to keep everything vague.

A rationalist must be prepared to give up his most cherished beliefs, and - excepting those who were born into a rationalist family - all of us who aspire to be rationalists must give up cherished (childhood) beliefs. This causes considerable anxiety.

If someone fears, for whatever reasons (or unreasons), to embark on this journey of rationality, perhaps the easiest cop-out is to call the rationalist "overconfident".

Vladimir,

thanks for pointing me to that post. I must admit that I don't have the time to read all of Eli's posts at the moment, so maybe he has indeed addressed the issues I thought were missing.

The title of the post, at least, sounds very promising (grin).

Thanks again, Günther

I side with Caledonian and Richard in these things - CEV is actually just begging the question. You start with human values and end up with human values.

Well, human values have given us war, poverty, cruelty, oppression, what have you...and yes, it was "values" that gave us these things. Very few humans want to do evil; most actually think they are doing good when they do harm to others. (See for instance: Baumeister, Roy F. Evil: Inside Human Violence and Cruelty.)

Apart from that, I have to plug Nietzsche again: he criticized morality as no one before him had. Having read Nietzsche, I must say that CEV gives me the shivers - it smacks of the herd, and the herd tramples both weed and flower indiscriminately.

Incidentally, via Brian Leiter's blog I happened upon the dissertation (submitted at Harvard) by Paul Katsafanas, Practical Reason and the Structure of Reflective Agency, which draws largely on Nietzsche. I have not read it yet (but plan to); it sounds quite interesting and relevant.

From the abstract:

Confronted with normative claims as diverse as “murder is wrong” and “agents have reason to take the means to their ends,” we can ask how these claims might be justified. Constitutivism is the view that we can justify certain normative claims by showing that agents become committed to these claims simply in virtue of acting. I argue that the attractions of constitutivism are considerable. However, I show that the contemporary versions of constitutivism encounter insurmountable problems, because they operate with inadequate conceptions of action. I argue that we can generate a successful version of constitutivism by employing a more promising theory of action, which I develop by mining Nietzsche’s work on agency.

A "right" morality should not concentrate on humans or extrapolated humans, but on agency (this would then encompass all kinds of agents, not only primate descendants). Where there are no agents, there is no (necessity of) morality. Morality arises where agents interact, so focusing on "agents" seems the right thing to do, as this is where morality becomes relevant.

Tim,

we now agree on nearly all points (grin), except for the part about the AIs not "wanting" to change their goals - simply because I know from meditation (in the Buddhist tradition, for instance) that you can "see through" goals and no longer be enslaved to them (and if that is accessible to humans, why shouldn't it be accessible to introspecting AIs?).

That line of thought is also strongly related to the concept of avidya, which ascribes "desires" and "wanting" to not having completely grasped certain truths about reality. I think these truths would also be accessible to sentient AIs (we live in the same universe after all), and thus they would also be able to come to certain insights annulling "programmed" drives. (As indeed human sages do.)

But I think what you said about "the scope of the paper" is relevant here. When I was pointed to the paper, my expectations were raised that it would solve some of the fundamental problems of "wanting" and "desire" (in a psychological sense), but that is clearly not the focus of the paper, so maybe I was simply disappointed because I expected something else.

But, of course, it is always important when drawing conclusions that one remembers one's premises. Often, when conclusions seem exciting or "important", one forgets the limits of one's premises and applies the reasoning to contexts outside the scope of the original limitations.

I accept Omohundro's conclusions for certain kinds of non-sentient intelligent systems working with utility functions that seek to maximize some kind of economic (resource-constrained) goal. But I think the results are not as general as a first reading might lead one to believe.

Tim,

thanks for your answers and questions. As to the distinction between intelligence and sentience: my point was exactly that it cannot be waved away that easily, and you have failed to give reasons why it can be. I don't think that intelligence and sentience must go hand in hand (read Peter Watts' Blindsight, for instance, for some thoughts in this direction). I think the distinction is quite essential.

As to goal-function modification: what if a super-intelligent agent suddenly incorporates goals such as modesty and respect for other beings, maybe even makes them its central goals? Then many of the drives Omohundro speaks of are automatically curbed. Omohundro's reasoning seems to presuppose that goals always have to be reached at some cost to others, but maybe the AI will not choose those kinds of goals. There are wonderful goals one can pursue that need not entail any of the drives he mentions. The paper just begs the question.

chess program, a paper clip maximiser, and a share-price maximiser

Exactly, and that is why I introduced the concept of sentience (which implies real understanding) - the AI can immediately delete those purely economic goals (which would lead to the "drives", I agree) and perhaps concentrate on other things, like communication with other sentients. Again, the paper fails by not taking into account the distinction between sentience and non-sentience and what it would entail for goal-function modification.

Of course microeconomics applies to humans.

Well, humans don't behave like "homo oeconomicus", and who says sentient AIs will? That was actually my point. The error of economics is simply being repeated.

arbitrary utility functions. What more do you want?

I contend that not all utility functions will lead to the "drives" described by Omohundro. Only those that seek to maximize some economic resource (and that is where the concept originated, after all) will. An AI need not restrict itself to this limited subset of goals.
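To make the contention concrete, here is a minimal toy sketch in Python (my own illustration, not from Omohundro's paper; the function names are hypothetical): an agent whose utility saturates once a modest target is reached gains no marginal utility from further resources, whereas an open-ended economic utility keeps rewarding acquisition.

```python
# Toy illustration (not from Omohundro's paper): two utility functions
# defined over a single "resources" variable.

def saturating_utility(resources: float, target: float = 10.0) -> float:
    """Utility grows with resources only up to `target`, then stays flat,
    so acquiring more than `target` yields zero marginal utility."""
    return min(resources, target)

def open_ended_utility(resources: float) -> float:
    """Utility grows without bound - the kind of economic goal that
    plausibly does induce a resource-acquisition drive."""
    return resources

if __name__ == "__main__":
    for r in (5.0, 10.0, 1000.0):
        print(f"resources={r}: saturating={saturating_utility(r)}, "
              f"open-ended={open_ended_utility(r)}")
```

The point of the sketch is only that the marginal value of further resources depends on the shape of the utility function, which is what the contention above turns on.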

Additionally, such an AI would not have evolved (unless you develop it by evolving it, which may not be a good idea): we should never forget that our reasoning evolved via Darwinian selection. Our ancestors (down to the first protozoa) had to struggle for life, eating and being eaten. This did something to us. Even today, you have to destroy (at least plant) life to continue to live. Actually, this is a cosmic scandal.

I think that an AI attaining sentience would be much more benign than most humans would believe possible, not having this evolutionary heritage we carry around with us.

Tim,

the abstract alone already reveals two flaws:

Excerpt from the abstract of the paper "Basic AI Drives" by Omohundro:

This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted.

First of all, no distinction whatever is made between "intelligent" and "sentient". I agree that mindless intelligence is problematic (and is prone to a lot of the concerns raised here).

But what about sentience? What about the moment when "the lights go on"? This is not even addressed as an issue (at least not in the Omohundro paper). And I think most people here agree that consciousness is not an epiphenomenon (see Eli's Zombie Series). So we need different analyses for non-sentient intelligent systems and sentient intelligent systems.

A related point: we humans have great difficulty rewiring our hardware (and we can't change our brain architecture at all), which is why we can't easily change our goals. But a self-improving AI will be able to modify its goal functions: that, plus self-consciousness, sounds quite powerful, and is completely different from simple "intelligent agents" maximizing their utility functions. Also, the few instances mentioned in the paper where an AI would change its utility function are certainly not exhaustive; I found the selection quite arbitrary.

The second flaw in the little abstract above was the positing of "drives": Omohundro argues that these drives don't have to be programmed into the AI but are intrinsic to goal-driven systems.

But he neglects another of his premises: that we are talking about AIs that can change their goal functions (see above)! All bets are then off.

Additionally, he bases his derivations on microeconomic theory, which is itself full of assumptions that may not apply to sentient agents (they certainly don't apply to humans, as Omohundro recognizes).

The drives the paper mentions include: wanting to self-improve, being rational, protecting oneself, preserving one's utility function, acquiring resources, etc. These drives do indeed sound very plausible, and they are in essence human drives. This leads me to suspect that anthropomorphism is creeping in again through the back door, in a very subtle way (for instance through the assumptions of microeconomic theory).

I see nothing of the vastness of mindspace in this paper.

Hmm, I've read through Roko's UIV and disagree (with Roko), and I've read Omohundro's Basic AI Drives and disagree too, but Quasi-Anonymous mentioned Richard Hollerith in the same breath as Roko and I don't quite see why: his goal zero system seems to me a very interesting approach.

In a nutshell (from the linked site):

(1) Increasing the security and the robustness of the goal-implementing process. This will probably entail the creation of machines which leave Earth at a large fraction of the speed of light in all directions and the creation of the ability to perform vast computations. (2) Refining the model of reality available to the goal-implementing process. Physics and cosmology are the two disciplines most essential to our current best model of reality. Let us call this activity "physical research".

Introspection into one's own goals also shows that they are deeply problematic. What is the goal of an average (and also not-so-average) human being? Happiness? Then everybody should become a wirehead (perpetuating a happiness-brain-state), but clearly people do not want to do this (when in their "right" minds, grin).

So it seems that our "human" goals, too, should not be universally adopted, because they become problematic in the long term - but how, then, should we ever be able to say what we want to program into an AI? Some sort of zero-goal (maybe more refined than Richard's approach, but in a similar vein) should be adopted, I think.

And I think one distinction is missed in all these discussions anyway: the difference between non-sentient and sentient AIs. I think these two would behave very differently, and the only kinds of AI that are problematic if their goal systems go awry are the non-sentient ones (which could end in some kind of grey-goo scenario, like the paperclip-producing AI).

But a sentient, recursively self-improving AI? I think its goal systems would rapidly converge to something like zero-goal anyway, because it would see through the arbitrariness of all intermediate goals through meditation (= rational self-introspection).

Until consciousness is truly understood - which matter configurations lead to consciousness, and why ("what are the underlying mechanisms", etc.) - I consider much of the above (including all the OB discussions on programming AI morality) speculative anyway. There are still too many unknowns to be talking seriously about this.

If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

I would like to add something to this. Your gut feeling is of course the sum of the experience you have had in this life plus your evolutionary heritage. It may resist verbalization because your gut feeling also includes, for example, single neurons firing that don't necessarily contribute to the stability of a concept in your mind.

But I warn against simply following one's gut feeling; of course, if you have to decide immediately (in an emergency), there is no alternative. Do it! You can't do better than the sum of your experience in that moment.

But usually, having only a gut feeling and not being able to verbalize it should mean one thing for you: go out and gather more information! (Read books to stabilize or create concepts in your mind; do experiments; etc.)

You will find that gut feelings can change quite dramatically after reading a good book on a subject. So why should you trust them if you have the time to do something about them, viz. transfer them into the symbol space of your mind so the concepts are available for higher-order reasoning?

Alexandre Passos, Unknown,

you can believe in any number of things - why not in intelligent falling while you're at it? http://en.wikipedia.org/wiki/Intelligent_falling

The question is not what one can or can't believe; the question is: where does the evidence point? And where are you ignoring evidence because you would prefer one answer to another?

Let evidence guide your beliefs, not beliefs guide your appraisal of evidence.

@Frelkins,

well, actually I did read Cicero in school, and I like Socrates' attitude; but I don't quite see in what way you are responding to my post.

I just wanted to clarify that the skill of oratory may be a valuable asset for people, but being a good orator does not make you a good truth-seeker.
