StefanPernar comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

Post author: MichaelGR 11 November 2009 03:00AM


Comments (682)


Comment author: StefanPernar 15 November 2009 01:46:06AM 0 points [-]

Yes - I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to search for my blog for a detailed analysis and a summary of the core similarities and differences in our premises and conclusions.

Comment author: AdeleneDawner 15 November 2009 02:00:21AM *  6 points [-]

Assuming I have the correct blog, these two are the only entries that mention Eliezer by name.

Edit: The second entry doesn't mention him, actually. It comes up in the search because his name is in a trackback.

Comment author: timtyler 15 November 2009 10:36:34AM *  5 points [-]

Re: "Assumption A: Human (meta)morals are not universal/rational. Assumption B: Human (meta)morals are universal/rational.

Under assumption A one would have no chance of implementing any moral framework into an AI since it would be undecidable which ones they were." (source: http://rationalmorality.info/?p=112)

I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals.

I had a look at some of the other material. IMO, Stefan acts in an authoritative manner, but comes across as a not-terribly-articulate newbie on this topic - and he has adopted what seems to me to be a bizarre and indefensible position.

For example, consider this:

"A rational agent will always continue to co-exist with other agents by respecting all agents utility functions irrespective of their rationality by striking the most rational compromise and thus minimizing opposition from all agents." http://rationalmorality.info/?p=8

Comment author: StefanPernar 16 November 2009 12:09:48PM *  1 point [-]

"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals."

Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique.

I realize that one will have to drill deep into my arguments to understand them and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is made and certain assumptions are defended.

If you have a particular problem with any of the core assumptions and conclusions, I would prefer you voice it not as a blatant rejection of an out-of-context comment here or there but based on the fundamentals. Reading my blog posts in sequence will certainly help, although I understand that some may consider that an unreasonable time investment for what seems like superficial nonsense on the surface.

Where is your argument against my points, Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying "Look at this nonsense" is not an argument. So far I have only gotten an ad hominem and an argument from personal incredulity.

Comment author: timtyler 16 November 2009 06:59:43PM 0 points [-]

This isn't my favourite topic - whereas you have a whole blog about it - so you are probably prepared to discuss things for far longer than I am likely to be interested.

Anyway, it seems that I do have some things to say - and we are rather off topic here. So, for my response, see:

http://lesswrong.com/lw/1dt/open_thread_november_2009/19hl

Comment author: Furcas 15 November 2009 02:16:27AM *  6 points [-]

From the second blog entry linked above:

Two fundamental assumptions:

A) Compassion is a universal value

B) It is a basic AI drive to avoid counterfeit utility

If A = true (as we have every reason to believe) and B = true (see Omohundro’s paper for details) then a transhuman AI would dismiss any utility function that contradicts A on the ground that it is recognized as counterfeit utility.

Heh.

Comment author: RobinZ 15 November 2009 02:56:10AM 7 points [-]

This quotation accurately summarizes the post as I understand it. (It's a short post.)

I think I speak for many people when I say that assumption A requires some evidence. It may be perfectly obvious, but a lot of perfectly obvious things aren't true, and it is only reasonable to ask for some justification.

Comment author: AdeleneDawner 15 November 2009 03:10:08AM *  7 points [-]

... o.O

Compassion isn't even universal in the human mind-space. It's not even universal in the much smaller space of human minds that normal humans consider comprehensible. It's definitely not universal across mind-space in general.

The probable source of the confusion is discussed in the comments - Stefan's only talking about minds that've been subjected to the kind of evolutionary pressure that tends to produce compassion. He even says himself, "The argument is valid in a “soft takeoff” scenario, where there is a large pool of AIs interacting over an extended period of time. In a “hard takeoff” scenario, where few or only one AI establishes control in a rapid period of time, the dynamics described do not come into play. In that scenario, we simply get a paperclip maximizer."

Comment author: RobinZ 15 November 2009 03:18:26AM 4 points [-]

Ah - that's interesting. I hadn't read the comments. That changes the picture, but by making the result somewhat less relevant.

(Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".)

Comment author: AdeleneDawner 15 November 2009 03:25:25AM 2 points [-]

(Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".)

Ah. That's not how I usually see the word used.

Comment author: RobinZ 15 November 2009 03:54:39AM 1 point [-]

It's my descriptivist side playing up - my (I must admit) intuition is that when people say that some thesis is "obvious", they mean that they reached this bottom line by ... well, system 1 thinking. I don't assume it means that the obvious thesis is actually correct, or even universally obvious. (For example, it's obvious to me that human beings are evolved, but that's because it's a cached thought I have confidence in through system 2 thinking.)

Actually, come to think: I know you've made a habit of reinterpreting pronouncements of "good" and "evil" in some contexts - do you have some gut feeling for "obvious" that contradicts my read?

Comment author: AdeleneDawner 15 November 2009 04:03:58AM *  3 points [-]

I generally take 'obvious' to mean 'follows from readily available evidence or intuition, with little to no readily available evidence contradicting the idea'. The idea that compassion is universal fails on the second part of that. The definitions are close in practice, though, in that most people's intuitions tend to take readily available contradictions into account... I think.

ETA: Oh, and 'obviously false' seems to me to be a bit of a different concept, or at least differently relevant, given that it's easier to disprove something than to prove it. If someone says that something is obviously true, there's room for non-obvious proofs that it's not, but if something is obviously false (as 'compassion is universal' is), that's generally a firm conclusion.

Comment author: RobinZ 15 November 2009 04:09:57AM *  2 points [-]

Yes, that makes sense - even if mine is a better description of usage, from the standpoint of someone categorizing beliefs, I imagine yours would be the better metric.

ETA: I'm not sure the caveat is required for "obviously false", for two reasons.

  1. Any substantive thesis (a category which includes most theses that are rejected as obviously false) requires less evidence to be roundly disconfirmed than it does to be confirmed.

  2. As Yvain demonstrated in Talking Snakes, well-confirmed theories can be "obviously false", by either of our definitions.

It's true that it usually takes less effort to disabuse someone of an obviously-true falsity than to convince them of an obviously-false truth, but I don't think you need a special theory to support that pattern.

Comment author: StefanPernar 16 November 2009 01:06:30PM 0 points [-]

From Robin: Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".

I feel the other way around at the moment. Namely "some people, observing the statement, may evaluate it as *false* without performing any complex analysis"

Comment author: StefanPernar 16 November 2009 12:31:18PM -1 points [-]

Perfectly reasonable. But the argument - the evidence, if you will - is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there nonetheless.

Comment author: RobinZ 16 November 2009 05:14:32PM *  -1 points [-]

I think I'd probably agree with Kaj Sotala's remarks if I had read the passages she^H^H^H^H xe had, and judging by your response in the linked comment, I think I would still come to the same conclusion as she^H^H^H^H xe. I don't think your argument actually cuts with the grain of reality, and I am sure it's not sufficient to eliminate concern about UFAI.

Edit: I hasten to add that I would agree with assumption A in a sufficiently slow-takeoff scenario (such as, say, the evolution of human beings, or even wolves). I don't find that sufficiently reassuring when it comes to actually making AI, though.

Edit 2: Correcting gender of pronouns.

Comment author: StefanPernar 17 November 2009 03:07:19AM *  1 point [-]

Full discussion with Kaj at her http://xuenay.livejournal.com/325292.html?view=1229740 live journal with further clarifications by me.

Comment author: Cyan 17 November 2009 03:30:11AM 3 points [-]

Kaj is male (or something else).

Comment author: AdeleneDawner 15 November 2009 02:19:57AM 3 points [-]

I was going to be nice and not say anything, but, yeah.

Comment author: StefanPernar 16 November 2009 12:21:18PM -2 points [-]

Since when are 'heh' and 'but, yeah' considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond your not understanding the points being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)

Comment author: wedrifid 16 November 2009 01:40:18PM 3 points [-]

Where is the logical fallacy in the presented arguments

The claim "[Compassion is a universal value] = true. (as we have every reason to believe)" was rejected, both implicitly and explicitly by various commenters. This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief.

To be fair, I must admit that the quoted portion probably does not do your position justice. I will read through the paper you mention. I (very strongly) doubt it will lead me to accept B but it may be worth reading.

Comment author: StefanPernar 16 November 2009 02:21:23PM -1 points [-]

"This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief."

But the reasons to change one's view are provided on the site, yet rejected without consideration. How about this: you read the paper linked under B, and should that convince you, perhaps you will have gained enough provisional trust that reading my writings will not waste your time to suspend your disbelief and follow some of the links on the about page of my blog. Deal?

Comment author: wedrifid 16 November 2009 03:07:13PM *  5 points [-]

How about you read the paper linked under B and should that convince you

I have read B. It isn't bad. The main problem I have with it is that the language used blurs the line between "AIs will inevitably tend to" and "it is important that the AI you create will". This leaves plenty of scope for confusion.

I've read through some of your blog and have found that I consistently disagree with a lot of what you say. The most significant disagreement can be traced back to the assumption of a universal absolute 'Rational' morality. This passage was a good illustration:

Moral relativists need to understand that they can not eat the cake and keep it too. If you claim that values are relative, yet at the same time argue for any particular set of values to be implemented in a super rational AI you would have to concede that this set of values – just as any other set of values according to your own relativism – is utterly whimsical, and that being the case, what reason (you being the great rationalist, remember?) do you have to want them to be implemented in the first place?

You see, I plan to eat my cake but don't expect to be able to keep it. My set of values is utterly whimsical (in the sense that it is arbitrary, not in the sense of incomprehension that the Ayn Rand quotes you link to describe). The reasons for my desires can be described biologically, evolutionarily, or with physics of a suitable resolution. But now that I have them, they are mine and I need no further reason.