ChrisHibbert comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions
Do you disagree with Eliezer substantively? If so, can you summarize how much of his argument you've analyzed, and where you reach different conclusions?
Yes - I disagree with Eliezer and have analyzed a fair bit of his writing, although the style in which it is presented and collected here is not exactly conducive to that effort. Feel free to search for my blog for a detailed analysis and a summary of the core similarities and differences in our premises and conclusions.
Assuming I have the correct blog, these two are the only entries that mention Eliezer by name.
Edit: The second entry doesn't mention him, actually. It comes up in the search because his name is in a trackback.
Re: "Assumption A: Human (meta)morals are not universal/rational. Assumption B: Human (meta)morals are universal/rational.
Under assumption A one would have no chance of implementing any moral framework into an AI since it would be undecidable which ones they were." (source: http://rationalmorality.info/?p=112)
I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals.
I had a look at some of the other material. IMO, Stefan acts in an authoritative manner, but comes across as a not-terribly-articulate newbie on this topic - and he has adopted what seems to me to be a bizarre and indefensible position.
For example, consider this:
"A rational agent will always continue to co-exist with other agents by respecting all agents utility functions irrespective of their rationality by striking the most rational compromise and thus minimizing opposition from all agents." http://rationalmorality.info/?p=8
"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals."
Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique.
I realize that one will have to drill deep into my arguments to understand them and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is made and certain assumptions are defended.
If you have a particular problem with any of the core assumptions or conclusions, I would prefer you voice it not as a flat rejection of an out-of-context comment here or there but as an argument about the fundamentals. Reading my blog posts in sequence will certainly help, although I understand that some may consider that an unreasonable time investment for what seems like superficial nonsense on the surface.
Where is your argument against my points, Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying "Look at this nonsense" is not an argument. So far all I have gotten is an ad hominem and an argument from personal incredulity.
This isn't my favourite topic - whereas you have a whole blog about it - so you are probably prepared to discuss it for far longer than I am likely to remain interested.
Anyway, it seems that I do have some things to say - and we are rather off topic here. So, for my response, see:
http://lesswrong.com/lw/1dt/open_thread_november_2009/19hl
From the second blog entry linked above:
Heh.
This quotation accurately summarizes the post as I understand it. (It's a short post.)
I think I speak for many people when I say that assumption A requires some evidence. It may be perfectly obvious, but a lot of perfectly obvious things aren't true, and it is only reasonable to ask for some justification.
... o.O
Compassion isn't even universal in the human mind-space. It's not even universal in the much smaller space of human minds that normal humans consider comprehensible. It's definitely not universal across mind-space in general.
The probable source of the confusion is discussed in the comments - Stefan's only talking about minds that've been subjected to the kind of evolutionary pressure that tends to produce compassion. He even says himself, "The argument is valid in a “soft takeoff” scenario, where there is a large pool of AIs interacting over an extended period of time. In a “hard takeoff” scenario, where few or only one AI establishes control in a rapid period of time, the dynamics described do not come into play. In that scenario, we simply get a paperclip maximizer."
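To see why the pool size and interaction time matter, here is a toy sketch (my own, not from the thread): in an iterated prisoner's dilemma against a reciprocating peer, cooperation outscores defection over many rounds, while in a one-shot encounter defection wins. The payoff matrix is the standard one, and the tit-for-tat peer is an assumption standing in for the "large pool of AIs interacting over an extended period."

```python
# Toy model: repeated interaction ("soft takeoff") can select for
# compassion-like cooperation; a one-shot encounter ("hard takeoff")
# cannot. Standard prisoner's dilemma payoffs; the tit-for-tat opponent
# is a hypothetical stand-in for a pool of interacting agents.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(my_strategy, rounds):
    """Score my_strategy against a tit-for-tat peer over `rounds` rounds."""
    total, their_move = 0, "C"   # tit-for-tat opens by cooperating
    for _ in range(rounds):
        my_move = my_strategy(their_move)
        total += PAYOFF[(my_move, their_move)]
        their_move = my_move     # tit-for-tat then mirrors my last move
    return total

always_cooperate = lambda their_last: "C"
always_defect = lambda their_last: "D"

print(play(always_defect, 1), play(always_cooperate, 1))      # 5 vs 3
print(play(always_defect, 100), play(always_cooperate, 100))  # 104 vs 300
```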
Ah - that's interesting. I hadn't read the comments. That changes the picture, but by making the result somewhat less relevant.
(Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".)
Ah. That's not how I usually see the word used.
It's my descriptivist side playing up - my (I must admit) intuition is that when people say that some thesis is "obvious", they mean that they reached this bottom line by ... well, system 1 thinking. I don't assume it means that the obvious thesis is actually correct, or even universally obvious. (For example, it's obvious to me that human beings are evolved, but that's because it's a cached thought I have confidence in through system 2 thinking.)
Actually, come to think: I know you've made a habit of reinterpreting pronouncements of "good" and "evil" in some contexts - do you have some gut feeling for "obvious" that contradicts my read?
I generally take 'obvious' to mean 'follows from readily available evidence or intuition, with little to no readily available evidence to contradict the idea'. The idea that compassion is universal fails on the second part of that. The definitions are close in practice, though, in that most people's intuitions tend to take readily available contradictions into account... I think.
ETA: Oh, and 'obviously false' seems to me to be a bit of a different concept, or at least differently relevant, given that it's easier to disprove something than to prove it. If someone says that something is obviously true, there's room for non-obvious proofs that it's not, but if something is obviously false (as 'compassion is universal' is), that's generally a firm conclusion.
From Robin: Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".
I feel the other way around at the moment - namely, "some people, observing the statement, may evaluate it as *false* without performing any complex analysis".
Perfectly reasonable. But the argument - the evidence, if you will - is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there nonetheless.
I think I'd probably agree with Kaj Sotala's remarks if I had read the passages she^H^H^H^H xe had, and judging by your response in the linked comment, I think I would still come to the same conclusion as she^H^H^H^H xe. I don't think your argument actually cuts with the grain of reality, and I am sure it's not sufficient to eliminate concern about UFAI.
Edit: I hasten to add that I would agree with assumption A in a sufficiently slow-takeoff scenario (such as, say, the evolution of human beings, or even wolves). I don't find that sufficiently reassuring when it comes to actually making AI, though.
Edit 2: Correcting gender of pronouns.
Full discussion with Kaj at her live journal - http://xuenay.livejournal.com/325292.html?view=1229740 - with further clarifications by me.
Kaj is male (or something else).
I was going to be nice and not say anything, but, yeah.
Since when are 'heh' and 'but, yeah' considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond you not understanding the points that are being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)
The claim "[Compassion is a universal value] = true. (as we have every reason to believe)" was rejected, both implicitly and explicitly, by various commenters. This isn't a logical fallacy, but it is cause to dismiss the argument if the readers do not, in fact, have every reason to hold said belief.
To be fair, I must admit that the quoted portion probably does not do your position justice. I will read through the paper you mention. I (very strongly) doubt it will lead me to accept B but it may be worth reading.
"This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief."
But the reasons to change one's view are provided on the site, yet they are rejected without consideration. How about this: read the paper linked under B, and should it convince you, perhaps you will have gained enough provisional trust that reading my writings will not waste your time - enough to suspend your disbelief and follow some of the links on the about page of my blog. Deal?
I have read B. It isn't bad. The main problem I have with it is that the language used blurs the line between "AIs will inevitably tend to" and "it is important that the AI you create will". This leaves plenty of scope for confusion.
I've read through some of your blog and have found that I consistently disagree with a lot of what you say. The most significant disagreement can be traced back to the assumption of a universal absolute 'Rational' morality. This passage was a good illustration:
You see, I plan to eat my cake but don't expect to be able to keep it. My values are utterly whimsical (in the sense that they are arbitrary, not in the sense of incomprehension that the Ayn Rand quotes you link to describe). The reasons for my desires can be described biologically, evolutionarily, or with physics of a suitable resolution. But now that I have them, they are mine and I need no further reason.