Wei_Dai comments on Convergence Theories of Meta-Ethics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I guess I took your conjecture to be the "relative" one because whether or not it is true perhaps doesn't depend on the details of one's utility function, and we, or at least I, were talking about whether the question "what do I want?" is an important one. I'm not sure how you hope to show the "absolute" version in the same way.
Well, Omohundro showed that a certain collection of instrumental values tend to arise independently of the 'seeded' intrinsic values. In fact, decision making tends to be dominated by consideration of these 'convergent' instrumental values, rather than the human-inserted seed values.
Next, consider that those human values themselves originated as heuristic approximations of instrumental values contributing to the intrinsic value of interest to our optimization process - natural selection. The fact that we ended up with the particular heuristics that we did is not due to the fact that the intrinsic value for that process was reproductive success - every species in the biosphere evolved under the guidance of that value. The reason why humans ended up with values like curiosity, reciprocity, and toleration has to do with the environment in which we evolved.
So, my hope is that we can show that AIs will converge to human-like instrumental/heuristic values if they do their self-updating in a human-like evolutionary environment. Regardless of the details of their seeds.
That is the vision, anyways.
I notice that Robin Hanson takes a position similar to yours, in that he thinks things will turn out ok from our perspective if uploads/AIs evolve in an environment defined by certain rules (in his case property laws and such, rather than sexual reproduction).
But I think he also thinks that we do not actually have a choice between such evolution and a FOOMing singleton (i.e. FOOMing singleton is nearly impossible to achieve), whereas you think we might have a choice or at least you're not taking a position on that. Correct me if I'm wrong here.
Anyway, suppose you and Robin are right and we do have some leverage over the environment that future AIs will evolve in, and can use that leverage to predictably influence the eventual outcome. I contend we still have to figure out what we want, so that we know how to apply that leverage. Presumably we can't possibly make the AI evolutionary environment exactly like the human one, but we might have a choice over a range of environments, some more human-like than others. But it's not necessarily true that the most human-like environment leads to the best outcome. (Nor is it even clear what it means for one environment to be more human-like than another.) So, among the possible outcomes we can aim for, we'll still have to decide which ones are better than others, and to do that, we need to know what we want, which involves, at least in part, either figuring out what morality is, or showing that it's meaningless or otherwise unrelated to what we want.
Do you disagree on this point?
I tend toward FOOM skepticism, but I don't think it is "nearly impossible". Define a FOOM as a scenario leading in at most 10 years from the first human-level AI to a singleton which has taken effective control over the world's economy. I rate the probability of a FOOM at 40% assuming that almost all AI researchers want a FOOM and at 5% assuming that almost all AI researchers want to prevent a FOOM. I'm under the impression that currently a majority of singularitarians want a FOOM, but I hope that that ratio will fall as the dangers of a FOOMing singleton become more widely known.
No, I agree. Agree enthusiastically. Though I might change the wording just a bit. Instead of "we still have to figure out what we want", I might have written "we still have to negotiate what we want".
My turn now. Do you disagree with this shift of emphasis from the intellectual to the political?
I suppose if you already know what you personally want, then your next problem is negotiation. I'm still stuck on the first problem, unfortunately.
ETA: What is your answer to The Lifespan Dilemma, for example?
I only skimmed that posting, and I failed to find any single question there which you apparently meant for me to answer. But let me invent my own question and answer it.
Suppose I expect to live for 10,000 years. Omega appears and offers me a deal. Omega will extend my lifetime to infinity if I simply agree to submit to torture for 15 minutes immediately - the torture being that I have to actually read that posting of Eliezer's with care.
I would turn down Omega's offer without regret, because I believe in (exponentially) discounting future utilities. Roughly speaking, I count the pleasures and pains that I will encounter next year as something like 1% less significant than this year's. Doing the math, this makes my first Omega-granted bonus year, 10,000 years from now, worth about 10^-44 as much as this year. Or, saying it another way, my first 'natural' 10,000 years is worth about 10^43 times as much as the infinite period of time thereafter. The next fifteen minutes is more valuable than that infinite period of time. And I don't want to waste those 15 minutes re-reading that posting.
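The arithmetic here can be sanity-checked directly. A minimal sketch, assuming the stated 1%/year exponential discount (factor 0.99 per year); the specific figures are my reconstruction, not the commenter's:

```python
# Sanity check of 1%/year exponential discounting over 10,000 years.

def weight(years, rate=0.01):
    """Discounted weight of utility received `years` from now."""
    return (1 - rate) ** years

w_bonus = weight(10_000)             # first bonus year: ~2.2e-44

# Present value of all years from t = 10,000 onward (geometric series):
pv_tail = w_bonus / 0.01             # ~2.2e-42

# Present value of the first 10,000 'natural' years:
pv_first = (1 - w_bonus) / 0.01      # ~100

# The next fifteen minutes, as a fraction of the current year:
pv_15min = 15 / (365.25 * 24 * 60)   # ~2.9e-5

# Fifteen minutes now really does outweigh everything after year 10,000:
assert pv_15min > pv_tail
```

The exact value of 0.99^10000 comes out near 2×10^-44, but the qualitative conclusion is insensitive to the precise exponent: under any steady exponential discount, a short interval now swamps an eternity that starts far enough away.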
And I am quite sure that 99% of mankind would agree with me that 1% discounting per year is not an excessive discount rate. That is, in large part, why I think negotiation is important. It is because typical SIAI thinking about morality is completely unacceptable to most of mankind and SIAI seem to be in denial about it.
Have you thought through all of the implications of a 1% discount rate? For example, have you considered that if you negotiate with someone who discounts the future less steeply, say at 0.1% per year, you'll end up trading away the use of all of your resources after some number of years X in exchange for the use of his resources before year X, so that almost the entire future of the universe will be determined by the values of those whose discount rates are lower than yours?
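The trade dynamic described here can be sketched numerically. This is my own illustrative construction (only the 1% and 0.1% rates come from the comment):

```python
# How much more a 0.1%/year discounter values year-t resources than a
# 1%/year discounter does.

def weight(t, rate):
    """Exponential discount weight at delay t years."""
    return (1 - rate) ** t

for t in (10, 100, 1000):
    ratio = weight(t, 0.001) / weight(t, 0.01)
    print(f"year {t}: patient agent values it {ratio:.1f}x more")

# The ratio is (0.999/0.99)**t, which grows exponentially in t, so past
# some year X the patient agent can always buy out the impatient agent's
# far-future claims with a comparatively tiny amount of near-term resources.
```

By year 1000 the ratio is already in the thousands, which is why the gains from trade run entirely one way in the far future.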
If that doesn't bother you, and you're really pretty sure you want a 1% discount rate, do you not have other areas where you don't know what you want?
For example, what exactly is the nature of pleasure and pain? I don't want people to torture simulated humans, but what if they claim that the simulated humans have been subtly modified so that they only look like they're feeling pain, but aren't really? How can I tell if some computation is having pain or pleasure?
And here's a related example: Presumably having one kilogram of orgasmium in the universe is better than having none (all else equal), but you probably don't want to tile the universe with it. Exactly how much less valuable is a second kilogram of the stuff than the first? (If you don't care about orgasmium in the abstract, suppose that it's a copy of your brain experiencing some ridiculously high amount of pleasure.)
Have you already worked out all such problems, or at least know the principles by which you'll figure them out?
I don't know about thinking through all of the implications, but I have certainly thought through that one. Which is one reason why I would advocate that any AIs we build be hard-wired with a rather steep discount rate. Entities with very low discount rates are extremely difficult to control through market incentives. Murder is the only effective option, and the AI knows that, leading to a very unstable situation.
Oh, I'm sure I do. And I'm sure that what I want will change when I experience the Brave New World for myself. That is why I advocate avoiding any situation in which I have to specify my fragile values perfectly the first time - a situation we get into only because someone decided that the AI should make its own decisions about self-improvement, so that we need to make sure its values are ultra-stable.
I certainly have some sympathy for people who find themselves in that kind of moral quandary. Those kinds of problems just don't show up when your moral system requires no particular obligations to entities you have never met, with whom you cannot communicate, and with whom you have no direct or indirect agreements.
I presume you ask rhetorically, but as it happens, the answer is yes. I at least know the principles. My moral system is pretty simple - roughly a Humean rational self-interest, but as it would play out in a fictional society in which all actions are observed and all desires are known. But that still presents me with moral quandaries - because in reality all desires are not known, and in order to act morally I need to know what other people want.
I find it odd that utilitarians seem less driven to find out what other people want than do egoists like myself.
Control - through market incentives?!? How not to do it, surely. Soon the machine will have all the chips, and you will have none - and therefore nothing to bargain with.
The more conventional solution is to control the machine by programming its brain. Then, control via market incentives becomes irrelevant. So: I don't think this reason for discounting is very practical.
Odd. I was expecting that it would trade any chips it happened to acquire for computronium, cat girls, and cat boys (who would perform scheduled maintenance in its volcano lair). Agents with a high discount rate just aren't that interested in investing. Delayed gratification just doesn't appeal to them.
If you have already settled on a moral system, then it's totally understandable why you might not be terribly interested in meta-ethics (in the sense of "the nature of morality") at this point, but more in applied ethics, which I now see is what your post is really about. But I wish you had mentioned that fact several comments upstream, when I said that I'm interested in meta-ethics because I'm not sure what I want. If you had, I probably wouldn't have tried to convince you that meta-ethics ought to be of interest to you too.
Wow! Massive confusion. First, let me clarify that I am interested in meta-ethics. I've read Hume, G. E. Moore, Nozick, Rawls, Gauthier, and tried to read (since I learned of him here) Parfit. Second, I don't see why you would expect someone who has settled on a moral system to lose interest in meta-ethics. Third, I am totally puzzled how you could have reached the conclusion that my post was about applied ethics. Is there any internal evidence you can point to?
I would certainly agree that our recent conversation has veered into applied ethics. But that is because you keep asking applied ethics questions (apparently for purposes of illustration) and I keep answering. Sorry, my fault. I shouldn't answer rhetorical questions.
I wish I had realized that convincing me of that was what you were trying to do. I was under the impression that you were arguing that clarifying and justifying ones own ethical viewpoint is the urgent task, while I was arguing that comprehending and accommodating the diversity in ethical viewpoints among mankind is more important.
I am pretty sure that many humans discount faster than this today, on entirely sensible and rational grounds. What dominates the future has to do with power and reproductive rates, as well as discounting - and things like senescence and fertility decline make discounting sensible.
Basically I think that you can't really have a sensible discussion about this without distinguishing between instrumental discounting and ultimate discounting.
Instrumental discounting is inevitable - and can be fairly rapid. It is ultimate discounting that is more suspect.
I suspect that 99% of mankind would give different answers to that question, depending on whether it's framed as giving up X now in exchange for receiving Y N years from now, or X N years ago for Y now.
Not to mention that typical humans behave like hyperbolic discounters, and many cannot even be made to understand the concept of a "discount rate".
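For readers unfamiliar with the term: a hyperbolic discounter weights a delay of t years by roughly 1/(1 + kt) rather than exponentially, which produces the famous preference reversals. A toy illustration (the dollar amounts and delays are mine, chosen purely for exposition):

```python
def hyperbolic(t, k=1.0):
    """Hyperbolic discount weight: steep for short delays, shallow for long."""
    return 1 / (1 + k * t)

# Viewed 10 years in advance: $100 at year 10 vs $120 at year 11.
prefer_large_far = 100 * hyperbolic(10) < 120 * hyperbolic(11)

# Viewed at year 10 itself: $100 now vs $120 in one year.
prefer_small_near = 100 * hyperbolic(0) > 120 * hyperbolic(1)

# Both hold: the choice flips as the payoffs draw near -- something an
# exponential discounter (weight r**t for fixed r < 1) never does.
assert prefer_large_far and prefer_small_near
```

This time-inconsistency is part of why asking typical humans for their "discount rate" yields no stable answer.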
Quite probably true. Which of course suggests the question: How (or how much) should "typical humans" be consulted about our plans for their future?
Yeah, I know that is an unfair way to ask the question. And I admit that Eliezer, at least, is actually doing something to raise the waterline. But it is a serious ethical question for utilitarians and a serious political question for egoists. And the closest thing I have seen to an answer for that question around here is something like "Well, we will scan their brains, or observe their behavior, or something. And then try to get something coherent out of that data. But God forbid we should ask them about it. That would just confuse things."
It might make an interesting rationality exercise to have 6-10 people conduct some kind of discussion/negotiation/joint-decision-making exercise to flesh out their intuitions as to the type of post-singularity society they would like to live in.
My intuition is that, even if you are not sure what you want, the interactive process will probably help you to clarify exactly what you do not want, and thus assist in both personal and collective understanding of values.
It might be even more interesting to have two or more such 'negotiations' proceeding simultaneously, and then compare results.
Sign me up for 100 years with the catgirls in my volcano lair.
More generally I (strongly) prefer a situation in which the available neg-entropy is distributed, for the owners to do with as they please (with limits). That moves negotiations to be of the 'trade' kind rather than the 'politics' kind. Almost always preferable.
I'd be willing to participate in such an exercise.
Automating investing has been going fairly well. It wouldn't be very surprising to me if we got a dominant, largely machine-operated hedge fund that "has taken effective control over the world's economy" before we get human-level machine intelligence.