Intelligence isn't a magical single-dimensional quality. An AGI may be generally smarter than EY, but not have the specific FAI theory that EY has developed.
Any AGI will have all the dimensions required to make a human-level or greater intelligence. If it is indeed smarter, then it will be able to figure the theory out for itself if the theory is obviously correct, or find a way to acquire it more efficiently.
I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.
So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.
[P]resent only one idea at a time.
Most posts do present one idea at a time. However, it may not seem that way, because most of the ideas presented are additive - that is, you have to have a fairly good background in topics that have been presented previously in order to understand the current one. OB and LW are hard to get into for the uninitiated.
To provide more background and context, with the necessarily larger numbers of ideas being presented, while still getting useful feedback from readers.
That is what the sequences were designed to do - give the background needed.
The examples given in the article are bad examples - any decent concept of utility could deal with them pretty easily - but there are good examples he could've used that really do show some underlying ambiguity in the concept around the edges. I think most of those are solvable with enough creativity, and enough willingness not to go "Oh, look, something that appears to be a minor surface-level problem, let's immediately give up and throw out the whole edifice!"
But that sort of thing doesn't really matter as regards whether we should use utility for moral judgments. It doesn't have to be perfect, it just has to be good enough. It doesn't take any kind of complicated distinction between hedonism and preference to solve the trolley problem, it just takes the understanding that five lives are, all things being equal, more important than four lives.
This sort of thing is one reason I've tried to stop using the word "utilitarianism" and started using the word "consequentialism". It doesn't set off the same defenses as "utility", and if people agree to judge actions by how well they turn out general human preference similarity can probably make them agree on the best action even without complete agreement on a rigorous definition of "well".
it just takes the understanding that five lives are, all things being equal, more important than four lives.
Your examples rely too heavily on "intuitively right" and ceteris paribus conditioning. It is not always the case that five lives are more important than four, and that idea has been disputed several times.
if people agree to judge actions by how well they turn out general human preference
What is the method you use to determine how things will turn out?
similarity can probably make them agree on the best action even without complete agreement on a rigorous definition of "well"
Does consensus make decisions correct?
The economist's utility function is not the same as the ethicist's utility function. The goal of the economist is to describe and predict human behavior, so naturally, the economist's utility function is ill-suited for normative conclusions.
The ethicist's utility function, on the other hand, summarizes what you actually want, should you have the opportunity to sit down and really think about all of the possibilities. Utility in the ethicist's sense and happiness are not the same thing. Happiness is an emotion, a feeling. Utility (in the ethicist's sense) represents what you want, whether or not it is going to make you happy.
If this isn't entirely clear, consider that both happiness and the economist's utility function (they aren't the same thing either, mind you!) summarize a specific set of adaptations which would lead the actor to maximize his or her genetic fitness in some ancestral environment. The ethicist's utility function summarizes all of your values. Sometimes - many times - these values and adaptations come into conflict. For example, one adaptation for men is to treat a step child worse than a biological child, including up to (if getting caught is unlikely) murder. This will not be in the ethicist's utility function.
side note: Nozick's experience machine is no problem for the ethicist's utility function. Do you see why?
p.s.: you might want to reformat your link
The economist's utility function is not the same as the ethicist's utility function
According to whom? Are we just redefining terms now?
As far as I can tell, your definition is the same as Bentham's, only with rules that bind more weakly for the practitioner.
I think someone started (incorrectly) using the term and it has taken hold. Now a bunch of cognitive dissonance is fancied up to make it seem unique because people don't know where the term originated.
This is a problem for both those who'd want to critique the concept, and for those who are more open-minded and would want to learn more about it.
Anyone who is sufficiently technically minded undoubtedly finds it frustrating to read books that offer broad-brush counterfactuals about decision making and explanation without delving into the details of their processes. I am thinking of books like Freakonomics, The Paradox of Choice, Outliers, Nudge, etc.
These books are very accessible, but they lack the in-depth analysis needed for thorough critique and understanding. Writings like Global Catastrophic Risks, and the other written deconstructions of the necessary steps toward a technological singularity, lack those spell-it-out-for-us-all sections that Gladwell et al. make their living from. Reasonably so: the issue of the singularity is so much more complex and involved that slogans and banner phrases do not do the field justice. Indeed, oversimplifying is arguably detrimental and can backfire.
I think, however, that what is needed is a clear, short, and easily understood consensus on why this crazy AI thing is the inevitable result of reason, why it is necessary to think about, how it will help humanity, and how it could reasonably hurt humanity.
The SIAI tried to do this:
http://www.singinst.org/overview/whatisthesingularity
http://www.singinst.org/overview/whyworktowardthesingularity
Neither of these is compelling in my view. Both go into some detail and leave the unknowledgeable reader behind. Most importantly, neither has what people want: a clear vision of exactly what we are working toward. The problem is that there isn't a clear vision; there is no consensus on how to start. Which is why, in my view, the SIAI is more focused on "global risks" than on just stating "we want to build an AI"; frankly, people get scared by the latter.
So is this paper going to resolve the dichotomy between the simplified and complex approach, or will we simply be replicating what the SIAI has already done?
Thus if we want to avoid being arbitraged, we should cleave to expected utility.
Sticking with expected utility works in theory if you have a discrete set of options, can discern between them all well enough that they can be judged on equal footing, and the cost of the process (in time or whatever) is not greater than its marginal gain. Here is an example I like: go to the supermarket and optimize your expected utility for breakfast cereal.
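A toy calculation (all numbers invented) makes the cereal point concrete: when the utility spread between options is small, the cost of deliberating over them can exceed the gain from picking the best one.

```python
# Toy illustration (numbers are made up): optimizing expected utility only
# pays off if the utility spread exceeds the cost of deliberation.

# Hypothetical utilities for each cereal on the shelf, in arbitrary "utils".
utilities = {"bran": 10.0, "oats": 10.3, "granola": 10.2, "corn": 9.8}

# Cost, in the same units, of carefully evaluating one option.
cost_per_option = 0.2

def compare_strategies(utilities, cost_per_option):
    """Net utility of a full optimizer vs. a satisficer who grabs the first box."""
    optimizer = max(utilities.values()) - cost_per_option * len(utilities)
    satisficer = next(iter(utilities.values()))  # no deliberation cost at all
    return optimizer, satisficer

optimizer, satisficer = compare_strategies(utilities, cost_per_option)
# Here the optimizer nets 10.3 - 0.8 = 9.5 while the satisficer gets 10.0:
# the marginal gain from optimizing (0.3) is smaller than its cost (0.8).
```

With four near-identical cereals, the satisficer comes out ahead; the ranking flips only when the spread between options grows or the per-option cost shrinks.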
The money pump only works if your "utility function" is static - or, more accurately, if your preferences update more slowly than the pumper can exploit the trade imbalance; e.g., arbitrage doesn't work if the person being outsourced to can also outsource.
I can take advantage of your vN-M axioms if I have any information about one of your preferences that you do not have (information that need not be obtained illegally); as a result, sticking to the axioms would leave you money-pumped regardless.
This might have something to do with how public commitment may be counterproductive: once you've effectively signaled your intentions, the pressure to actually implement them fades away.
I was thinking about this today in the context of Kurzweil's future predictions and I wonder if it is possible that there is some overlap. Obviously Kurzweil is not designing the systems he is predicting but likely the people who are designing them will read his predictions.
I wonder, if they see the timelines he predicts, whether they will think: "oh, well [this or that technology] will be designed by 2019, so I can put it off for a little while longer, or maybe someone else will take the project instead."
That might not be the case; in fact, they might use the predicted timeline as a motivator - a deadline to beat. Regardless, I think it would be good for developers to keep things like that in mind.
Okay, I was talking about utility maximization in the decision theory sense, i.e., computations of expected utility, etc.
As far as happiness being The One True Virtue, well, that's been explicitly addressed
Anyways, "maximize happiness above all else" is explicitly not it. And utility, as discussed on this site is a reference to the decision theoretic concept. It is not a specific moral theory at all.
Now, the stuff that we consider morality would include happiness as a term, but certainly not as the only thing.
Virtue ethics, as you describe it, gives me an "eeew" reaction, to be honest. It's the right thing to do simply because it's what you were optimized for?
If I somehow bioengineer some sort of sentient living weapon thing, is it actually the proper moral thing for that being to go around committing mass slaughter? After all, that's what it's "optimized for"...
As I replied to Tarleton, the "Not for the Sake of Happiness (Alone)" post does not address how he came to his conclusions via any specific decision-theoretic optimization. He gives very loose, subjective terms for his conclusions:
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
which is why I worded my question as I did the first time. I don't think he has done the same amount of thinking on his epistemology as he has on his TDT.
As I asked in response to your other argument: Who has given utility this new definition?
I think perhaps there is a disconnect between the origins of utilitarianism and how people who are not economists (even some economists) understand it.
You, as well as black belt bayesian, are making the point that utilitarianism as used in an economic sense is somehow non-ethics-based, which could not be more incorrect: utilitarianism was explicitly developed with goal-seeking behavior in mind, stated by Bentham as the greatest hedonic happiness. It was not derived as a simple calculator, and it is rarely used as such in serious academic works, because it is so insanely sloppy, subjective, and arguably useless as a metric.
True, some economists do use it in passing, and it is introduced in academic economic theory as a mathematical principle, but I have yet to see an authoritative study that uses expected utility as a variable; nor was it presented in my undergraduate economics program as a reliable measure - again, which is why you do not see it in authoritative works.
You both imply that the economist's version of utility is non-normative. Again, as I said before, it was created specifically to guide economic decision making - how homo economicus should act. Does the fact that it can be used both normatively and descriptively in economic decision making change the definition? No, because, as you said, they use the same math. People forget that political economy was and still is normative, whether economists want it to be or not.
Which leads me to what I think is the root of this problem: understanding what economics is. At its heart, economics is at once descriptive, prescriptive, and normative. Current trends in economics seek to turn the discipline into a physics-esque one that merely describes economic patterns. Yet even these camps must hold the natural rate of employment as good, trade as welfare-enhancing, public goods as a multiplier of good, etc. Lest we forget that Keynesianism was hailed as the next great coming, one that would revolutionize the way humans interact. Economics without normative conclusions is just statistics.
I realize this is a semantic point; however, if we want to use a term, then let's use it correctly. I know Mr. Yudkowsky has posted before about the uselessness of debating definitions, but we are talking about the same thing here.
All of this utility-redefining discussion smacks of cognitive dissonance to me, because it seems to be searching for some authority for using the term "utility" the way people around here want to use it. If you want normative utilitarianism, then you'll have great fun with Bentham's utilitarianism, as it is and has always been normative. The real beef seems to lie between expected and average utility - both of which are still normative anyway, so it is a moot point.
I have thought of making a separate post on utilitarianism, its history, and its errors, mostly because it is the aspect I have been most interested in for the past decade. However, I doubt it would give any more information than what already exists on the web and in print for interested parties.
edit: Here is a perfect example of my point about the silliness of expected-utility calculation in empirical metrics. The author uses vN-M expected utility based on assumed results, expressed as summed monetary and psychic income. There are no units; there is no actual calculation. There are, however, nice pretty formulas which do nothing for us but restate that a terrorist must gain more from his terrorism than from other activities.