All of AndrewKemendo's Comments + Replies

I had not read that part. Thanks.

I do not see any difference between inductive bias as it is written there and the dictionary and Wikipedia definitions of faith:

Something that is believed especially with strong conviction (http://www.merriam-webster.com/dictionary/faith)

Faith is to commit oneself to act based on sufficient experience to warrant belief, but without absolute proof.

I think you, EY, and most here use the term "faith" in a historical context related to religion rather than in its definitional context, as it relates to epistemological concerns of trust in an idea or claim.

The best definition I have found so far for faith is thus:

Faith is to commit oneself to act based on sufficient experience to warrant belief, but without absolute proof.

So I have no problem using faith and induction interchangeably because it is used just as you say:

inferring the future from the past (or the past from the present), which basically requires the universe to consistently obey the same laws

... (read more)
3Psychohistorian
The problem with this definition is that it describes every action you will ever take. "Absolute proof" does not exist with respect to anything in the real world. You only have absolute certainty in a definitional context, e.g. "There are no married bachelors": this is true by definition, but tells you nothing about the actual world. Given that the last statement applies to every single instance, your statement reduces to "commit oneself to act based on sufficient experience to warrant belief," which sounds just like "rational action." That's why many of us take issue with your definition of faith; it does not appear to be a productive concept. Insofar as absolute certainty is impossible, if you're using faith to get you to absolute certainty, you're doing something very, very wrong. The other problem with this definition is that it is not really compatible with the dictionary definitions, the most pertinent one of which is "belief in the absence of proof."

Intuition (what you call "faith") is evidence.

If you will, please define intuition as you understand it.

From how I understand intuition, it is knowledge whose origin cannot be determined. I have certainly experienced the "I know I read something about that somewhere but I just can't remember" feeling before and been right about it. However, just as often I have been wrong about conclusions I have come to through this means.

I think your entire post gives the same visceral description as someone would describe about having... (read more)

2wedrifid
I still call it intuition once I (believe I) can work out how it originated. Perhaps I would go with "cannot be easily dissected".

confidence level.

Most people do not understand what confidence intervals or confidence levels are, at least in my interactions. Unless you have had some sort of statistics course (even a basic one), you probably haven't heard of them.
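For anyone who hasn't seen one, here is a minimal sketch of what a confidence interval is, in Python, with made-up sample data; the normal approximation and the 1.96 multiplier are the usual textbook shortcut, not the only choice:

```python
# A minimal sketch of a 95% confidence interval for a sample mean,
# using a normal approximation. The sample data is hypothetical.
import math
import statistics

sample = [4.1, 5.3, 4.8, 5.0, 4.6, 5.2, 4.9, 4.4]  # made-up measurements
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error
z = 1.96  # ~95% confidence level under a normal approximation

low, high = mean - z * sem, mean + z * sem
print(f"95% CI for the mean: ({low:.2f}, {high:.2f})")
# A 95% confidence level means: if we repeated this sampling procedure many
# times, about 95% of intervals built this way would contain the true mean.
```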

2MrHen
I would agree, but I think that teaching them a new term is easier than changing their conception of the term "faith."

I think it improperly relabels "uncertainty" as "faith."

Perhaps. The way I see uncertainty as it pertains to any claim: there will almost always be a reasonable counterclaim, and dismissing the counterclaim in order to accept the premise is faith in the same sense.

The only thing one truly must have faith in (and please correct me if you can; I'd love to be wrong) is induction, and if you truly lacked faith in induction, you'd literally go insane.

Intuition and induction are in my view very similar to wha... (read more)

3Psychohistorian
I don't see how this works. Induction is, basically, the principle of inferring the future from the past (or the past from the present), which basically requires the universe to consistently obey the same laws. The problem with this, of course, is that the only evidence we have that the future will be like the past is the fact that it always has been, so there's a necessary circularity. You can't provide evidence for induction without assuming induction is correct; indeed, the very concept of "evidence" assumes induction is correct.

Intuition, on the other hand, is entirely susceptible to being analyzed on its merits. If our intuition tends to be right, we are justified in relying on it, even if we don't understand precisely how it works. If it isn't typically right for certain things, or if it contradicts other, better evidence, we're wrong to rely on it, even though believing contrary to our intuition can be difficult. I don't see how either of these concepts can be equated with a conventional use of "faith."

Edited in response to EY's comment below: I'm not meaning to compare faith in induction to faith in religion at all. The "leap" involved differs extraordinarily, as one is against evidence and the other is evidence. Not to mention every religious person also believes in induction, so the faith required for religion is necessarily in addition to that required by everyone to not get hit by a bus.

Sure. What's not rational is to believe ... politicians

I think that is likely the best approach.

Your argument seems to conclude that:

It is impossible to reason with unreasonable people

Agreed. Now what?

Ostensibly your post is about how to swing the ethos of a large group of people towards behaving differently. I would argue that has never been necessary and still is not.

A good hard look at any large political or social movement reveals a small group of very dedicated and motivated people, and a very large group of passive marginally interested people who agree with whatever sounds like it is in their best interest without them really doing too muc... (read more)

0cabalamat
Sure. What's not rational is to believe that politicians will deliver on the promise of reducing waste. All politicians say they will do it, and have done for a long time, but governments are not noticeably less wasteful than they were 50 or so years ago. It's therefore irrational to believe a politician when they say they will cut waste, unless they say in detail how they will do so (which they usually don't).

I am generally not a fan of internet currency in all its forms, because it draws attention away from the argument.

Reddit, which this site is based on, disabled its subtractive karma rule for all submissions and comments. Submissions with more downvotes than upvotes simply don't go anywhere, while negatively voted comments get buried, similar to how they do here. That seems like a good way to organize the system.

Is the reason that it was implemented in order to be signaling for other users or is it just an artifact of the reddit API? Would disabling the act... (read more)

0zero_call
I agree with the tone of your post. Voting doesn't work very effectively, in all sorts of situations (e.g., Bush x2). I do kind of like the slashdot-style system where you can mark a post from a list of flags, e.g., insightful, funny, intelligent, etc., and perhaps add in a few negative or critical flags (juvenile, trivial, poorly worded, etc.). I think these changes would encourage a more holistic evaluation of responses and would work to avoid the overall gruffness and over-priming effect related to the current system.

Edit: Even better, you could attach a +1 signifier to positive indications (intelligent, insightful, etc.), and a -1 signifier to negative indications (poorly worded, juvenile, etc.), and then enforce that whenever someone votes, they must include a flag. Then, when displaying total points, the system should take the average over all categories, so that the only truly negative posts are those which are extremely dynamically poor, i.e., poor in every category.
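A minimal sketch of the scoring scheme zero_call proposes, with hypothetical votes and category names; the per-category averaging is what keeps a post positive unless it is poor in every category:

```python
# A sketch of flag-based voting: each vote is (category, signifier),
# the score is the mean of the per-category means. Votes are made up.
votes = [
    ("insightful", +1), ("funny", +1), ("poorly worded", -1),
    ("insightful", +1), ("juvenile", -1),
]

totals = {}
for category, signifier in votes:
    totals.setdefault(category, []).append(signifier)

category_means = {c: sum(v) / len(v) for c, v in totals.items()}
score = sum(category_means.values()) / len(category_means)
print(category_means, score)  # negative only if most categories average negative
```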
Alicorn190

I have no clever reply to most of your comment, but:

I personally do not submit more responses and posts because of the karma system.

In my case, it's very much a motivating factor. In fact, I do not think I would have ever been led to comment or post at all without karma. I think this is primarily because I consider it exceptionally valuable, easy-to-read instant feedback on how I'm being received, which I'm normally bad at discerning and find a very important component of any sort of interaction. I virtually never comment on other blogs at all.

5CarlShulman
The only reason I look at a commenter or poster's karma is when the post or comment seems tremendously bad, and I am trying to decide how much benefit of the doubt to give. In that case I mainly look to see if it's significantly above zero, and don't care beyond that.

The most important of which is: if you only do what feels epistemically "natural" all the time, you're going to be, well, wrong.

Then why do I see the term "intuitive" used around here so much?

I say this by way of preamble: be very wary of trusting in the rationality of your fellow humans, when you have serious reasons to doubt their conclusions.

Hmm, I was told here by another LW user that the closest thing humans have to truth is consensus.

Somewhere there is a disconnect between your post and much of the consensus, at least in practice, of LW users.

From my understanding, Mr. Yudkowsky has two separate but linked interests: rationality, which predominates in his writings and blog posts, and designing AI, which is his interaction with SIAI. While I disagree with their particular approach (or lack thereof), I can see how it is rational to pursue both simultaneously toward similar ends.

I would argue that rationality and AI are really the same project at different levels and different stated outcomes. Even if an AI never develops, increasing rationality is a good enough goal in and of itself.

I suppose my post was poorly worded. Yes, in this case Ω is the reference set for possible world histories.

What I was referring to was the baseline of w as an accurate measure. It is a normalizing reference, though not a set.

The main problem I have always had with this is that the reference set is "actual world history" when in fact that is the exact thing that observers are trying to decipher.

We all realize that there is in fact an "actual world history"; however, if it were known, then this wouldn't be an issue. Using it as a reference set therefore seems spurious for all practical purposes.

The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition.

I think that summatio... (read more)
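For concreteness, here is a minimal sketch of the exchange being quoted, with a made-up six-element Ω; I(w) and J(w) are the cells of each agent's partition containing the actual world history w:

```python
# A sketch of two agents exchanging their information about w.
# Omega, the partitions, and w are all hypothetical.
Omega = {1, 2, 3, 4, 5, 6}
I = [{1, 2, 3}, {4, 5, 6}]    # agent 1's information partition
J = [{1, 4}, {2, 5}, {3, 6}]  # agent 2's information partition

def cell(partition, w):
    """Return the cell of the partition containing world history w."""
    return next(block for block in partition if w in block)

w = 2  # the actual world history
# After each agent tells the other its cell, both know w is in I(w) & J(w):
shared = cell(I, w) & cell(J, w)
print(shared)  # {2}: their new common information
```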

0janos
Huh? The reference set Ω is the set of possible world histories, out of which one element is the actual world history. I don't see what's wrong with this.

1) In the pursuit of truth, you must always be on the lookout for the motive force of the resource-seeking that hinges on not finding the truth.

I think this sums up the "follow the money" axiom quite nicely.

0CronoDAS
Indeed?

There is a fantastic 24-part CBC podcast called How to Think About Science (mp3 format here). It interviews 24 different research scientists and philosophy-of-science experts on the history of the scientific process, historical trends, and the role of science in society. It is well worth the time to listen to.

I have found that the series confirms what scientists have known already: researchers rarely behave differently as a group than any other profession, yet they are presented as an unbiased, objective, homogeneous group by mo... (read more)

0Morendil
I vigorously second the recommendation for How to Think About Science. EDIT: removed the acronym (HTTAS). Sometimes trying to save time results in a net loss... :(

In no way do I think that the parapsychologists have good hypotheses or reasonable claims. I am also a firm adherent of the ethos that extraordinary claims must have extraordinary proofs. However, to state the following:

one in which the null hypothesis is always true.

is making a bold statement about your level of knowledge. You are going so far as to say that there is no possible way there are yet-to-be-described hypotheses which could be understood through the methodology of this particular subgroup. This exercise seems to me to be rejectin... (read more)
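The statistical point at stake can be made concrete: in a field where the null hypothesis is always true, roughly 5% of studies still clear p < 0.05 by chance. A minimal simulation sketch, with illustrative numbers only:

```python
# Simulate many "studies" of a nonexistent effect and count how many
# reach nominal significance anyway. All numbers are illustrative.
import random
import statistics

random.seed(0)
n_studies, n_subjects = 1000, 30
false_positives = 0

for _ in range(n_studies):
    # each study samples a group whose true effect is exactly zero
    data = [random.gauss(0, 1) for _ in range(n_subjects)]
    t = statistics.mean(data) / (statistics.stdev(data) / n_subjects ** 0.5)
    if abs(t) > 2.05:  # ~two-sided p < 0.05 at 29 degrees of freedom
        false_positives += 1

print(false_positives / n_studies)  # ~0.05: "significant" results, no effect
```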

1AllanCrossman
OK. But the point about what we can conclude about regular science stands even if this is only mostly correct.

I have never seen a parapsychology study, so I will go look for one. However does every single study have massive flaws in it?

Damien Broderick's Outside the Gates of Science summarizes a number of parapsychology studies, noting that several of the studies do indeed seem quite solid. It doesn't come to any definite conclusion over whether psi phenomena are actually real or if there's just something wrong with our statistical techniques, but it does seem like there might be enough to warrant more detailed study. See also e.g. Ben Goertzel's review of the ... (read more)

6Blueberry
This is exactly the point. Parapsychology is one of the very few things we can reject intuitively, because we understand the world well enough to know that psychic powers just can't exist. We can reject them even when proper analysis doesn't indicate that they're wrong, which tells us something about the limitations of analysis. ETA: Essentially, if the scientific method can't reject parapsychology, that means the scientific method isn't strong enough, not that parapsychology might be legitimate.
2billswift
This http://www.susanblackmore.co.uk/Articles/si87.html isn't a study, it's Susan Blackmore's article discussing 10 years of research attempting to demonstrate psi phenomena.
1CronoDAS
Not really. The ones that aren't flawed are either negative or not replicable, though.

See my response here

You want to consider the utility of the terrorists, at the appropriate level of detail.

Huh? Yes it will. You mean "you will still find it undesirable and/or hard for you to understand".

What are the units for expected utility? How do you measure them? Can you graph my utility function?

I can look at people's behavior and say that on this day Joe bought 5 apples and 4 oranges, and on that day he bought 2 kiwis, 2 apples, and no oranges, etc., but that data doesn't reliably forecast his expected utility for oranges. There are so many ... (read more)

efficient markets quite by definition are allowing greater progress along individual value scales than inefficient markets, though not necessarily as much progress as some further refinement

Inefficient markets are great for increasing the individual wealth of certain groups. I think Rothbard would disagree with the second point (regulation), as would I.

In short, I, and much of the modern profession of economics, hold little attachment to the origins of economic theory (though I am surprised that you didn't include Smith's Wealth of Nations in your list,

... (read more)

The description you gave of economic theory completely ignores the origins of micro and macro economics, price theory and comparative economics.

The assumptions that underlie these disciplines are normative.

Steve Levitt's finding that the availability of abortion caused a lagged decrease in crime.

Actually that is descriptive statistics. Just as I pointed out before - economics without normative conclusions is statistics.

Doubtful, but in your undergrad you might have read one of the following:

Adam Smith's Theory of Moral Sentiments

John Maynard Keynes' Ge... (read more)

0Technologos
However interesting the question of origins in economics is, I was under the impression that we were talking about how it currently works, not how it was conceptualized decades and centuries ago. I'd be fascinated to hear why you thought it doubtful that I've read those books (most happen to be included in the University of Chicago's core reading requirement, and Friedman and Rothbard are obviously connected to the school); perhaps I'm insufficiently aware of the quality of economics education elsewhere. It's just that none of those are being used in modern research, with the exception of Friedman's technical papers--and the modern foundations of economics do not depend on them in the slightest. See e.g. Gary Becker's 1962 paper which discusses how even the normal assumption of rationality on the part of consumers is unnecessary to the basic functioning of markets.

As an important note, the efficient functioning of markets, while often spoken of as a terminal value, has never in my experience actually been other than instrumental: efficient markets quite by definition are allowing greater progress along individual value scales than inefficient markets, though not necessarily as much progress as some further refinement (like regulation).

I suspect, however, that we are just using different definitions of the subject. It seems that you are primarily interested in the economics of public policy and in the value judgments that drive it. Indeed, you define out the very kind of economics that is most prevalent in modern departments (Berkeley perhaps excepted): mathematical models that seek to understand and predict how humans will act.

In short, I, and much of the modern profession of economics, hold little attachment to the origins of economic theory (though I am surprised that you didn't include Smith's Wealth of Nations in your list, being more directly foundational for economics through the 19th century). If you really wanted to get into it, economics goes back to

Economics can conclude "If you want X then you should do Y".

This is what economists are trying to do now. Yet implicit in their advice are normative economic principles that comprise the set list of X: full employment, lower inflation, lower taxes, higher revenue, etc. Obviously whoever wants X is normatively seeking a solution. As a result the analysis must be normative as well; it is implicit in the formulation.

The economists themselves may have no feelings one way or another, but they are using economic and statistical principles toward normat... (read more)

0wedrifid
I can mostly agree with you. How one chooses to apply a discipline is inevitably normative. This leaves only a slight difference in how we describe the process, which side of the definition we put the 'normative' on. I share that frustration. Economists in particular should be expected to be able to trace how the motives play out through a system. That more or less is microeconomics.

Murder can increase utility in the economist's utility function

That is really immaterial, though, and computationally moot. OK, so his "utility function" is negative. Is that it? Is that the difference? Besides, I would argue that reevaluating it on those terms does a poor job of actually describing motivation in a coherent set.

Yet murdering is a net negative in the ethicist's utility function.

It isn't in the economist's? These things aren't neutral.

The broader aspect that economists seek is normative. You said it yourself in the economists a... (read more)

1wedrifid
No. His utility from murder is greater than his utility from not-murder. Cops describe this as 'motive'. Yes one can. It is much like chemistry. We can say "GDP should be increased" just as we can say "electricity should be produced". But it is better to just let chemistry say "if you put a plate of lead peroxide and a plate of lead metal in sulfuric acid you can generate electricity", and much the same for economics. If you want to. But if your intention is to understand or predict the behaviours of terrorists you don't want to consider that 'aggregate worldwide utility formula'. That's useless. You want to consider the utility of the terrorists, at the appropriate level of detail. Huh? Yes it will. You mean "you will still find it undesirable and/or hard for you to understand".

As I asked in response to your other argument: Who has given utility this new definition?

I think perhaps there is a disconnect between the origins of utilitarianism and how people who are not economists (even some economists) understand it.

You as well as black belt bayesian are making the point that utilitarianism as used in an economic sense is somehow non-ethics-based, which could not be more incorrect, as utilitarianism was explicitly developed with goal-seeking behavior in mind, stated by Bentham as greatest hedonic happiness. It was not derived as a ... (read more)

1Technologos
I did separate undergrad degrees in Economics and Philosophy precisely because they speak to different questions. Economics tells us what is possible; philosophy tells us which outcome we should choose. I think you were closer to the mark when you said I would simply drop the normative part.

Economics certainly attempts to describe the world, and in so doing it offers conclusions such as wedrifid's: to get X, do Y. The lack of the normative part can be seen in e.g. Steve Levitt's finding that the availability of abortion caused a lagged decrease in crime. Is Levitt arguing that abortion should be more extensive? No--just describing the relationship.

In the same way, the actual science of economics doesn't say X is good, whether X is the natural rate of (un)employment, free trade, public goods, whatever. Individual economists certainly do espouse these positions, for a variety of reasons. The actual claims of the science, however, take the form "X will allow people to move higher on their individual value scales." Whether these individual scales are "right" or even whether they are the only consideration in moral decision-making is the concern of philosophy, not economics.
0wedrifid
No it isn't. It is just economics that is less irritating. Economics can conclude "If you want X then you should do Y". This is obviously most useful for people who have consequentialist ethics and happen to desire Y but these preferences are best considered to be attributes (probably) held by the economists and not by economics itself. A trap I have noticed some economists falling into is reasoning "Z is something that probably will happen therefore Z is something that should happen". This tends to invoke my contempt, particularly since it is seldom applied consistently.
0Matt_Simpson
Note that allowing a murderer to, well, murder, improves his economic welfare - it increases his economic utility. Yet murdering is a net negative in the ethicist's utility function. Economics makes normative claims because economists typically have some relatively uncontroversial normative assumptions - like maximizing economic welfare is a good thing. This is by and large true, but see my counter example above. Also, economists aren't trying to prove that the values they assume are the correct ones. They are assuming certain ethical values and proposing policies that maximize these values. The two types of utility functions look very similar - the math is the same, both describe goal seeking behavior, etc., but the difference is the preference sets that each describe. Murder can increase utility in the economist's utility function, but not in the ethicist's (under normal circumstances).

Any AGI will have all the dimensions required for human-level or greater intelligence. If it is indeed smarter, then it will be able to figure the theory out itself if the theory is obviously correct, or find a way to get it in a more efficient manner.

2DanArmak
Well, maybe the theory is inobviously correct. The AI called EY because it's stuck while trying to grow, so it hasn't achieved its full potential yet. It should be able to comprehend any theory a human EY can comprehend; but I don't see why we should expect it to be able to independently derive any theory a human could ever derive in their lifetimes, in (small) finite time, and without all the data available to that human.

I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.

So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.

2DanArmak
Intelligence isn't a magical single-dimensional quality. It may be generally smarter than EY, but not have the specific FAI theory that EY has developed.
0[anonymous]
Not necessarily. It may well be programmed with limitations that prevent it from creating solutions that it desires. Examples include: * It is programmed to not recursively improve beyond certain parameters. * It is programmed to be law abiding or otherwise restricted in actions in a way such that it can not behave in a consequentialist manner. In such circumstances it will desire things to happen but desire not to be the one doing them. Eliezer may well be useful then. He could, for example, create another AI with supplied theory. (Or have someone whacked.)
1Madbadger
It's a seed AGI in the process of growing. Whether "Smarter than Yudkowsky" => "Can resolve own problems" is still an open problem 8-).

[P]resent only one idea at a time.

Most posts do present one idea at a time. However it may not seem like it because most of the ideas presented are additive - that is, you have to have a fairly good background on topics that have been presented previously in order to understand the current topic. OB and LW are hard to get into for the uninitiated.

To provide more background and context, with the necessarily larger numbers of ideas being presented, while still getting useful feedback from readers.

That is what the sequences were designed to do - give the background needed.

it just takes the understanding that five lives are, all things being equal, more important than four lives.

Your examples rely too heavily on "intuitively right" and ceteris paribus conditioning. It is not always the case that five are more important than four and the mere idea has been debunked several times.

if people agree to judge actions by how well they turn out general human preference

What is the method you use to determine how things will turn out?

similarity can probably make them agree on the best action even without complete a

... (read more)

You know the Nirvana fallacy and the fallacy of needing infinite certainty before accepting something as probably true? How the solution is to accept that a claim with 75% probability is pretty likely to be true, and that if you need to make a choice, you should choose based on the 75% claim rather than the alternative? You know how if you refuse to accept the 75% claim because you're virtuously "waiting for more evidence", you'll very likely end up just accepting a claim with even less evidence that you're personally biased towards?

Morality work... (read more)
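A minimal worked version of the quoted 75% point, with hypothetical symmetric stakes; the only assumption is that a choice is forced:

```python
# Expected value of acting on a 75%-likely claim vs. its alternative.
# Stakes are made up; any symmetric payoff gives the same ordering.
p = 0.75
payoff_if_right, payoff_if_wrong = 1.0, -1.0

ev_accept = p * payoff_if_right + (1 - p) * payoff_if_wrong  # +0.5
ev_reject = (1 - p) * payoff_if_right + p * payoff_if_wrong  # -0.5
print(ev_accept, ev_reject)
# "Waiting for more evidence" is not on the menu when the choice is forced;
# the 75% claim wins the comparison.
```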

3Technologos
If there is literally nothing distinguishing the two scenarios except for the number of people--you have no information regarding who those people are, how their life or death will affect others in the future (including the population issues you cite), their quality of life or anything else--then it matters not whether it's 5 vs. 4 or a million vs. 4. Adding a million people at quality of life C or preventing their deaths is better than the same with four, and any consequentialist system of morality that suggests otherwise contains either a contradiction or an arbitrary inflection point in the value of a human life.

The utility monster citation is fascinating because of a) how widely it diverges from all available evidence about human psychology, both with diminishing returns and the similarity of human valences, b) how much improved the thought experiment is by substituting "human" (a thing whose utility I care about) for "monster" (for which I do not), and c) how straightforward it really seems: if it were really the case that there were something 100 times more valuable than my life, I certainly ought to sacrifice my life for that, if I am a consequentialist.

I'll ignore the assumption made by the second article that human population growth is truly exponential rather than logistic. It further assumes--contrary to the utility monster, I note--that we ought to be using average utilitarianism. Even then, if all things were equal, which the article stipulates they are not, more humans would still be better. The article is simply arguing that that state of affairs does not hold, which may be true. Consequentialism is, after all, about the real world, not only about ceteris paribus situations.
0Pavitra
Bayes' rule. Of course not, don't make straw men. Consensus is simply the best indicator of rightness we know of so far.

The economist's utility function is not the same as the ethicist's utility function

According to whom? Are we just redefining terms now?

As far as I can tell your definition is the same as Bentham's, only implying rules that bind the practitioner more weakly.

I think someone started (incorrectly) using the term and it has taken hold. Now a bunch of cognitive dissonance is fancied up to make it seem unique because people don't know where the term originated.

1Matt_Simpson
The economist wants to predict human behavior. This being the case, the economist is only interested in values that someone actually acts on. The 'best' utility function for an economist is the one that completely predicts all actions of the agent of interest. Capturing the agent's true values is subservient to predicting actions. The ethicist wants to come up with the proper course of action, and thus doesn't care about prediction. The difference between the two is normativity. Human psychology is complicated. Buried deep inside is some set of values that we truly want to maximize. When it comes to everyday actions, this set of values need not be relevant for predicting our actual behavior.
2Technologos
See my reply and the following comments for the distinction. The economist's utility function is ordinal; the ethicist's is cardinal.
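A minimal sketch of the ordinal point Technologos makes: any strictly increasing transformation of an ordinal utility function predicts exactly the same choices, so only the ranking is real. The utilities below are made up:

```python
# Ordinal utility is unique only up to monotone transformation:
# u, log(u), and u**3 all pick the same option.
import math

u = {"apples": 5.0, "oranges": 3.0, "kiwis": 1.0}  # hypothetical utilities

def choice(utility):
    return max(utility, key=utility.get)

v = {k: math.log(x) for k, x in u.items()}  # strictly increasing transform
w = {k: x ** 3 for k, x in u.items()}       # another one

print(choice(u), choice(v), choice(w))  # same choice each time: apples
# A cardinal (ethicist's) function treats the *sizes* of the gaps as
# meaningful, so u and log(u) would no longer be interchangeable.
```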

This is a problem for both those who'd want to critique the concept, and for those who are more open-minded and would want to learn more about it.

Anyone who is sufficiently technically minded undoubtedly finds frustration in reading books which give broad-brush counterfactual accounts of decision making and explanation without delving into the details of their processes. I am thinking of books like Freakonomics, Paradox of Choice, Outliers, Nudge, etc.

These books are very accessible but lack the in-depth analysis which is expected to be thoroughly cri... (read more)

4righteousreason
I found the two SIAI introductory pages very compelling the first time I read them. This was back before I knew what SIAI or the Singularity really was, as soon as I read through those I just had to find out more.

Thus if we want to avoid being arbitraged, we should cleave to expected utility.

Sticking with expected utility works in theory if you have a discrete set of options, can discriminate between all of them well enough that they can be judged on equal terms, and the cost (in time or whatever) is not greater than the marginal gain from the process. Here is an example I like: go to the supermarket and optimize your expected utility for breakfast cereal.

The money pump only works if your "utility function" is static, or more accurately, if your prefer... (read more)
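For readers who haven't seen a money pump, here is a minimal sketch, with a made-up fee and a fixed cyclic preference A > B > C > A; note that it hard-codes exactly the static preferences the comment above says the pump requires:

```python
# An agent with cyclic (non-transitive) preferences pays a small fee for
# each "upgrade" and cycles back to where it started, minus money.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # each option beats the next
fee = 1.0

holding, money = "C", 10.0
for _ in range(6):  # the pump can run as long as the cycle holds
    for better, worse in prefers:
        if worse == holding:
            holding, money = better, money - fee  # agent pays to trade up
            break

print(holding, money)  # back to "C" every three trades, money strictly down
```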

-1Stuart_Armstrong
This sounds like an even better reason to use expected utility! If you have ignorance about your preferences, then you should reduce the amount of other unknowns, and hence simplify your decision theory to expected utility.

This might have something to do with how public commitment may be counterproductive: once you've effectively signaled your intentions, the pressure to actually implement them fades away.

I was thinking about this today in the context of Kurzweil's future predictions, and I wonder if there is some overlap. Obviously Kurzweil is not designing the systems he is predicting, but the people who are designing them will likely read his predictions.

I wonder, if they see the timelines that he predicts, whether they will potentially think: "oh, w... (read more)

As I replied to Tarleton, the Not for the Sake of Happiness (Alone) post does not address how he came to his conclusions based on specific decision-theoretic optimization. He gives very loose subjective terms for his conclusions:

The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.

which is why I worded my question as I did the first time. I don't think he has done the same amount of thinking on his epistemology as he has on his TDT.

Yes, I remember reading both and scratching my head, because both seemed to beat around the bush and not address the issues explicitly. Both lean too much on addressing the subjective aspect of non-utility-based calculations, which in my mind is a red herring.

Admittedly, I should have referenced it, and perhaps the issue has been addressed as well as it will be. I would rather see this become a discussion, as in my mind it is more important than any of the topics dealt with daily here - however, that may not be appropriate for this particular thread.

3CronoDAS
"Preference satisfaction utilitarianism" is a lot closer to Eliezer's ethics than hedonic utilitarianism. In other words, there's more important things to maximize than happiness.

You'll have to forgive me, because I am an economist by training and mentions of utility have very specific references to Jeremy Bentham.

Your definition of what the term "maximizing utility" means and the Bentham definition (he was the originator) are significantly different. If you don't know what it is, then I will describe it (if you do, sorry for the redundancy).

Jeremy Bentham devised the felicific calculus, which is a hedonistic philosophy and seeks as its defining purpose to maximize happiness. He was of the opinion that it was possible in theory t... (read more)

0Psy-Kosh
Okay, I was talking about utility maximization in the decision theory sense, i.e., computations of expected utility, etc. As far as happiness being The One True Virtue, well, that's been explicitly addressed. Anyways, "maximize happiness above all else" is explicitly not it. And utility, as discussed on this site, is a reference to the decision-theoretic concept. It is not a specific moral theory at all. Now, the stuff that we consider morality would include happiness as a term, but certainly not as the only thing. Virtue ethics, as you describe it, gives me an "eeew" reaction, to be honest. It's the right thing to do simply because it's what you were optimized for? If I somehow bioengineer some sort of sentient living weapon thing, is it actually the proper moral thing for that being to go around committing mass slaughter? After all, that's what it's "optimized for"...

Ha, fair enough.

I often see references to maximizing utility and individual utility functions in your writing, and it would seem to me (unless I am misinterpreting your use) that you are implying that hedonic (felicific) calculation is the optimal way to determine what is correct when applying counterfactual outcomes to optimizing decision making.

I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making was through utilitarianism as opposed to, say, ethical egoism or virtue ethics (not to equivocate). Or perhaps your reference is purely abstract and does not invoke the felicific calculation.

1Nick_Tarleton
See Not For The Sake of Happiness (Alone). See The "Intuitions" Behind "Utilitarianism" for a partial answer.

Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?

Worded differently, how have you come to the conclusion that "maximizing utility" is the optimized goal as opposed to say virtue seeking?

4Eliezer Yudkowsky
...very little, you know me, I usually just wing that epistemology stuff... (seriously, could you expand on what this question means?)
2Psy-Kosh
*blinks* I'm curious as to what it is you are asking. A utility function is just a way of encoding and organizing one's preferences/values. Okay, there're a couple additional requirements like internal consistency (if you prefer A to B and B to C, you'd better prefer A to C) and such, but other than that, it's just a convenient way of talking about one's preferences. The goal isn't "maximize utility", but rather "maximizing utility" is a way of stating what it is you're doing when you're working to achieve your goals. Or did I completely misunderstand?
0RobinZ
Those two books look excellent ... but I don't see how they are relevant to this philosophical question. Both appear to discuss the problem of justifying inconsistent theories, not justifying an inconsistent universe. I think it is perfectly obvious that a superior and consistent theory would still be preferred by either of these philosophers.
1Alicorn
"Inconsistencies" in the enactment of politics aren't real contradictions. If this is the kind of example you find relevant, I must have no idea at all what you're talking about.

Inconsistency is a general, powerful case of having reason to reject something. Inconsistency brings with it the guarantee of being wrong in at least one place.

I would agree if the laws of the universe or the system, political or material, were also consistent and understood completely. I think history shows us clearly that there are few laws which, under enough scrutiny, are consistent in their known form - hence exogenous variables and stochastic processes.

2Alicorn
We don't have to understand the universe completely to be very confident that it contains no contradictions. If the laws as we understand them are not self-consistent, then we have reason to reject them - we just might, until we have better alternatives, have stronger reason to keep them around.

I looked into that, but it lacks the database support that would be desired for this project. With LW owning the XML or PHP database, closest-match algorithms can be built which optimize meeting locations for particular members.

That said, if the current LW developer wants to implement this I think it would at least be a start.

I thought so too - however not in the implementation that I think is most user friendly.

0khafra
Or developer-friendly, at any rate--but I must admit, frappr's AFLAX interface isn't the most stable on Linux. St. Petersburg here, so I'm excited about hearing from Mr. Vassar in Sarasota, Orlando, and possibly Tampa.

I am currently working on a Google Maps API application which will allow LW/OB readers to add their location, hopefully encouraging those around them to form their own meetups. That might also make determining the next Singularity Summit location easier.

If there are any PHP/MySQL programmers who want to help, I could definitely use some.
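The author asks for PHP/MySQL help, but the core closest-match logic is small in any language. A minimal sketch in Python, with hypothetical readers and coordinates:

```python
# Given reader coordinates pulled from the database, find a member's
# nearest fellow reader by great-circle distance. Data is made up.
import math

readers = {
    "alice": (51.5074, -0.1278),  # London
    "bob": (40.7128, -74.0060),   # New York
    "carol": (51.7520, -1.2577),  # Oxford
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

me = readers["alice"]
nearest = min((n for n in readers if n != "alice"),
              key=lambda n: haversine_km(me, readers[n]))
print(nearest)  # carol: the natural meetup candidate
```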

0olimay
Neat idea! I missed this first time around, and so posted a comment on the open thread asking about creating some central regular-meetup info repo. Same general desire to connect people with meetups. I know a little PHP (can create forms, WordPress themes/plugins and such, but I use other languages more often). Let me know how it's going and if I can be of any use.
0whpearson
Not google maps but http://www.frappr.com/ would do.
0Vladimir_Nesov
Sounds like something that likely has already been implemented somewhere.

Perhaps this could be expanded to be a Q&A with the people the readers agree could comparably elucidate on all matters rationality/AGI, such as Wei Dai and Nesov, rather than a single person.

To me it gives a broader perspective and has the added benefit of eliminating any semblance of cultishness, despite Mr. Yudkowsky's protests about such a following.

Would it be inappropriate to put this list somewhere on the Less Wrong Wiki?

I think that would be great if we had a good repository of mind games.

6PeerInfinity
Ok, I went ahead and added the list to the wiki: http://wiki.lesswrong.com/wiki/Puzzle_Game_Index Unless anyone objects, I plan to continue adding more games to this list.

I think a lot of it has to do with your experience with computer-based games and web applications.

This is why I say it would have to be a controlled study: those with significant computer and gaming experience have a distinct edge over those who do not. For example, many gamers would automatically go to the WASD control pattern (which is what some first-person shooters use) on the "alternate control" level.

5:57:18 with 15 deaths here

A few months ago I stumbled upon a game wherein the goal is to guide an elephant from one side of the screen to a pipe; perhaps you have seen it:

This is the only level

Here's the rub: The rules change on every level. In order to do well you have to be quick to change your view of how the new virtual world works. That takes a flexible mind and accurate interpretation of the cues that the game gives you.

I sent this to some of my colleagues and have concluded anecdotally that their mental flexibility is in rough correlation with their results from the game. I ... (read more)

0taryneast
Thanks for that one... Extremely addictive and fun :)
7PeerInfinity
While I'm at it, here are links to a bunch of other games that require some degree of thinking outside the box and adapting to changing rules:

Factory Balls and Factory Balls 2 - Easy. Each level introduces new puzzle pieces, but no dramatic changes in the rules. The solutions are all inside the box, once you figure out what the rules are.

Aether - Easy. Requires solving some puzzles without any hints what the puzzle is or what a solution would look like. Solvable just by trying random things until something happens.

Duck, think outside the flock - Medium. A series of puzzles, each of which has different rules.

me and the key - Medium. A series of puzzles, each of which has different rules.

Electric Box - Medium. Each level introduces new puzzle pieces, but no dramatic changes in the rules. The solutions are all inside the box, but you have to figure out how to put them together.

Dynamic Systems - Medium. Each level introduces new puzzle pieces, but no dramatic changes in the rules. The solutions are all inside the box, but you have to figure out how to put them together.

Casual Gameplay Escape - Hard! A series of puzzles, connected by other puzzles, each of which has different rules, and most of which have a counterintuitive solution. Hints about the solutions are cleverly hidden in the game. Hint: Gur cevagfperra ohggba vf lbhe sevraq. (rot13'd)

Take Something Literally - Hard. A series of puzzles, each of which has a deliberately counterintuitive, and often malevolent, solution. Don't worry if you can't solve all of them; some of the solutions require specific computer hardware or software to win.

The Impossible Quiz and The Impossible Quiz 2 - Almost impossible. A series of quiz questions and other challenges that have deliberately counterintuitive solutions. Some of the quiz questions are solvable only by trial and error. Some of the challenges require extremely fast reflexes. Many of the puzzles are blatantly evil. Do not expect to win this. You have been warned.
1SilasBarta
Okay, I completed it without any help (didn't read the comments). My stats are: 15:03:50, Deaths: 85. Should I be proud of myself? ETA: Some of them I didn't even understand how the rules were different, I just manipulated the elephant well enough to get it to the end.
1Morendil
Well, I'm stuck at "Time for a refresh" for the moment. I'll have to sleep on it, I guess. ETA: my kids took over, and naturally they just breezed right through "Time to refresh", I didn't even have time to notice how. I got my revenge when they got stuck at "Credit page", my favorite of all. We got through the whole thing in 35 minutes. Quite fun, though not quite the kind of "serious game" I have in mind above.
1taw
It's a brilliant game. Are there any others like it?

I probably came off as more "anticapitalist" or "collectivist" than I really am, but the point is important: betraying your partners has long-term consequences which aren't apparent when you only look at the narrow version of this game.

This is actually the real meaning of "selfishness." It is in my own best interest to do things for the community.

The mantras of collectivists and anti-capitalists seem to either miss or ignore the fact that greedy people aren't really acting in their own best interest if they are making enemies in the process.

With mechanical respiration, survival with ALS can be indefinitely extended.

What a great opportunity to start your transhuman journey (that is, if you indeed are a transhumanist). Admittedly these are not the circumstances you or anyone would have chosen, but here we are nonetheless.

If you decide to document your process, then I look forward to watching your progression out of organic humanity. I think it is people like you who have both the impetus and the knowledge to really show how transhuman technology can bolster our society.

Cheers!

Upon reading that link (which I imagine is now fairly outdated?), his theory falls apart under the weight of its coercive nature, as the questioner points out.

It is understood that the impact of an AI will be on all of humanity, regardless of its implementation, if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link talks about, which implies that the decisions the AI would make would favor a "utility" calculation (Spare me the argument about utilons; as an economis... (read more)

"Utilons" are a stand-in for "whatever it is you actually value"

Of course - which makes them useless as a metric.

we tend to support decision making based on consequentialist utilitarianism

Since you seem to speak for everyone in this category - how did you come to the conclusion that this is the optimal philosophy?

Thanks for the link.

Maybe I'm just dense, but I have been around a while and searched, yet I haven't stumbled upon a top-level post or anything of the like here, from FHI or SIAI (other than ramblings about what AI could theoretically give us), on OB, or otherwise, which either breaks it down or gives a general consensus.

Can you point me to where you are talking about?

3timtyler
Probably the median of such discussions was on http://www.sl4.org/ Machines will probably do what they are told to do - and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus. We have some books on the topic: Moral Machines: Teaching Robots Right from Wrong - Wendell Wallach; Beyond AI: Creating The Conscience Of The Machine - J. Storrs Hall ...and probably hundreds of threads - perhaps search for "friendly" or "volition".

I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.

If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?

Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which demonstrates that most contributors are hedonistic consequentialist utilitarians.... (read more)

0timtyler
The topic of what the goals of the AI should be has been discussed an awful lot. I think the combination of moral philosopher and machine intelligence expert must be appealing to some types of personality.
1CronoDAS
"Utilons" are a stand-in for "whatever it is you actually value". The psychological state of happiness is one that people value, but not the only thing. So, yes, we tend to support decision making based on consequentialist utilitarianism, but not hedonistic consequentialist utilitarianism. See also: Coherent Extrapolated Volition