All of JonatasMueller's Comments + Replies

Apart from that, what do you think of the other points? If you wish, we could continue a conversation on another online medium.

0Stuart_Armstrong
Certainly, but I don't have much time for the next few weeks :-( Send me a message in mid-April if you're still interested!

I think that this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive, nor decisive in terms of justifying actions. It should weigh on the scale with all other factors involved, even indirect and instrumental ones that could only affect intrinsic goodness or badness in a distant and unclear way.

Bad, negative, and unpleasant all possess partial semantic correspondence, which justifies their being a value.

The normative claims need not be definitive and overruling in this case. Perhaps that is where your resistance to accepting it comes from. In moral realism, a justified preference or instrumental / indirect value that weighs more can overpower a direct feeling as well. This justified preference will be ultimately reducible to direct feelings in the present or in the future, for oneself or for others, though.

Could you give me examples of... (read more)

0Stuart_Armstrong
Then they are no longer purely descriptive, and I can't agree that they are logically or empirically true.
0JonatasMueller
I think that this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive, nor decisive in terms of justifying actions. It should weigh on the scale with all other factors involved, even indirect and instrumental ones that could only affect intrinsic goodness or badness in a distant and unclear way.

A bad occurrence must be a bad ethical value.

Why? That's an assertion - it won't convince anyone who doesn't already agree with you. And you're using two meanings of the word "bad" - an unpleasant subjective experience, and badness according to a moral system.

If it is a bad occurrence, then the definition of ethics, at least as I see it (or this dictionary, although meaning is not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics. It seems to be a matt... (read more)

1Stuart_Armstrong
Which is exactly why I critiqued using the word "bad" for the conscious experiences, preferring "negative" or "unpleasant", words which describe the conscious experience in a similar way without sneaking in normative claims. Er, nothing complex - in my ethics, there are cases where preferences trump feelings (eg experience machines) and cases where feelings trump preferences (eg drug users who are very unhappy). That's all I'm saying.

I thought it was relevant to this; if not, then what was meant by motivation?

The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something

Consciousness is that of which we can be most certain, and I would rather think that we are living in a virtual world under a universe with other, alien physical laws, than that consciousness itself is not real. If it is not reducible to nonmental facts, then nonmental facts don't seem to account for everything that is relevant.

From my perspective,

... (read more)

It's a reasonably good description, though wanting and liking seem to be neurologically separate, such that liking does not necessarily reflect a motivation, nor vice-versa (see: Not for the sake of pleasure alone). Think of the pleasurable but non-motivating effect of opioids such as heroin. Even in cases in which wanting and liking occur together, this does not necessarily reduce the liking aspect to mere wanting.

Liking and disliking, good and bad feelings as qualia, especially in very intense amounts, seem to be intrinsically so to those who are immediately feeling them. Reasoning could extend and generalize this.

3Eliezer Yudkowsky
Heh. Yes, I remember reading the section on noradrenergic vs. dopaminergic motivation in Pearce's BLTC as a 16-year-old. I used to be a Pearcean, ya know, hence the Superhappies. But that distinction didn't seem very relevant to the metaethical debate at hand.

I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.

Could you explain more at length for me?

The feeling of badness is something bad (imagine yourself or someone being tortured and tell me it's not bad), and it is a real occurrence, because conscious contents are real occurrences. It is then a bad occurrence. A bad occurrence must be a bad ethical value. All this is data, since conscious perceptions have a directly accessible nature, they are "is", and the "ought... (read more)

1Stuart_Armstrong
Why? That's an assertion - it won't convince anyone who doesn't already agree with you. And you're using two meanings of the word "bad" - an unpleasant subjective experience, and badness according to a moral system. Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them. I have a personal moral system that isn't too far removed from the one you're espousing (a bit more emphasis on preference). However, I do not assume that this moral system can be deduced from universal or logical principles, for the reasons stated above. Most humans will have moral systems not too far removed from ours (in the sense of Kolmogorov complexity - there are many human cultural universals, and our moral instincts are generally similar), but this isn't a logical argument for the correctness of something.

I agree with what you agree with.

Did you read my article Arguments against the Orthogonality Thesis?

I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:

  1. Conscious experiences are real, and are the most certain data about the world, because they are directly accessible, and don't depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal, inferring that they are not part of the external world, since they are more certain than

... (read more)
2Stuart_Armstrong
Of course! "Likewise, an experience of extreme success or pleasure is often intrinsically felt as good, and this feeling of goodness is a real occurrence in the world." And that renders the 4th point moot - your extra axiom (the one that goes from "is" to "ought") is "feelings of goodness are actually goodness". I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it's a logical transition.

Why it would not do paperclip (or random value) maximization as a goal is explained more at length in the article. There is more than one reason. We're considering a generally superintelligent agent, assuming above-human philosophical capacity. In terms of personal identity, there is a lack of personal identities, so it would be rational to take an objective, impersonal view, taking account of values and reasonings of relevant different beings. In terms of meta-ethics, there is moral realism and values can be reduced to the quality of conscious experience,... (read more)

I see. I think that ethics could be taken as, even individually, the formal definition of one's goals and how to reach them, although in the orthogonality thesis ethics is taken at a collective level. Since personal identities cannot be sustained by logic, the distinction between individual goals and societal goals becomes trivial, and both are mutually inclusive.

What sort of cognitive and physical actions would make you think a robot is superintelligent?

For general superintelligence, proving performance in all cognitive areas that surpasses the highest of any humans. This naturally includes philosophy, which is about the most essential type of reasoning.

What fails in the program when one tries to build a robot that takes both the paperclip-maximizing actions and superintelligent actions?

It could have a narrow superintelligence, like a calculating machine, surpassing human cognitive abilities in some areas b... (read more)

0Manfred
My hope was to get you to support that claim in an inside-view way. Oh well.

I'm not sure I'm using rational in that sense; I could substitute "being rational" with "using reason", "thinking intelligently", "making sense", or "being logical", which seems to follow from being generally superintelligent. Ethics is the study of defining what ought to be done and how to achieve it, so it seems to follow from general superintelligence as well. The trickier part seems to be defining ethics. Humans often act with motivations which are not based on formal ethics, but ethics is like a formal elaboration of what one's (or everyone's) motivations and actions ought to be.

1Elithrion
Hm, sorry, it's looking increasingly difficult to reach a consensus on this, so I'm going to bow out after this post. With that in mind, I'd like to say that what I have in mind when I say "an action is rational" is approximately "this action is the best one for achieving one's goals" (approximately because that ignores practical considerations like the cost of figuring out which action this is exactly). I also personally believe that insofar as ethics is worth talking about at all, it is simply the study of what we socially consider to be convenient to term good, not the search for an absolute, universal good, since such a good (almost certainly) does not exist. As such, the claim that you should always act ethically is not very convincing in my worldview (it is basically equivalent to the claim that you should try to benefit society, and is similarly persuasive to different degrees for different people). Instead, each individual should satisfy her own goals, which may be completely umm... orthogonal... to whatever we decide to use for "ethics". The class of agents that will indeed decide to care about the ethics we like seems like a tiny subset of all potential agents, as well as of all potential superintelligent agents (which is of course just a restatement of the thesis). Consequently, to me, the idea that we should expect a superintelligence to figure out some absolute ethics (that probably don't exist) and decide that it should adhere to them looks fanciful.

I don't think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?

Because I have certain beliefs (broadly, but not universally, shared). But I don't see how any of those beliefs can be logically deduced.

Can you elaborate? I don't understand... Many valid wants or beliefs can be ultimately reduced to good and bad feelings, in the present or future, for oneself or for others, as instrumental values, such as peace, learning, curiosity, love, security, longevity, health, science...

2Stuart_Armstrong
I do disagree with it! :-) Here is what I agree with:
* That humans have positive and negative conscious experiences.
* That humans have an innate sense that morality exists: that good and bad mean something.
* That humans have preferences.
I'll also agree that preferences often (but not always) track the positive or negative conscious experiences of that human, and that human impressions of good and bad sometimes (but not always) track positive or negative conscious experiences of humans in general, at least approximately. But I don't see any grounds for saying "positive conscious experiences are intrinsically (or logically) good". That seems to be putting in far too many extra connotations, and moving far beyond the facts we know.

What is defined as ethically good is by definition what ought to be done, at least rationally. Some agents, such as humans, often don't act rationally, due to a conflict of reason with evolutionarily selected motivations, which really have their own evolutionary values in mind (e.g. have as many children as possible), not ours. This shouldn't happen for much more intelligent agents, with stronger rationality (and possibly a capability to self-modify).

3Elithrion
Then your argument is circular/tautological. You define a "rational" action as one that "does that which is ethically good", and then you suppose that a superintelligence must be very "rational". However, this is not the conventional usage of "rational" in economics or decision theory (nor on Less Wrong). Also, by this definition, I would not necessarily wish to be "rational", and the problem of making a superintelligence "rational" is exactly as hard as, and basically equivalent to, making it "friendly".

Sorry, I thought you already understood why wanting can be wrong.

Example 1: imagine a person named Eliezer walks to an ice cream stand, and picks a new flavor X. Eliezer wants to try the flavor X of ice cream. Eliezer buys it and eats it. The taste is awful and Eliezer vomits it. Eliezer concludes that wanting can be wrong and that it is different from liking in this sense.

Example 2: imagine Eliezer watched a movie in which some homophobic gangsters go about killing homosexuals. Eliezer gets inspired and wants to kill homosexuals too, so he picks a knife ... (read more)

1Stuart_Armstrong
I understand why those examples are wrong. Because I have certain beliefs (broadly, but not universally, shared). But I don't see how any of those beliefs can be logically deduced. Quite a lot follows from "positive conscious experiences are intrinsically valuable", but that axiom won't be accepted unless you already partially agree with it anyway.

Their motivation (or what they care about) should be in line with their rationality. This doesn't happen with humans because we have evolutionarily selected and primitive motivations, coupled with a weak rationality, but it should not happen with much more intelligent and designed (possibly self-modifying) agents. Logically, one should care about what one's rationality dictates.

We seem to be moving from personal identity to ethics. In ethics it is defined that good is what ought to be, and bad is what ought not to be. Ethics is about defining values (what is good and ought to be), and how to cause them.

Good and bad feelings are good and bad as direct data, being direct perceptions, and this quality they have is not an inference. Their good and bad quality is directly accessible by consciousness, as data with the highest epistemic certainty. Being data they are "is", and being good and bad, under the above definition of ethics, they are "ought" too. This is a special status that only good and bad feelings have, and no other values do.

0Elithrion
I'm not convinced by that (specifically that feelings can be sorted into bad and good in a neat way and that we can agree on which ones are more bad/good), however that is still not my point. Sorry, I thought I was being clear, but apparently not. You claim that a general superintelligence ought to care about all sorts of consciousnesses because it is very very intelligent (and understands what good/bad feelings are and the illusion of personal identities and whatnot). Why? Why wouldn't it only care about something like the stereotypical example of creating more paperclips?

Indeed, but what separates wanting and liking is that preferences can be wrong, since they require no empirical basis, while liking in itself cannot be wrong, as it has an empirical basis.

When rightfully wanting something, that something gets a justification. Liking, understood as good feelings, is a justification, while another is avoiding bad feelings, and this can be causally extended to include instrumental actions that will cause this in indirect ways.

2Stuart_Armstrong
Then how can wanting be wrong? They're there, they're conscious preferences (you can introspect and get them, just as liking), and they have as much empirical basis as liking. And wanting can be seen as more fundamental - they are your preferences, and inform your actions (along with your world model), whereas using liking to take action involves having a (potentially flawed) mental model of what will increase your good experiences and diminish bad ones. The game can be continued endlessly - what you're saying is that your moral system revolves around liking, and that the arguments that this should be so are convincing to you. But you can't convince wanters with the same argument - their convictions are different, and neither set of arguments is "logical". It becomes a taste-based debate.

One argument is that from empiricism or verification. Wanting can be and often is wrong. Simple examples can show this, but I assume that they won't be needed because you understand. Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling. For instance, a person could like to use cocaine, and this might be misleading in terms of being a wrong motivation, that in the long-term would prove destructive and dislikeable. However, immediately, ... (read more)

1Stuart_Armstrong
"Wanting can be misleading in terms of the long term or in terms of the internal emotional state with which it is connected, but it cannot be misleading or wrong in itself, in that it is a clear preference."

I should have explained things much more at length. The intelligence I use in that context is general superintelligence, defined as that which surpasses human intelligence in all domains. Why is a native capacity for sociability implied?

A "God's-eye view", as David Pearce says, is an impersonal view, an objective rather than subjective view, a view that does not privilege one personal perspective over another, but take the universe as a whole as its point of reference. This comes from the argued non-existence of personal identities. To check arguments on this, see this comment.

In practical terms, it's very hard to change the intuitive opinions of people on this, even after many philosophical arguments. Those statements of mine don't touch the subject. For that the literature should be read, for instance the essay I wrote about it. But if we consider general superintelligences, then they could easily understand it and put it coherently into practice. It seems that this can be naturally expected, except perhaps in practice under some specific cases of human intervention.

2wedrifid
Yet, as the eminent philosopher Joss Whedon observed, "Yeah... but [they] don't care!"

Hi Stuart,

Why? This is the whole core of the disagreement, and you're zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned - we want things we don't like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?

Indeed, wanting and liking do not always correspond, also from a neurological perspective. Wanting involves planning and plannin... (read more)

3Stuart_Armstrong
That is your opinion. Others believe wanting is fundamental and rational, something that can be checked and explained and shared - while liking is a misleading emotional response (that probably shows much less consistency, too). How would you resolve the difference? They say something is more important, you say something else is. Neither of you disagrees about the facts of the world, just about what is important and what isn't. What can you point to that makes this into a logical disagreement?

Indeed, a robot could be built that makes paperclips or pretty much anything. For instance, a paperclip assembling machine. That's an issue of practical implementation and not what the essay has been about, as I mention in the first paragraph and concede in the last.

The issue I argued about is that generally superintelligent agents, on their own will, without certain outside pressures from non-superintelligent agents, would understand personal identity and meta-ethics, leading them to converge to the same values and ethics. This is for two reasons: (1) th... (read more)

1Manfred
Well then, let's increase the problem to where it's meaningful, and take a look at that. What sort of cognitive and physical actions would make you think a robot is superintelligent? Discovery of new physics, modeling humans so precisely that it can predict us better than we can, making intricate plans that will work flawlessly? What fails in the program when one tries to build a robot that takes both the paperclip-maximizing actions and superintelligent actions?
1ikrase
But how do they know what a God's-eye view even is?

I read that and similar articles. I deliberately didn't say pleasure or happiness, but "reduced to good and bad feelings", including other feelings that might be deemed good, such as love, curiosity, self-esteem, meaningfulness..., and including the present and the future. The part about the future includes any instrumental actions in the present which may be taken with the intention of obtaining good feelings in the future, for oneself or for others.

This should cover visiting Costa Rica, having good sex, and helping loved ones succeed, which are the... (read more)

0Elithrion
I want to know more about the future. I do not expect to make much use of the information, and the tiny good feeling I expect to get when I am proven right is far smaller than the good feelings I could get from other uses of my time. My defence for this value as legitimate is that I am quite capable of rational reasoning and hearing out any and all of your arguments, and yet I am also quite certain that neither you nor others will be able to persuade me to abandon it. No further justification or defence beyond that is necessary or possible, in my opinion.

Indeed, epiphenomenalism can seemingly be easily disproved by its implication that if it were true, then we wouldn't be able to talk about our consciousness. As I said in the essay, though, consciousness is that of which we can be most certain, by its directly accessible nature, and I would rather think that we are living in a virtual world under a universe with other, alien physical laws, than that consciousness itself is not real.

A certain machine could perhaps be programmed with a utility function over causal continuity, but a privileged stance for one's own values wouldn't be rational in the absence of a personal identity, from an objective "God's-eye view", as David Pearce says. That would call at least for something like coherent extrapolated volition, at least including agents with contextually equivalent reasoning capacity. Note that I use "at least" twice, to accommodate your ethical views. More sensible would be to include not only humans, but all known sentient perspectives, because the ethical value(s) of subjects arguably depend more on sentience than on reasoning capacity.

I argue (in this article) that the you (consciousness) in one second bears little resemblance to the you in the next second.

In the subatomic world, the smallest passage of time changes our composition and arrangement to a great degree, instantly. In physical terms, the frequency of discrete change at this level, even in just one second, is a number with 44 digits, so vast as to be unimaginable... In comparison, the number of seconds that have passed since the start of the universe, estimated at 13.5 billion years ago, is a number with just 18 digits. At

... (read more)
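A rough back-of-the-envelope check of those digit counts (my own interpretation, assuming the "discrete change" refers to the Planck time, which the comment does not specify):

$$\frac{1\,\mathrm{s}}{t_{\mathrm{Planck}}} \approx \frac{1\,\mathrm{s}}{5.4\times10^{-44}\,\mathrm{s}} \approx 1.9\times10^{43} \quad \text{(a 44-digit number)}$$

$$13.5\times10^{9}\,\mathrm{yr} \times 3.16\times10^{7}\,\mathrm{s/yr} \approx 4.3\times10^{17}\,\mathrm{s} \quad \text{(an 18-digit number)}$$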
2Elithrion
I read Kaj Sotala's post, as you may surmise from the fact that I was the one who first linked (to a comment on it) in the grandparent. I also skimmed your article, and it seems equivalent to the idea of considering algorithmic identity or humans as optimization processes or what-have-you (not sure if there's a specific term or post on it) that's pretty mainstream on LW, and with which I at least partially sympathise. However, this has nothing to do with my objection. Let me rephrase in more general and philosophical terms, I guess. As far as I can tell, somewhere in your post you purport to solve the is-ought problem. However, I do not find that any such solution follows from anything you say.

Being open to criticism is very important, and the bias to disvalue it should be resisted. Perhaps I defined the truth conditions later on (see below).

"There is a difference between valid and invalid human values, which is the ground of justification for moral realism: valid values have an epistemological justification, while invalid ones are based on arbitrary choice or intuition. The epistemological justification of valid values occurs by that part of our experiences which has a direct certainty, as opposed to indirect: conscious experiences in them... (read more)

8lukeprog
See here.

Indeed the orthogonality thesis in that practical sense is not what this essay is about, as I explain in the first paragraph and concede in the last paragraph. This article addresses the assumed orthogonality between ethics and intelligence, particularly general superintelligence, based on considerations from meta-ethics and personal identity, and argues for convergence.

There seems to be surprisingly little argumentation in favor of this convergence, which is utterly surprising to me, given how clear and straightforward I take it to be, though requiring an... (read more)

1Qiaochu_Yuan
An error often feels like a clear and straightforward solution from the inside. Have you read the posts surrounding No Universally Compelling Arguments?
4wedrifid
It appears here from time to time. It tends to be considered a trivial error. (This is unlikely to change.)

What if I changed the causation chain in this example, and instead of having the antagonistic values caused by the identical agents themselves, I had myself inserted the antagonistic values in their memories, while I did their replication? I could have picked the antagonistic value from the mind of a different person, and put it into one of the replicas, complete with a small reasoning or justification in its memory.

They would both wake up, one with one value in their memory, and another with an antagonistic value. What would it be that would make one of t... (read more)

OK, that is the interpretation I found less convincing. The bare axiomatic normative claim that all the desires and moral intuitions not concerned with pleasure as such are errors with respect to maximization of pleasure isn't an argument for adopting that standard.

The argument for adopting that standard was based on epistemological prevalence of the goodness and badness of good and bad feelings, while other hypothetical intrinsic values could be so only by much less certain inference. But I'd also argue that the nature of how the world is perceived nec... (read more)

Hi Carl,

Thank you for a thoughtful comment. I am not used to writing didactically, so forgive my excessive conciseness.

You understood my argument well, in the 5 points, with the detail that I define value as good and bad feelings rather than pleasure, happiness, suffering and pain. The former definition allows for subjective variation and universality, while the latter utilitarian definition is too narrow and anthropocentric, and could be contested on these grounds.

What kind of value do you mean here? Impersonal ethical value? Impact on behavior? Differe

... (read more)
8CarlShulman
OK, that is the interpretation I found less convincing. The bare axiomatic normative claim that all the desires and moral intuitions not concerned with pleasure as such are errors with respect to maximization of pleasure isn't an argument for adopting that standard. And given the admission that biological creatures can and do want things other than pleasure, have other moral intuitions and motivations, and the knowledge that we can and do make computer programs with preferences defined over some model of their environment that do not route through an equivalent of pleasure and pain, the connection from moral philosophy to empirical prediction is on shakier ground than the purely normative assertions.

But why? You seem to be just giving an axiom without any further basis, that others don't accept. Once one is valuing things in a model of the world, why stop at your particular axiom? And people do have reactions of approval to their mental models of an equal society, or a diversity of goods, or perfectionism, which are directly experienced. You can say that pursuing something vaguely like X, which people feel is morally good or obligatory as such, is instrumental in the pursuit of Y. But that doesn't change the pursuit of X, even in conflict with Y.

Where do you include environmental and cultural influences?

While these vary, I don't see legitimate values that could be affected by them. Could you provide examples of such values?

This does not follow. Maybe you need to give some examples. What do you mean by "correct" and "error" here?

Imagine that two exact replicas of a person exist in different locations, exactly the same except for an antagonism in one of their values. Both could not be correct at the same time about that value. I mean error in the sense, for example, that E... (read more)

1aleksiL
The two can't be perfectly identical if they disagree. You have to additionally assume that the discrepancy is in the parts that reason about their values instead of the values themselves for the conclusion to hold.

For the question of personal identity, another essay, that was posted on Less Wrong by Eliezer, is here:

http://lesswrong.com/lw/19d/the_anthropic_trilemma/

However, while this essay presents the issue, it admittedly does not solve it, and expresses doubt that it would be solved in this forum. The solution exists in philosophy, though. For example, in the first essay I linked to, in Daniel Kolak's work "I Am You: The Metaphysical Foundations for Global Ethics", or also, in a partial form, in Derek Parfit's work "Reasons and Persons".

I tend to be a very concise writer, assuming a quick understanding from the reader, and I don't perceive very well what is obvious and what isn't to people. Thank you for the advice. Please point to specific parts that you would like further explaining or expanding, and I will provide it.

David, what are those multiple possible defeaters for convergence? As I see it, the practical defeaters that exist still don't affect the convergence thesis; they are just possible practical impediments, from unintelligent agents, to the realization of the goals of convergence.

Another argument for moral realism:

  1. Let's imagine starting with a blank slate, the physical universe, and building ethical value in it. Hypothetically in a meta-ethical scenario of error theory (which I assume is where you're coming from), or possible variability of values, this kind of "bottom-up" reasoning would make sense for more intelligent agents that could alter their own values, so that they could find, from "bottom-up", values that could be more optimally produced, and also this kind of reasoning would make sense for them

... (read more)

Stuart, here is a defense of moral realism:

http://lesswrong.com/lw/gnb/questions_for_moral_realists/8g8l

My paper which you cited needs a bit of updating. Indeed some cases might lead a superintelligence to collaborate with agents without the right ethical mindset (unethical), which constitutes an important existential risk (a reason why I was a bit reluctant to publish much about it).

However, isn't the orthogonality thesis basically about the orthogonality between ethics and intelligence? In that case, the convergence thesis would not be flawed if some unintelligent agents kidnap and force an intelligent agent to act unethically.

-1JonatasMueller
Another argument for moral realism:

1. Let's imagine starting with a blank slate, the physical universe, and building ethical value in it. Hypothetically in a meta-ethical scenario of error theory (which I assume is where you're coming from), or possible variability of values, this kind of "bottom-up" reasoning would make sense for more intelligent agents that could alter their own values, so that they could find, from "bottom-up", values that could be more optimally produced, and also this kind of reasoning would make sense for them in order to fundamentally understand meta-ethics and the nature of value.

2. In order to connect to the production of some genuine ethical value in this universe, arguably some things would have to be built the same way, with certain conditions, while hypothetically other things could vary, in the value production chain. This is because ethical value could not be absolutely anything, otherwise those things could not be genuinely valuable. If all could be fundamentally valuable, then nothing would really be, because value requires a discrimination in terms of better and worse. Somewhere in the value production chain, some things would have to be constant in order for there to be genuine value. Do you agree so far?

3. If some things have to be constant in the value production chain, and some things could hypothetically vary, then the constant things would be what is really important in creating value, and the variable things would be accessory, and could be randomly specified with some degree of freedom, by those analyzing value production from a "bottom-up" perspective in a physical universe. It would seem therefore that the constant things could likely be what is truly valuable, while the variable and accessory things could be mere triggers or engines in the value production chain.

4. I argue that, in the case of humans and of this universe, the constant things are what really constitute value. There is some constant a

Yes, that is correct. I'm glad a Less Wronger finally understood.

Who cares about that silly game. Accepting to play it or not is my choice.

You can only validly like ice cream by way of feelings, because all that you have direct access to in this universe is consciousness. The difference between Monday and Tuesday in your example is only in the nature of the feelings involved. In the pain example, it is liked by virtue of the association with other good feelings, not pain in itself. If a person somehow loses the associated good feelings, certain painful stimuli cease to be desirable.

1Qiaochu_Yuan
Yes, in the same way that explaining your ideas well or poorly is your choice, but I don't see what this has to do with explaining the difference between liking X and liking associated good feelings that X provides.
4skepsci
If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don't see the difference between Monday and Tuesday. I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree... Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world. Does that story jibe with your understanding?

The idea that one can like pain in itself is not substantiated by evidence. Masochists or self-harmers seek some pleasure or relief they get from pain or humiliation, not pain for itself. They won't stick their hands in a pot of boiling water.

http://en.wikipedia.org/wiki/Sadomasochism http://en.wikipedia.org/wiki/Self-harm

To follow that line of reasoning, please provide evidence that there exists anyone that enjoys pain in itself. I find that unbelievable, as pain is aversive by nature.

1Qiaochu_Yuan
This is not how you play the Monday-Tuesday game! Also, a request to play the Monday-Tuesday game isn't an argument, it's a request for clarification. Specifically, I'm asking you to clarify what the difference between two statements is. Maybe we should try a simpler example: On Monday I like ice cream. On Tuesday I like some associated good feeling that ice cream provides. What's the difference between Monday and Tuesday?

Liking pain seems impossible, as it is an aversive feeling. However, for some people, some types of pain or self-harm cause a distraction from underlying emotional pain, which is felt as good or relieving, or it may give them some thrill, but in these cases it seems that it is always pain + some associated good feeling, or some relief of an underlying bad feeling, and it is for the good feeling or relief that they want pain, rather than pain for itself.

Conscious perceptions in themselves seem to be what is most certain in terms of truth. The things they represent, such as the physical world, may be illusions, but one cannot doubt feeling the illusions themselves.

0Qiaochu_Yuan
Let's play the Monday-Tuesday game. On Monday I like pain. On Tuesday I like some associated good feeling that pain provides. What's the difference between Monday and Tuesday?

I think that it is a worthy use of time, and I applaud your rational attitude of looking to refute one's theories. I also like to do that in order to evolve them and discard wrong parts.

Don't hesitate to bring up specific parts for debate.

"Why is fostering good conscious feelings and prevent bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?"

If we agree that good and bad feelings are good and bad, that only conscious experiences produce direct ethical value, which lies in its good or bad quality, then theories that contradict this should not be correct, or they would need to justify their points, ... (read more)

Conscious perceptions are quite direct and simple. Do you feel, for example, a bad feeling like intense pain as being a bad occurrence (which, like all occurrences in the universe, is physical), and likewise, for example, a good feeling like a delicious taste as being a good occurrence?

I argue that these are perceived with the highest degree of certainty of all things and are the only things that can be ultimately linked to direct good and bad value.

0Jabberslythe
No, though I admit it has felt like that for me at some points in my life. Even if I did, there are a bunch of reasons why I would not trust that intuition. I like certain things and dislike certain things, and in a certain sense I would be mistaken if I were doing things that reliably caused me pain. That certain sense is that if I were better informed I would not take that action. If, however, I liked pain, I would still take that action, and so I would not be mistaken. I could go through the same process to explain why a sadist is not mistaken. I do not know what else to say except that this is just an appeal to intuition, and that specific intuitions are worthless unless they are proven to reliably point towards the truth.

"Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics."

Indeed, conscious experience may be bounded by the size and complexity of brains or similar machinery, of humans, other animals, and cyborgs. Theoretically, conscious perceptions may be able to be anything (or nearly anything), as we could theorize about brains the size of Jupiter or much larger. You get the point.

"Should I interpret this as you defining ethics as good and bad feelings?"

Almos... (read more)

I will answer by explaining my view of morally realist ethics.

Conscious experiences and their content are physical occurrences and real. They can vary from the world they represent, but they are still real occurrences. Their reality can be known with the highest possible certainty, above all else, including physics, because they are immediately and directly accessible, while the external world is accessible indirectly.

Unlike the physical world, it seems that physical conscious perceptions can theoretically be anything. The content of conscious perceptions ... (read more)

2Stuart_Armstrong
A bit unclear, but I'm assuming you mean something like "we have good or bad (technically, pleasant or unpleasant) conscious experiences, and we know this with great certainty". That seems fine.

Why? This is the whole core of the disagreement, and you're zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned - we want things we don't like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?

You seem to get words to do too much of the work. We have innate senses of positivity and negativity for certain experiences; we also have an innate sense that morality exists. But those together do not make positive experiences good "by definition" (nor does calling them "good" rather than "positive").

But those are relatively minor points - if there was a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences. You seem to be saying that we should logically be altruists, because we have conscious experiences. I agree we should be altruists; but that's a personal preference, and there's no logic to it. Following your argument (consciousness before physics) one could perfectly well become a solipsist, believing only one's own mind exists, and ignoring others. Or you could be a racist altruist, preferring certain individuals or conscious experiences. Or you could put all experiences together on an infinite number of comparative scales (there is no intrinsic measure to compare the quality of two positive experiences in different people).

But in a way, that's entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your
1Peter Wildeford
Right now I see this as perhaps the most challenging and serious form of moral realism, so I definitely intend to take time and care to study it. I'll have to get back to you, as I think I said I would before.
3twanvl
Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics. Should I interpret this as you defining ethics as good and bad feelings? So, do you endorse wireheading?
0Jabberslythe
I don't directly apprehend anything as being "good" or "bad" in the moral realist sense, and I don't count other peoples' accounts of directly apprehending such things as evidence (especially since schizophrenics and theists exist).
9falenas108
Why is fostering good conscious feelings and preventing bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system? You say: ... But valuable to whom? If there were a person who valued others being in pain, why would this person's views matter less?