UFAI is about singletons. If you have an AI society whose members compare notes and share information -- which is instrumentally useful for them anyway -- you reduce the probability of singleton fooming.
And is that valid or not? If you can validly decide some systems are better than others, you are some of the way to deciding which is best.
I'm increasingly baffled as to why AI is always brought into discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren't sophisticated enough to live in their own societies. A wireheading AI isn't even going to be able to survive "in the wild". If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren't good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying "Ah, but Clippy..."...?
Isn't the idea of moral progress based on one reference frame being better than another?
At last, an interesting reply!
Arbitrary and biased are value judgments.
And they're built into rationality.
Whether more than one non-contradictory value system exists is the topic of the conversation, isn't it?
Non-contradictoriness probably isn't a sufficient condition for truth.
You are the monarch in that society, you do not need to guess which role you're being born into, you have that information. You don't need to make all the slaves happy to help your goals, you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
For what value of "best"? If the CI is the correct theory of morality, it will necessarily give you the morally best result. Maybe your complaint is that it wouldn't maximise your persona...
How many 5 year olds have the goal of Sitting Down With a Nice Cup of Tea?
However, both the pebble-sorters and myself share one key weakness: we cannot examine ourselves from the outside; we can't see our own source code.
Being able to read all your source code could be the ultimate in self-reflection (absent Loeb's theorem), but it doesn't follow that those who can't read their source code can't self-reflect at all. It's just imperfect, like everything else.
This is where you are confused. Almost certainly it is not the only confusion. But here is one:
Values are not claims. Goals are not propositions. Dynamics are not beliefs.
A machine that maximises paperclips can believe all true propositions in the world, and go on maximising paperclips. Nothing compels it to act any differently. You expect that rational agents will eventually derive the true theorems of morality. Yes, they will. Along with the true theorems of everything else. It won't change their behaviour, unless they are built so as to send those actio...
First of all, thanks for the comment. You have really motivated me to read and think about this more.
That's what I like to hear!
If there are no agents to value something, intrinsically or extrinsically, then there is also nothing to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I'm not convinced that there is objective truth in intrinsic or moral values.
But there is no need for morality in the absence of agents. When agents are there, values will be there, when agents are not there, the absence ...
But is the justification for its global applicability that "if everyone lived by that rule, average happiness would be maximized"?
Well, no, that's not Kant's justification!
That (or any other such consideration) itself is not a mandatory goal, but a chosen one.
Why would a rational agent choose unhappiness?
If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy, other than the Categorical Imperative.
Yes, but that wouldn't count as ethics. You ...
Yeah, honestly, I've never seen the exact distinction between goals which have an ethics-rating and goals which do not.
A number of criteria have been put forward. For instance, do as you would be done by. If you don't want to be murdered, murder is not an ethical goal.
...My problem with "objectively correct ethics for all rational agents" is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could exist, and its very existence would contradict some "'rational' corre
Spun off from what, and how?
I am not sure I can explain that succinctly at the moment. It is also hard to summarise how you get from counting apples to transfinite numbers.
Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?
Rationality is not an automatic process, it is a skill that has to be learnt and consciously applied. Indivi...
Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we've discovered rather than produced. We judge claims about nonobserved mathematical constructs like transfinites according to those systems,
But claims about transfinites don't correspond directly to any object. Maths is "spun off" from other facts, on your view. So, by analogy, moral realism could be "spun off" without needing any Form of...
Since no claim has a probability of 1.0, I only need to argue that a clear majority of rational minds converge.
I haven't seen anything to say that is for meta discussion, it mostly isn't de facto, and I haven't seen a "take it elsewhere" notice anywhere as an alternative to downvote and delete.
All that's needed is to reject the idea that there are some mysterious properties to sensation which somehow violate basic logic and the principles of information theory.
Blatant strawman.
Banning all meta discussion on LW of any kind seems like an increasingly good idea - in terms of it being healthy for the community, or rather, meta of any kind being unhealth
Have you considered having a separate "place" for it?
But being able to handle criticism properly is a very important rational skill. Those who feel they cannot do it need to adjust their levels of self-advertisement as rationalists accordingly.
Maths isn't very relevant to Rand's philosophy. What's more relevant about her Aristoteleanism is her attitude to modern science; she was fairly ignorant, and fairly sceptical, of evolution, QM, and relativity.
Is unpleasantness the only criterion? Nobody much likes criticism, but it is hardly rational to disregard it because you don't like it.
I should've said, "updatable terminal goals".
You can make the evidence compatible with the theory of terminal values, but there is still no support for the theory of terminal values.
a successful rationalist organisation should be right up at the zero end of the scale
Because everyone knows that reversed stupidity is the best form of rationality ever.
Here are some guidelines for the new ultra-rational community to follow:
" values are nothing to do with rationality"=the Orthogonality Thesis, so it's a step in the argument.
To see why someone might think that, imagine the following scenario: you find scientific evidence for the fact that forcing the minority of the best-looking young women of a society at gunpoint to be of sexual service to whomever wishes to be pleased (there will be a government office regulating this) increases the average happiness of the country.
If you disregard the happiness of the women, anyway
...In other words, my argument questions that the happiness (needs/wishes/etc.) of a majority is at all relevant. This position is also known as individ
I don't get it: any agent that fooms becomes superintelligent. Its values don't necessarily change at all, nor does its connection to its society.