Maths isn't very relevant to Rand's philosophy. What's more relevant about her Aristotelianism is her attitude to modern science; she was fairly ignorant of, and fairly sceptical of, evolution, QM, and relativity.
"...getting them to admit that Scandinavia is not doing something inherently wrong with its high tax system, given that they have relatively high happiness and quality of life."
There is another conservative argument against this: to acknowledge that it might actually be true that average happiness is increased, but to reject the morality of it.
To see why someone might think that, imagine the following scenario: you find scientific evidence that forcing the minority of the best-looking young women of a society at gunpoint to be of sexual service to whomever wishes to be pleased (there will be a government office regulating this) increases the average happiness of the country.
In other words, my argument questions whether the happiness (needs/wishes/etc.) of a majority is at all relevant. This position is also known as individualism and is at the root of (American) conservatism.
To see why someone might think that, imagine the following scenario: you find scientific evidence that forcing the minority of the best-looking young women of a society at gunpoint to be of sexual service to whomever wishes to be pleased (there will be a government office regulating this) increases the average happiness of the country.
If you disregard the happiness of the women, anyway.
In other words, my argument questions whether the happiness (needs/wishes/etc.) of a majority is at all relevant. This position is also known as individualism and is at the root of (American) conservatism.
This can be looked at as a form of deontology: governments don't have the right to tax anybody, and the good outcomes of wisely spent taxation don't change that.
Getting rid of religion is a bit like getting rid of the economy or government. Yes, the whole business of ritual (and most other cultural stuff religion claims) can be changed, eliminating religion as we know it today, but simply declaring one day that "religion doesn't exist" will lead to other problems, which may actually be WORSE than some people holding a usually non-harmful belief, or belief-in-belief. Cults, of personality and otherwise, come up as a terrifying option...
Changing religion is a Long Game.
A far more constructive use of one's time, if the aim is to increase rationality in the population, is to encourage rational thinking among the majority of mankind (who are religious anyway, so this gives them the option of thinking about religion better, thus playing the Long Game).
Uncomfortable truth warning:
Atheists have to concede that religion is widespread because people are, in some sense, wired for it. Getting rid of religion, therefore, does not get rid of religious thinking, feeling, and behaviour. This can be seen in the prevalence of quasi-religious rituals, such as going to concerts to worship "rock gods", regarding charismatic politicians as "saviours of the nation", and various other phenomena hiding in plain sight.
A further step, and one that is rarely taken, is realising that atheists and rationalists aren't immune. People who identify as atheists don't want to concede that they might still carry some baggage of religious behaviour, because that would mean they are no longer firmly in the Tribe of Good People... but that is itself a religious pattern.
I'm not clear on what you mean by "spatial slice". That sounds like all of space at a particular moment in time. In speaking of a space-time region, I am speaking of a small amount of space (e.g. that occupied by one file on a hard drive) at a particular moment in time.
You can prove conservation of information over small space-time volumes without positing information as an ontological extra ingredient. You will also get false positives over larger space-time volumes.
So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?
How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?
So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?
I'm saying such convergence has a non-negligible probability, i.e. moral objectivism should not be disregarded.
How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?
As one that is too messily designed to have a rigid distinction between terminal and instrumental values, and that therefore has no boxed-off, unupdateable terminal values. It's a structural definition, not a definition in terms of goals.
A perfectly designed Clippy would be able to change its own values - as long as changing its own values led to a more complete fulfilment of those values, pre-modification. (There are a few incredibly contrived scenarios where that might be the case). Outside of those few contrived scenarios, however, I don't see why Clippy would.
(As an example of a contrived scenario - a more powerful superintelligence, Beady, commits to destroying Clippy unless Clippy includes maximisation of beads in its terminal values. Clippy knows that it will not survive unless it obeys Beady's ultimatum, and therefore it changes its terminal values to optimise for both beads and paperclips; this results in more long-term paperclips than if Clippy is destroyed).
A likely natural or artificial superintelligence would, for the reasons already given.
The reason I asked is that I don't understand your reasons. As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip? This looks like a very poorly made paperclipper, if paperclipping is not its ultimate goal.
A likely natural or artificial superintelligence would, [zoom to the top of the Kohlberg hierarchy] for the reasons already given.
As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip?
I said "natural or artificial superintelligence", not a paperclipper. A paperclipper is a highly unlikely and contrived kind of near-superintelligence that combines an extensive ability to update with a carefully walled-off set of unupdateable terminal values. It is not a typical or likely [ETA: or ideal] rational agent, and nothing about the general behaviour of rational agents can be inferred from it.
Or there could be a fourth explanation neither of us has thought of.
"There could be an (n+1)th explanation neither of us has thought of" is a fully general counterargument to any argument by cases.
It's valid too. Which is one reason not to put p=1.0 on anything.
I will answer your question, but I do not understand your last statement; it looks like you retyped it several times and left all the old parts in.
I meant that with a sufficiently detailed understanding of physics, it would be meaningless to even posit the existence of (strong) free will. By meaningless here I mean a pointless waste of one's time. I was willing to clarify, but deep down I suspect that you already knew that.
Uh-huh. So "meaningless" means "very false". Although there are physically based models of Free Will.
Ah, but Kawoomba doesn't expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.
Which, honestly, should simply be called "goals", not "ethics", but there you go.
Ah, but Kawoomba doesn't expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.
Why not just say there is no ethics? His theory is like saying that since teapots are made of chocolate, their purpose is to melt into a messy puddle instead of making tea.
That traditional anecdote (and its modified forms) only illustrates how little the pro-qualia advocates understand the arguments against the idea.
Dismissing 'qualia' does not, as many people frequently imply, require dismissing the idea that sensory stimuli can be distinguished and grouped into categories. That would be utterly absurd: it would render the senses useless, and such a system would never have evolved.
All that's needed is to reject the idea that there are some mysterious properties of sensation which somehow violate basic logic and the principles of information theory.
Blatant strawman.