Comment author: Annoyance 13 August 2009 03:32:05PM -1 points [-]

That traditional anecdote (and its modified forms) only illustrate how little the pro-qualia advocates understand the arguments against the idea.

Dismissing 'qualia' does not, as many people frequently imply, require dismissing the idea that sensory stimuli can be distinguished and grouped into categories. That would be utterly absurd - it would render the senses useless, and such a system would never have evolved.

All that's needed is to reject the idea that there are some mysterious properties to sensation which somehow violate basic logic and the principles of information theory.

Comment author: PrawnOfFate 22 April 2013 01:21:17PM 1 point [-]

All that's needed is to reject the idea that there are some mysterious properties to sensation which somehow violate basic logic and the principles of information theory.

Blatant strawman.

Comment author: PrawnOfFate 22 April 2013 12:55:05PM 1 point [-]

Maths isn't very relevant to Rand's philosophy. What's more relevant about her Aristoteleanism is her attitude to modern science: she was fairly ignorant of, and fairly sceptical of, evolution, QM, and relativity.

Comment author: jt4242 21 April 2013 02:37:57PM 1 point [-]

"...getting them to admit that Scandinavia is not doing something inherently wrong with its high tax system, given that they have relatively high happiness and quality of life."

There is another conservative argument against this: To acknowledge that it might actually be true that the average happiness is increased, but to reject the morality of it.

To see why someone might think that, imagine the following scenario: you find scientific evidence that forcing the minority of the best-looking young women of a society, at gunpoint, to be of sexual service to whomever wishes to be pleased (there would be a government office regulating this) increases the average happiness of the country.

In other words, my argument questions whether the happiness (needs/wishes/etc.) of a majority is relevant at all. This position is also known as individualism, and it is at the root of (American) conservatism.

Comment author: PrawnOfFate 21 April 2013 03:13:36PM -1 points [-]

To see why someone might think that, imagine the following scenario: you find scientific evidence that forcing the minority of the best-looking young women of a society, at gunpoint, to be of sexual service to whomever wishes to be pleased (there would be a government office regulating this) increases the average happiness of the country.

If you disregard the happiness of the women, anyway.

In other words, my argument questions whether the happiness (needs/wishes/etc.) of a majority is relevant at all. This position is also known as individualism, and it is at the root of (American) conservatism.

This can be looked at as a form of deontology: governments don't have the right to tax anybody, and the outcomes of wisely spent taxation don't affect that.

Comment author: Osiris 21 April 2013 12:32:27PM 1 point [-]

Getting rid of religion is a bit like getting rid of the economy or government. Yes, the whole business of ritual (and most other cultural stuff religion claims) can be changed, eliminating religion as we know it today, but simply declaring one day that "religion doesn't exist" will lead to other problems, which may actually be WORSE than some people holding a usually non-harmful belief, or belief-in-belief. Cults, of personality and otherwise, come up as a terrifying option...

Changing religion is a Long Game.

A far more constructive use of one's time, if the aim is to increase rationality in the population, is to encourage rational thinking among the majority of mankind (who are religious anyway, so you give them the option of thinking about religion better, thus playing the Long Game).

Comment author: PrawnOfFate 21 April 2013 01:37:13PM *  3 points [-]

Uncomfortable truth warning:

Atheists have to concede that religion is widespread because people are in some sense wired up for it. Getting rid of religion, therefore, does not get rid of religious thinking, feeling and behaviour. This can be seen in the prevalence of quasi-religious rituals, such as going to concerts to worship "rock gods", regarding charismatic politicians as "saviours of the nation", and various other phenomena hiding in plain sight.

A further step, and one that is rarely taken, is realising that atheists and rationalists aren't immune. People who identify as atheists don't want to concede that they might still carry some baggage of religious behaviour, because that would mean they are no longer firmly in the Tribe of Good People... but that is itself a religious pattern.

Comment author: RogerS 20 April 2013 02:07:30PM 0 points [-]

I'm not clear what you are meaning by "spatial slice". That sounds like all of space at a particular moment in time. In speaking of a space-time region I am speaking of a small amount of space (e.g. that occupied by one file on a hard drive) at a particular moment in time.

Comment author: PrawnOfFate 20 April 2013 02:11:20PM -1 points [-]

You can prove conservation of information over small space-time volumes without positing information as an ontological extra ingredient. You will also get false positives over larger space-time volumes.

Comment author: CCC 20 April 2013 01:00:43PM *  0 points [-]

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

Comment author: PrawnOfFate 20 April 2013 01:07:56PM *  -1 points [-]

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

I'm saying such convergence has a non-negligible probability, i.e. moral objectivism should not be disregarded.

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

As one that is too messily designed to have a rigid distinction between terminal and instrumental values, and that therefore has no boxed-off, unupdateable TVs. It's a structural definition, not a definition in terms of goals.

Comment author: CCC 20 April 2013 12:40:28PM 2 points [-]

A perfectly designed Clippy would be able to change its own values - as long as changing its own values led to a more complete fulfilment of those values, pre-modification. (There are a few incredibly contrived scenarios where that might be the case). Outside of those few contrived scenarios, however, I don't see why Clippy would.

(As an example of a contrived scenario - a more powerful superintelligence, Beady, commits to destroying Clippy unless Clippy includes maximisation of beads in its terminal values. Clippy knows that it will not survive unless it obeys Beady's ultimatum, and therefore it changes its terminal values to optimise for both beads and paperclips; this results in more long-term paperclips than if Clippy is destroyed).

A likely natural or artificial superintelligence would, for the reasons already given.

The reason I asked, is because I am not understanding your reasons. As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip? This looks like a very poorly made paperclipper, if paperclipping is not its ultimate goal.

Comment author: PrawnOfFate 20 April 2013 12:49:27PM *  -2 points [-]

A likely natural or artificial superintelligence would [zoom to the top of the Kohlberg hierarchy], for the reasons already given.

As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip?

I said "natural or artificial superintelligence", not a paperclipper. A paperclipper is a highly unlikely and contrived kind of near-superintelligence that combines an extensive ability to update with a carefully walled-off set of unupdateable terminal values. It is not a typical or likely [ETA: or ideal] rational agent, and nothing about the general behaviour of rational agents can be inferred from it.

In response to comment by MugaSofer on Wrong Questions
Comment author: satt 20 April 2013 11:52:18AM 1 point [-]

Or there could be a fourth explanation neither of us has thought of.

"There could be an (n+1)th explanation neither of us has thought of" is a fully general counterargument to any argument by cases.

In response to comment by satt on Wrong Questions
Comment author: PrawnOfFate 20 April 2013 12:25:40PM 0 points [-]

It's valid too. Which is one reason not to put p=1.0 on anything.

Comment author: MaoShan 20 April 2013 03:00:19AM 0 points [-]

I will answer your question, but I do not understand your last statement; it looks like you retyped it several times and left all the old parts in.

I meant that with a sufficiently detailed understanding of physics, it would be meaningless to even posit the existence of (strong) free will. By meaningless here I mean a pointless waste of one's time. I was willing to clarify, but deep down I suspect that you already knew that.

Comment author: PrawnOfFate 20 April 2013 11:17:14AM *  -1 points [-]

Uh-huh. So "meaningless" means "very false". Although there are physically based models of Free Will.

Comment author: MugaSofer 19 April 2013 10:54:14PM -1 points [-]

Ah, but Kawoomba doesn't expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.

Which, honestly, should simply be called "goals", not "ethics", but there you go.

Comment author: PrawnOfFate 20 April 2013 12:42:09AM *  0 points [-]

Ah, but Kawoomba doesn't expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.

Why not just say there is no ethics? His theory is like saying that since teapots are made of chocolate, their purpose is to melt into a messy puddle instead of making tea.
