Comment author: [deleted] 20 April 2013 07:00:05PM 0 points [-]

"Rational" is broader than "human" and narrower than "physically possible".

Do you really mean to say that there are physically possible minds that are not rational? In virtue of what are they 'minds' then?

Comment author: PrawnOfFate 21 April 2013 02:12:46AM 1 point [-]

Do you really mean to say that there are physically possible minds that are not rational?

Yes. There are irrational people, and they still have minds.

Comment author: TheOtherDave 20 April 2013 07:53:14PM 0 points [-]

So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.

So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?

No, I don't think we can reasonably say that. Dan Dennett might be a crank, but it takes more than that argument to demonstrate the fact.

Comment author: PrawnOfFate 20 April 2013 08:04:15PM -1 points [-]

So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.

Dennett only thinks the idea of qualia is confused. He has no problem with his own books on consciousness.

So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?

No. He isn't dismissing a whole academic subject, or a sub-field. Just one idea.

Comment author: Desrtopa 20 April 2013 07:16:09PM 2 points [-]

I have no idea what you mean by that. I don't think value systems don't come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from "the ground up", whether it's morality or mortgages.

I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing. We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us. We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.

Create a reasoning engine that doesn't have those ethical systems built into it, and it would have no reason to care about them.

Where is it proven they can be discarded?

You can't build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on "this defies our moral intuitions, therefore it's wrong," and that was never addressed with "moral intuitions don't work that way," then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.

All of them.

That's not an example. Please provide an actual one.

Are you aware that that is basically what every crank says about some other field?

Sure, but it's also what philosophers say about each other, all the time. Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy. Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don't get it. "Most philosophers are incompetent, except for the ones who're sensible enough to see things my way," is a perfectly ordinary perspective among philosophers.

Comment author: PrawnOfFate 20 April 2013 07:51:46PM -1 points [-]

I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing.

But I wasn't saying that. I am arguing that moral claims have truth values that aren't indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.

We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us.

We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn't stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.

We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.

We can see, in reductionist terms, how such entities could converge on a uniform set of truth values. There is nothing non-reductionist about anything I have said. Reductionism does not force one answer to metaethics.

Create a reasoning engine that doesn't have those ethical systems built into it, and it would have no reason to care about them.

Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.

You can't build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on "this defies our moral intuitions, therefore it's wrong," and that was never addressed with "moral intuitions don't work that way," then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.

Please explain why moral intuitions don't work that way.

Please provide some foundations, for anything, that aren't themselves unjustified by anything more foundational.

That's not an example. Please provide an actual one.

You can select one at random, obviously.

Sure, but it's also what philosophers say about each other, all the time.

No, philosophers don't regularly accuse each other of being incompetent... just of being wrong. There's a difference.

Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy.

You are inferring a lot from one example.

Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don't get it. "Most philosophers are incompetent, except for the ones who're sensible enough to see things my way," is a perfectly ordinary perspective among philosophers.

Nope.

Comment author: [deleted] 20 April 2013 05:54:16PM 1 point [-]

The central point is a bit buried.

If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn't buy it.

So, there's some sort of assumption as to what minds are:

I also wish to establish the notion of a mind as a causal, lawful, physical system... [emphasis original]

and an assumption that a suitably diverse set of minds can be described in less than a trillion bits. Presumably the reason for that upper bound is that there are a few Fermi estimates putting the information content of a human brain in the neighborhood of one trillion bits.
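Spelling out the counting argument, on my reading (assuming "specifiable in a trillion bits or less" means describable by a bit-string of length at most $10^{12}$):

$$|M| \;\le\; \sum_{k=0}^{10^{12}} 2^k \;=\; 2^{10^{12}+1} - 1 \;\approx\; 2^{10^{12}}$$

$$\forall m \in M:\, X(m) \;\equiv\; \bigwedge_{m \in M} X(m), \qquad \exists m \in M:\, X(m) \;\equiv\; \bigvee_{m \in M} X(m)$$

The universal claim is a conjunction that fails if any single conjunct fails, and the existential claim is a disjunction that holds if any single disjunct holds -- hence "two to the trillionth chances" either way.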

Of course, if you restrict the set of minds to those with special properties (e.g., human minds), then you might find universally compelling arguments on that basis:

Oh, there might be argument sequences that would compel any neurologically intact human...

From which we get Coherent Extrapolated Volition and friends.

Comment author: PrawnOfFate 20 April 2013 06:12:02PM -1 points [-]

"Rational" is broader than "human" and narrower than "physically possible".

Comment author: TheOtherDave 20 April 2013 04:32:58PM 2 points [-]

it is absurd to characterise the practice of treating everyone the same as a form of bias.

Can you expand on what you mean by "absurd" here?

Comment author: PrawnOfFate 20 April 2013 05:08:42PM 0 points [-]

In the sense of "Nothing is a kind of something" or "atheism is a kind of religion".

Comment author: ciphergoth 20 April 2013 03:08:43PM 0 points [-]

The question of moral realism is AFAICT orthogonal to the Orthogonality Thesis.

Comment author: PrawnOfFate 20 April 2013 03:31:50PM 0 points [-]

A lot of people here would seem to disagree, since I keep hearing the objection that ethics is all about values, and values are nothing to do with rationality.

Comment author: Desrtopa 20 April 2013 02:12:18PM 1 point [-]

Do you think that it's even plausible? Do you think we have any significant reason to suspect it, beyond our reason to suspect, say, that the Invisible Flying Noodle Monster would just reprogram the AI with its noodley appendage?

Comment author: PrawnOfFate 20 April 2013 02:18:45PM -1 points [-]

There are experts in moral philosophy, and they generally regard the question of realism versus relativism (etc.) to be wide open. The "realism -- huh, what, no?!?" response is standard on LW and only on LW. But I don't see any superior understanding on LW.

Comment author: RogerS 20 April 2013 02:07:30PM 0 points [-]

I'm not clear what you mean by "spatial slice". That sounds like all of space at a particular moment in time. In speaking of a space-time region I am speaking of a small amount of space (e.g. that occupied by one file on a hard drive) at a particular moment in time.

Comment author: PrawnOfFate 20 April 2013 02:11:20PM -1 points [-]

You can prove conservation of information over small space-time volumes without positing information as an extra ontological ingredient. You will also get false positives over larger space-time volumes.

Comment author: CCC 20 April 2013 01:00:43PM *  0 points [-]

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

Comment author: PrawnOfFate 20 April 2013 01:07:56PM *  -1 points [-]

So... correct me if I'm wrong here... are you saying that no true superintelligence would fail to converge to a shared moral code?

I'm saying such convergence has a non-negligible probability, i.e. moral objectivism should not be disregarded.

How do you define a 'natural or artificial' superintelligence, so as to avoid the No True Scotsman fallacy?

As one that is too messily designed to have a rigid distinction between terminal and instrumental values, and therefore has no boxed-off, unupdateable terminal values. It's a structural definition, not a definition in terms of goals.

Comment author: CCC 20 April 2013 12:40:28PM 2 points [-]

A perfectly designed Clippy would be able to change its own values - as long as changing its own values led to a more complete fulfilment of those values, pre-modification. (There are a few incredibly contrived scenarios where that might be the case). Outside of those few contrived scenarios, however, I don't see why Clippy would.

(As an example of a contrived scenario - a more powerful superintelligence, Beady, commits to destroying Clippy unless Clippy includes maximisation of beads in its terminal values. Clippy knows that it will not survive unless it obeys Beady's ultimatum, and therefore it changes its terminal values to optimise for both beads and paperclips; this results in more long-term paperclips than if Clippy is destroyed).
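To make the numbers concrete, here's a toy sketch of that comparison (purely illustrative; the payoffs are invented and nothing hinges on them):

```python
# Toy model of the Beady ultimatum. Clippy scores both options by its
# CURRENT, pre-modification value function: expected long-run paperclips.
# All numbers are invented for illustration.

expected_clips_if_refuse = 0        # Beady destroys Clippy; no further clips
expected_clips_if_modify = 10**9    # Clippy survives, optimising beads AND clips

# Self-modification is endorsed only when it wins by the old values --
# the new values are adopted for the sake of the current goal.
if expected_clips_if_modify > expected_clips_if_refuse:
    decision = "add bead-maximisation to terminal values"
else:
    decision = "refuse and be destroyed"

print(decision)
```

Outside such ultimatums, the comparison essentially never favours modification, which is why Clippy normally leaves its values alone.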

A likely natural or artificial superintelligence would, for the reasons already given.

The reason I asked is that I am not understanding your reasons. As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip? This looks like a very poorly made paperclipper, if paperclipping is not its ultimate goal.

Comment author: PrawnOfFate 20 April 2013 12:49:27PM *  -2 points [-]

A likely natural or artificial superintelligence would [zoom to the top of the Kohlberg hierarchy], for the reasons already given.

As far as I can tell, you're saying that a likely paperclipper would somehow become a non-paperclipper out of a desire to do what is right instead of a desire to paperclip?

I said "natural or artificial superinteligence", not a paperclipper. A paperclipper is a highly unlikey and contrived kind of near-superinteligence that combines an extensive ability to update with a carefully walled of set of unupdateable terminal values. It is not a typical or likely [ETA: or ideal] rational agent, and nothing about the general behaviour of rational agents can be inferred from it.
