Comment author: RogerS 19 April 2013 09:26:34PM 0 points [-]

..absent collapse..

Ah, is that so.

But a 4D description of all the changes involved in the copy-and-delete process would be sufficient..

Yes, I can see that that's one way of looking at it.

In fact, your problem would be false positives

I don't think so, since the information I would be comparing in this case (the "file contents") would be just a reduction of the information in two regions of space-time.

Comment author: PrawnOfFate 20 April 2013 12:39:30AM -1 points [-]

I don't think so, since the information I would be comparing in this case (the "file contents") would be just a reduction of the information in two regions of space-time.

And under determinism, all the information in any spatial slice will be reproduced throughout time. Hence the false positives.

Comment author: MugaSofer 19 April 2013 11:02:28PM -2 points [-]

Well, if Prawn knew that they could just tell us and we would be convinced, ending this argument.

More generally ... maybe some sort of social contract theory? It might be stable with enough roughly-equal agents, anyway. Prawn has said it would have to be deducible from the axioms of rationality, implying something that's rational for (almost?) every goal.

Comment author: PrawnOfFate 20 April 2013 12:24:30AM -1 points [-]

Why would Clippy want to hit the top of the Kohlberg Hierarchy?

Well, if Prawn knew that they could just tell us

"The way people sometimes realise their values are wrong...only more efficiently, because it's superintelligent. Well, I'll concede that with care you might be able to design a clippy, by very carefully boxing off its values from its ability to update. But why worry? Neither nature nor our haphazard stabs at AI are likely to hit on such a design. Intelligence requires the ability to update, to reflect, and to reflect on what is important. Judgements of importance are based on values. So it is important to have the right way of judging importance, the right values. So an intelligent agent would judge it important to have the right values."

Comment author: CCC 19 April 2013 09:32:59PM 3 points [-]

Why would a superintelligence be unable to figure that out..why would it not shoot to the top of the Kohlberg Hierarchy ?

Why would Clippy want to hit the top of the Kohlberg Hierarchy? You don't get more paperclips for being there.

Clippy's ideas of importance are based on paperclips. The most important values are those which lead to the acquiring of the greatest number of paperclips.

Comment author: PrawnOfFate 20 April 2013 12:19:27AM *  -1 points [-]

Why would Clippy want to hit the top of the Kohlberg Hierarchy?

"Clippy" meaning something carefully designed to have unalterable boxed-off values wouldn't...by definition.

A likely natural or artificial superintelligence would, for the reasons already given. Clippies aren't non-existent in mind-space..but they are rare, just because there are far more messy solutions there than neat ones. So nature is unlikely to find them, and we are unmotivated to make them.

Comment author: shminux 18 April 2013 04:30:40AM 0 points [-]

Here is the difference: the superstring theory is a reasonably good mathematical model which predicts a spacetime with 10 or 11-dimensions on purely mathematical grounds. It also predicts that particles should come in pairs (quarks+squarks). Despite its internal self-consistency, it's not a good model of the world we live in. Whether mathematicians use the scientific method depends on your definition of the scientific method (a highly contested issue on the relevant wikipedia page). Feel free to give your definition and we can go from there.

Comment author: PrawnOfFate 20 April 2013 12:12:02AM -1 points [-]

Here is the difference: the superstring theory is a reasonably good mathematical model which predicts a spacetime with 10 or 11-dimensions on purely mathematical grounds.

Not quite. More like abstractly physical grounds...combining various symmetry principles from preceding theories.

Despite its internal self-consistency, it's not a good model of the world we live in.

Not quite. It doesn't predict a single world that is different. It predicts a landscape in which our world may, with difficulty, be located.

Comment author: Indon 18 April 2013 10:42:37PM -1 points [-]

If that's the case, and if it is also the case that scientists prefer to use proofs and logic where available (I can admittedly only speak for myself, for whom the claim is true), then I would argue that all scientists are necessarily also mathematicians (that is to say, they practice mathematics).

And, if it is the case that mathematicians can be forced to seek inherently weaker evidence when proofs are not readily available, then I would argue that all mathematicians are necessarily also scientists (they practice science).

At that point, it seems like duplication of work to call what mathematicians and scientists do different things. Rather, they execute the same methodology on usually different subject matter (and, mind you, behave identically when given the same subject matter). You don't have to call that methodology "the scientific method", but what are you gonna call it otherwise?

Comment author: PrawnOfFate 20 April 2013 12:08:35AM 0 points [-]

And, if it is the case that mathematicians can be forced to seek inherently weaker evidence when proofs are not readily available

Their inherently weaker evidence still isn't empirical evidence. Computation isn't intrinsically empirical, because a smart enough mathematician could do it in their head...they are just offloading the cognitive burden.

Comment author: shminux 18 April 2013 09:01:28PM 0 points [-]

When scientists in any field can prove something with just logic, they do. Evidence is the tiebreaker

You have it backwards. Evidence is the only thing that counts. Logic is a tool to make new models, not to test them. Except in mathematics, where there is no way to test things experimentally.

Comment author: PrawnOfFate 20 April 2013 12:05:49AM -1 points [-]

Logic is a tool to make new models

Has anyone come up with a decent model by mechanically applying a logical procedure?

Comment author: Indon 18 April 2013 07:17:15PM 1 point [-]

You go back and prove it if you can - and are mathematicians special in that regard, save that they deal with concepts more easily proven than most? When scientists in any field can prove something with just logic, they do. Evidence is the tiebreaker for exclusive, equivalently proven theories, and elegance the tiebreaker for exclusive, equivalently evident theories.

And that seems true for all fields labeled either a science or a form of mathematics.

Comment author: PrawnOfFate 20 April 2013 12:04:17AM *  0 points [-]
Comment author: shminux 18 April 2013 04:05:30AM *  0 points [-]

Not mathematical models... These can be motivated by experiment, but they are not bound to make accurate predictions. If that's what you were asking...

Comment author: PrawnOfFate 19 April 2013 11:59:10PM -2 points [-]

Mathematical theories and constructs aren't bound by nature...but models? Models are there to model something, surely?

Comment author: MugaSofer 19 April 2013 11:14:29PM -2 points [-]

Don't worry, you're being pattern-matched to the nearest stereotype. Perfectly normal, although thankfully somewhat rarer on LW.

Comment author: PrawnOfFate 19 April 2013 11:52:27PM 0 points [-]

Nowhere near rare enough for super-smart super-rationalists. Not as good as bog-standard philosophers.

Comment author: MugaSofer 19 April 2013 07:09:13PM *  -2 points [-]

Well then, a universally correct solution based on axioms which can be chosen by the agents is a contradiction in and of itself.

I have not put forward an object-level ethical system, and I have explained why I do not need to. Physical realism does not imply that my physics is correct, metaethical realism does not imply that my ethics is the one true theory.

That doesn't actually answer the quoted point. Perhaps you meant to respond to this:

I presume that you take your particular ethical system (or a variant thereof) to be the one that every alien, AI and human should adopt.

... which is, in fact, refuted by your statement.

Because ethics needs to regulate behaviour -- that is its functional role -- and could not if individuals could justify any behaviour by rearranging action->goodness mappings.

... which Kawoomba believes they can, AFAICT.

Their optimally satisfying the constraints on ethical axioms arising from the functional role of ethics.

Could you unpack this a little? I think I see what you're driving at, but I'm not sure.

Comment author: PrawnOfFate 19 April 2013 09:09:40PM -1 points [-]

Perhaps you meant to respond to this:

Yes, I did, thanks.

if individuals could justify any behaviour

which Kawoomba believes they can, AFAICT.

Then what about the second half of the argument? If individuals can "ethically" justify any behaviour, then does or does not such "ethics" completely fail in its essential role of regulating behaviour? Because anyone can do anything, and conjure up a justification after the fact by shifting their "frame"? A chocolate "teapot" is no teapot, non-regulative "ethics" is no ethics...

Could you unpack this a little?

Not now.
