
I agree with your analysis, and further:

Gurer ner sbhe yvarf: gur gjb "ebgngvat" yvarf pbaarpgrq gb gur pragre qbg, naq gur gjb yvarf pbaarpgrq gb gur yrsg naq evtug qbgf. Gur pragre yvarf fgneg bhg pbaarpgrq gb gur fvqr qbgf, gura ebgngr pybpxjvfr nebhaq gur fdhner. Gur bgure yvarf NYFB ebgngr pybpxjvfr: gur yrsg bar vf pragrerq ba gur yrsg qbg, naq ebgngrf sebz gur pragre qbg qbja, yrsg, gura gb gur gbc, gura evtug, gura onpx gb gur pragre. Gur evtug yvar npgf fvzvyneyl.

On the other hand... people say they hate politicians and then vote for them anyway.

Who are they going to vote for instead?

Ah, I see what you mean. I don't think one has to believe in objective morality as such to agree that "morality is the godshatter of evolution". Moreover, I think it's pretty key to the "godshatter" notion that our values have diverged from evolution's "value", and we now value things "for their own sake" rather than for their benefit to fitness. As such, I would say that the "godshatter" notion opposes the idea that "maladaptive is practically the definition of immoral", even if there is something of a correlation between evolutionarily selectable adaptive ideas and morality.

For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.

Disagree? What do you mean by this?

Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, then when those values are maladaptive I have no reason to side with the process that (roughly) selected them rather than with the values themselves.

But the theory fails because this example fits it yet isn't wireheading, right? Playing that game wouldn't actually be pleasurable.

Fair question! I phrased it a little flippantly, but it was a sincere sentiment - I've heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy, something to do with becoming detached from the physical world, and this ties in intriguingly with the sci-fi trope about cyborging being dehumanizing.

neurotypical

Are you using this to mean "non-autistic person", or something else?

a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn't need to value survival.

A GAI with the utility of burning itself? I don't think that's viable, no.

What do you mean by "viable"? Do you think that, due to Gödelian concerns, it's impossible for there to be an intelligence that wishes to die?

As a curiosity, this sort of intelligence came up in a discussion I was having on LW recently. Someone asked "why would an AI try to maximize its original utility function, instead of switching to a different / easier function?", to which I responded "why would that be the precise level at which the AI operates, rather than either actually maximizing its original utility function, or saying to hell with the whole utility business and valuing suicide instead of maximizing functions (because that's even easier)?"

But in any case, it can't be Gödelian reasons that prevent intelligences from wanting to burn themselves, because people have burned themselves.
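
To make that three-way contrast concrete, here is a minimal sketch in Python. It is purely illustrative and not from the original exchange; the agent names and the toy "paperclip" utility are invented for the example. The only difference between the three agents is what they do with the utility function they were handed.

```python
# Toy sketch of the three "levels" discussed above: an agent that maximizes
# the utility function it was given, one that swaps in an easier function
# first, and one that drops the maximization framework entirely and halts.

def faithful_agent(utility, actions):
    # Level 1: keep the original utility function and actually maximize it.
    return max(actions, key=utility)

def lazy_agent(utility, actions):
    # Level 2: replace the given utility function with one that is trivially
    # maximized, then "maximize" that instead.
    easier = lambda action: 1
    return max(actions, key=easier)

def suicidal_agent(utility, actions):
    # Level 3: don't maximize anything; choose to stop existing, which is
    # "easier" still.
    return "halt"

if __name__ == "__main__":
    actions = ["make paperclips", "write poetry", "halt"]
    paperclip_utility = lambda action: 1 if action == "make paperclips" else 0
    print(faithful_agent(paperclip_utility, actions))   # make paperclips
    print(lazy_agent(paperclip_utility, actions))       # whichever action comes first
    print(suicidal_agent(paperclip_utility, actions))   # halt
```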

I'd be interested in the conclusions derived about "typical" intelligences and the "forbidden actions", but I don't see how you have derived them.

At the moment it's little more than professional intuition. We also lack some necessary shared terminology. Let's leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.

Fair enough, though for what it's worth I have a fair background in mathematics, theoretical CS, and the like.

could you clarify your position, please?

I think I'm starting to see the disconnect, and we probably don't really disagree.

You said:

This sounds unjustifiably broad

My thinking is very broad but, from my perspective, not unjustifiably so. In my research I'm looking for mathematical formulations of intelligence in any form - biological or mechanical.

I meant that this was such a broad definition of the qualitative restrictions on human self-modification that it would be basically impossible for anything to have qualitatively different restrictions.

Taking a narrower viewpoint, humans "in their current form" are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry, the latter probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules, the latter could multiply by configuring solid state devices.

Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?

Why not? Though of course it may turn out that AI is best programmed on something unlike our current computer technology.

I think it could make a pretty interesting Discussion post, and would pair well with some discussion of how becoming a cyborg supposedly makes you less empathic.

I find this quite aesthetically pleasing :D
