
Comment author: Abd 07 November 2012 01:29:14AM *  1 point

question 26 only. rot13

Frcnengryl pbafvqre gur "znva yvar," naq gur npprffbel yvarf. Va rirel genafsbezngvba, gur znva yvar ebgngrf pbhagrepybpxjvfr 45 qrterrf. Gung yrnqf gb N, Q, naq R nf cbffvovyvgvrf. Va gur gjb iregvpny genafsbezngvbaf, gur npprffbel yvarf pbaarpg gb gur raqcbvagf bs gur znva yvar. Gung yrnirf bayl N. Gurer vf nyfb n pbafvfgrapl va gur ebgngvba bs gur fznyyre yvarf, ohg vg'f zber pbzcyrk gb rkcerff. Guvf vf yvxryl abg gur orfg nafjre.
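For anyone who wants to decode the spoiler above: rot13 is its own inverse, so applying it a second time recovers the plaintext. A minimal sketch in Python using the standard codecs module (the spoiler string here is a placeholder, not the full text):

    import codecs

    # Placeholder: paste the full rot13 paragraph here.
    spoiler = "Frcnengryl pbafvqre gur \"znva yvar,\" ..."

    # rot13 is an involution, so encoding again decodes it.
    print(codecs.encode(spoiler, "rot_13"))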

In looking again at the survey, I answered no questions at all, and got an IQ score of 78. I answered A to all questions and also got a 78.

This calls the test into question, which I'd love to do. Effing test gave me a 110 when I tried! How sucky is that?

(I think it might be a good test, showing me a developing problem with speed. But obviously it doesn't work well at the low end.)

Comment author: CuSithBell 07 November 2012 03:06:55AM 0 points

I agree with your analysis, and further:

Gurer ner sbhe yvarf: gur gjb "ebgngvat" yvarf pbaarpgrq gb gur pragre qbg, naq gur gjb yvarf pbaarpgrq gb gur yrsg naq evtug qbgf. Gur pragre yvarf fgneg bhg pbaarpgrq gb gur fvqr qbgf, gura ebgngr pybpxjvfr nebhaq gur fdhner. Gur bgure yvarf NYFB ebgngr pybpxjvfr: gur yrsg bar vf pragrerq ba gur yrsg qbg, naq ebgngrf sebz gur pragre qbg qbja, yrsg, gura gb gur gbc, gura evtug, gura onpx gb gur pragre. Gur evtug yvar npgf fvzvyneyl.

Comment author: pnrjulius 12 June 2012 02:59:28AM 2 points

On the other hand... people say they hate politicians and then vote for them anyway.

So hypocrisy does have upsides, and maybe we shouldn't dismiss it so easily.

Comment author: CuSithBell 12 June 2012 03:35:00AM 0 points

On the other hand... people say they hate politicians and then vote for them anyway.

Who are they going to vote for instead?

Comment author: TimS 12 June 2012 02:06:34AM 0 points

If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method).

At this point, some detritus of evolution starts to look like a good candidate for the source of morality. There isn't an Evolution Fairy who commanded that humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or the desire for sex). More on this point here - the source of my reference to godshatter.

It might be that there is an optimal way of bringing these various drives into balance, and that the correct choices for all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.

In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior.

That said, my summary of the position may be a bit thin, because I'm a moral anti-realist and don't believe the evo. psych -> morality story.

Comment author: CuSithBell 12 June 2012 03:33:31AM 2 points

Ah, I see what you mean. I don't think one has to believe in objective morality as such to agree that "morality is the godshatter of evolution". Moreover, I think it's pretty key to the "godshatter" notion that our values have diverged from evolution's "value", and we now value things "for their own sake" rather than for their benefit to fitness. As such, I would say that the "godshatter" notion opposes the idea that "maladaptive is practically the definition of immoral", even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.

Comment author: TimS 12 June 2012 12:53:14AM 0 points

For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral. For me, maladaptive-ness is the explanation for why certain possible moral memes (insert society-wide incest-marriage example) don't exist in recorded history, even though I should otherwise expect them to exist given my belief in moral anti-realism.

Comment author: CuSithBell 12 June 2012 01:01:32AM *  1 point

For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.

Disagree? What do you mean by this?

Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.

Comment author: Will_Newsome 11 June 2012 10:07:59PM 1 point

Right. That said, wireheading, aka the grounding problem, is a huge unsolved philosophical problem, so I'm not sure Schmidhuber is obligated to answer wireheading objections to his theory.

Comment author: CuSithBell 11 June 2012 10:37:55PM 3 points

But the theory fails because this fits it but isn't wireheading, right? It wouldn't actually be pleasing to play that game.

Comment author: witzvo 09 June 2012 02:47:35AM 1 point

Serious question: is the cyborg part a joke? I can't tell around here.

Comment author: CuSithBell 09 June 2012 03:55:38AM 3 points

Fair question! I phrased it a little flippantly, but it was a sincere sentiment - I've heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy, something to do with becoming detached from the physical world, and this ties in intriguingly with the scifi trope about cyborging being dehumanizing.

Comment author: duckduckMOO 08 June 2012 06:44:19PM 0 points

Ah yes, the danger of thinking you can think for yourself.

The danger is that it avoids regression to the mean. For that reason, yes, it is the most dangerous dogma, but it also has a lot of potential. I'd trust someone like this more than I'd trust your average "agreeable" neurotypical, who can at any moment be convinced by a charismatic enough charlatan cult leader to do just about anything if the neurotypical is down on their luck. Yes, some people like this have dangerous beliefs and a dangerous tendency to act on them, but at least you can usually see them coming.

Also, what if they are free from dogma? What if they just think better than you or I do? Depending on how free they are from dogma, the danger may just be that they are excellent rationalisers. If someone who I think mostly thinks for themselves (they view every claim critically and insist on rederiving every conclusion before they believe it) tells me they are totally free from dogma and the masses are brainwashed idiots, they're probably wrong about the "totally". But, more or less, they are right. The only danger here is that you can't talk them out of things if they think you are one of the brainwashed masses, and they might be angry about most people being brainwashed.

If they are a typically dogmatic thinker, then they are really good at believing things which aren't true, which presents a whole different kind of danger. Also, they probably think of people who disagree with them as evil mutants and of themselves as noble saints.

It's not dangerous for someone who is better at thinking undogmatically than people in general to found their philosophy on this difference, or even on the overestimation of it that you propose.

Can you link the scary moment of dogma from the blog of a certain locally famous software engineer? Is it Paul Graham?

In a comment below you say "intolerance for 'blindness' or 'delusion', the insistence that there's one calculable right way to run things is culturally destructive." You sound like you are talking about something completely different. I suspect thinking they are free from dogma is simply something that people who think there's one calculable right way to run things tend to do, and you are throwing out the baby (okay, maybe a crocodile) with the bathwater. Thinking that demonstrates blindness to the facts. Thinking that one's preferences are objective pronouncements on how the world should be, in some fuzzy non-value-dependent way, demonstrates that you mistake your feelings for facts. Believing you don't do this does massively intensify the danger such people pose, but it isn't the source of the danger. And people who don't do this, or don't do it very much, or who are just not abnormally vindictive or aggressive or callous enough to come up with a right way to run things that hurts people, or who accept that their right way to run things will not be implemented, are not a danger.

Comment author: CuSithBell 08 June 2012 06:49:20PM 0 points

neurotypical

Are you using this to mean "non-autistic person", or something else?

In response to comment by CuSithBell on Fake Causality
Comment author: royf 06 June 2012 10:19:41PM *  0 points

a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn't need to value survival.

A GAI with the utility of burning itself? I don't think that's viable, no.

I'd be interested in the conclusions derived about "typical" intelligences and the "forbidden actions", but I don't see how you have derived them.

At the moment it's little more than professional intuition. We also lack some necessary shared terminology. Let's leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.

could you clarify your position, please?

I think I'm starting to see the disconnect, and we probably don't really disagree.

You said:

This sounds unjustifiably broad

My thinking is very broad but, from my perspective, not unjustifiably so. In my research I'm looking for mathematical formulations of intelligence in any form - biological or mechanical.

Taking a narrower viewpoint, humans "in their current form" are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry, the latter probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules, the latter could multiply by configuring solid state devices.

Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?

In response to comment by royf on Fake Causality
Comment author: CuSithBell 08 June 2012 02:29:32PM 0 points

a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn't need to value survival.

A GAI with the utility of burning itself? I don't think that's viable, no.

What do you mean by "viable"? You think it is impossible due to Godelian concerns for there to be an intelligence that wishes to die?

As a curiosity, this sort of intelligence came up in a discussion I was having on LW recently. Someone said "why would an AI try to maximize its original utility function, instead of switching to a different / easier function?", to which I responded "why is that the precise level at which the AI would operate, rather than either actually maximizing its utility function or deciding to hell with the whole utility thing and valuing suicide rather than maximizing functions (because it's easy)".

But anyway it can't be that Godelian reasons prevent intelligences from wanting to burn themselves, because people have burned themselves.
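To make the "which level does it operate at" point concrete, here is a toy sketch (my own illustration, with made-up action names and state, not anything royf has claimed): which action an agent picks depends entirely on what its utility function ranges over, the external world or the agent's own internals.

    # Toy illustration only: three candidate actions and two utility functions.
    def step(action, world, state):
        # Return the (world, internal state) pair that results from an action.
        if action == "work":      # pursue the original goal
            return {**world, "paperclips": world["paperclips"] + 1}, state
        if action == "wirehead":  # overwrite the internal reward register (easy)
            return world, {**state, "reward": float("inf")}
        if action == "halt":      # "value suicide": stop entirely (easiest)
            return world, {**state, "running": False}

    def best_action(utility):
        world, state = {"paperclips": 0}, {"reward": 0.0, "running": True}
        # Evaluate each action's outcome under the agent's *current* utility.
        return max(["work", "wirehead", "halt"],
                   key=lambda a: utility(*step(a, world, state)))

    # A utility defined over the world never prefers wireheading or halting:
    print(best_action(lambda w, s: w["paperclips"]))  # -> "work"
    # A utility defined over the agent's own internals does:
    print(best_action(lambda w, s: s["reward"]))      # -> "wirehead"

"Switching to an easier function" only looks attractive if whatever is doing the evaluating already scores outcomes by the new function; an agent that evaluates modifications with its current utility function just keeps working.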

I'd be interested in the conclusions derived about "typical" intelligences and the "forbidden actions", but I don't see how you have derived them.

At the moment it's little more than professional intuition. We also lack some necessary shared terminology. Let's leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.

Fair enough, though for what it's worth I have a fair background in mathematics, theoretical CS, and the like.

could you clarify your position, please?

I think I'm starting to see the disconnect, and we probably don't really disagree.

You said:

This sounds unjustifiably broad

My thinking is very broad but, from my perspective, not unjustifiably so. In my research I'm looking for mathematical formulations of intelligence in any form - biological or mechanical.

I meant that this was a broad definition of the qualitative restrictions to human self-modification, to the extent that it would be basically impossible for something to have qualitatively different restrictions.

Taking a narrower viewpoint, humans "in their current form" are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry, the latter probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules, the latter could multiply by configuring solid state devices.

Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?

Why not? Though of course it may turn out that AI is best programmed on something unlike our current computer technology.

Comment author: Nornagest 08 June 2012 09:04:25AM 8 points

It's interesting, all right, but I think it would likely be better received as a standalone Discussion post (ideally with some more context and expansion). The rationality quotes threads tend to be more for quotes directly about rationality or bias than quotes indirectly contributing to our potential understanding of the same.

Comment author: CuSithBell 08 June 2012 01:05:57PM 1 point

I think it could make a pretty interesting Discussion post, and would pair well with some discussion of how becoming a cyborg supposedly makes you less empathic.

In response to comment by CuSithBell on Poly marriage?
Comment author: [deleted] 07 June 2012 06:20:14PM *  7 points

I had exactly that as a sort of model in my brain. :)

In response to comment by [deleted] on Poly marriage?
Comment author: CuSithBell 08 June 2012 03:33:44AM 1 point

I find this quite aesthetically pleasing :D
